Extrapolation of Functions of Many Variables by Means of Metric Analysis
NASA Astrophysics Data System (ADS)
Kryanev, Alexandr; Ivanov, Victor; Romanova, Anastasiya; Sevastianov, Leonid; Udumyan, David
2018-02-01
The paper considers the problem of extrapolating functions of several variables. It is assumed that the values of a function of m variables are given at a finite number of points in some domain D of m-dimensional space, and the value of the function must be restored at points outside D. The paper proposes a fundamentally new extrapolation method for functions of several variables, built on the interpolation scheme of metric analysis. The scheme consists of two stages. In the first stage, metric analysis is used to interpolate the function at points of the domain D lying on the segment of the straight line that connects the center of D with the point M at which the value of the function is to be restored. In the second stage, based on an autoregression model and metric analysis, the function values are predicted along this straight-line segment beyond the domain D up to the point M. A numerical example demonstrates the efficiency of the method.
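A rough sketch of the two-stage scheme, with loudly flagged stand-ins: inverse-distance weighting replaces the paper's metric-analysis interpolation (which is not reproduced here), the autoregression coefficients are fitted by least squares, and the segment sampling and step counts are arbitrary illustrative choices:

```python
import numpy as np

def idw(points, values, q, eps=1e-12):
    """Inverse-distance-squared interpolation at query point q
    (a simple stand-in for the paper's metric-analysis interpolation)."""
    d = np.linalg.norm(points - q, axis=1)
    if d.min() < eps:
        return float(values[d.argmin()])
    w = 1.0 / d**2
    return float(w @ values / w.sum())

def extrapolate_to(points, values, M, order=2):
    """Stage 1: interpolate along the segment center -> M inside D.
    Stage 2: continue the 1-D profile past the boundary with an AR model."""
    c = points.mean(axis=0)                        # center of the domain D
    t_in = np.linspace(0.0, 0.6, 13)               # samples assumed inside D
    seg = np.array([idw(points, values, c + t * (M - c)) for t in t_in])
    # Fit AR(order) coefficients on the interior profile by least squares.
    X = np.column_stack([seg[i:len(seg) - order + i] for i in range(order)])
    a, *_ = np.linalg.lstsq(X, seg[order:], rcond=None)
    hist = list(seg)
    for _ in range(8):                             # 8 steps of 0.05 -> t = 1.0
        hist.append(float(np.dot(a, hist[-order:])))
    return hist[-1]                                # predicted value at M
```

For a constant function the scheme recovers the exact value at M; for general functions the quality depends entirely on the interpolation stand-in.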
Braberg, Hannes; Moehle, Erica A.; Shales, Michael; Guthrie, Christine; Krogan, Nevan J.
2014-01-01
We have achieved a residue-level resolution of genetic interaction mapping – a technique that measures how the function of one gene is affected by the alteration of a second gene – by analyzing point mutations. Here, we describe how to interpret point mutant genetic interactions, and outline key applications for the approach, including interrogation of protein interaction interfaces and active sites, and examination of post-translational modifications. Genetic interaction analysis has proven effective for characterizing cellular processes; however, to date, systematic high-throughput genetic interaction screens have relied on gene deletions or knockdowns, which limits the resolution of gene function analysis and poses problems for multifunctional genes. Our point mutant approach addresses these issues, and further provides a tool for in vivo structure-function analysis that complements traditional biophysical methods. We also discuss the potential for genetic interaction mapping of point mutations in human cells and its application to personalized medicine. PMID:24842270
NASA Astrophysics Data System (ADS)
Savitri, D.
2018-01-01
This article discusses a predator-prey model with anti-predator behavior in the intermediate predator, using ratio-dependent functional responses. The dynamical analysis performed on the model includes determination of the equilibrium points, their stability, and numerical simulation. Three kinds of equilibrium points are discussed, namely the prey-extinction point, the intermediate-predator-extinction point and the predator-extinction point, each of which exists under certain conditions. The numerical simulations are shown to be in accordance with the analytical results.
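The equilibrium-and-stability workflow can be sketched on a toy two-species system (purely illustrative, NOT the paper's three-species ratio-dependent model): verify a zero of the vector field, form the Jacobian numerically, and inspect eigenvalue real parts:

```python
import numpy as np

# Illustrative logistic prey / predator system, chosen only so the
# equilibria are simple; all parameters are assumptions.
def f(z):
    x, y = z                                   # prey, predator densities
    return np.array([x * (1.0 - x) - x * y,    # logistic prey with predation
                     y * (x - 0.5)])           # predator gains from prey

def jacobian(fun, z, h=1e-6):
    """Central-difference Jacobian of fun at z."""
    n = len(z)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (fun(z + e) - fun(z - e)) / (2.0 * h)
    return J

def is_stable(z):
    """Local asymptotic stability: all Jacobian eigenvalues have
    negative real part."""
    return bool(np.all(np.linalg.eigvals(jacobian(f, z)).real < 0))
```

Here the coexistence point (0.5, 0.5) is stable while the total-extinction point (0, 0) is a saddle; in the paper's three-species model the same test is applied to each extinction equilibrium.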
Second feature of the matter two-point function
NASA Astrophysics Data System (ADS)
Tansella, Vittorio
2018-05-01
We point out the existence of a second feature in the matter two-point function, besides the acoustic peak, due to the baryon-baryon correlation in the early Universe and positioned at twice the distance of the peak. We discuss how the existence of this feature is implied by the well-known heuristic argument that explains the baryon bump in the correlation function. A standard χ2 analysis to estimate the detection significance of the second feature is mimicked. We conclude that, for realistic values of the baryon density, a SKA-like galaxy survey will not be able to detect this feature with standard correlation function analysis.
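A toy mock-up of the χ² significance estimate: the "data" are a Gaussian stand-in for the acoustic peak plus a small bump at twice its distance, and the covariance is assumed diagonal; every number below is illustrative, not taken from the paper:

```python
import numpy as np

def delta_chi2(data, model_with, model_without, sigma):
    """Chi-square difference between models with/without the feature,
    assuming uncorrelated (diagonal-covariance) errors sigma."""
    chi2 = lambda m: np.sum(((data - m) / sigma) ** 2)
    return chi2(model_without) - chi2(model_with)

r = np.linspace(50.0, 350.0, 301)                       # separation grid (toy)
peak = np.exp(-0.5 * ((r - 105.0) / 10.0) ** 2)         # acoustic-peak stand-in
bump = 0.05 * np.exp(-0.5 * ((r - 210.0) / 15.0) ** 2)  # feature at 2x the peak
xi = peak + bump                                        # "observed" xi(r)
sigma = 0.02                                            # assumed error bars

# Detection significance of the bump, in sigmas, is roughly sqrt(delta chi^2).
significance = np.sqrt(delta_chi2(xi, xi, peak, sigma))
```

With correlated errors the sum would be replaced by a quadratic form with the inverse covariance matrix.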
Report on 3 and 4-point correlation statistics in the COBE DMR anisotropy maps
NASA Technical Reports Server (NTRS)
Hinshaw, Gary (Principal Investigator); Gorski, Krzysztof M.; Banday, Anthony J.; Bennett, Charles L.
1996-01-01
As part of the work performed under NASA contract #NAS5-32648, we have computed the 3-point and 4-point correlation functions of the COBE-DMR 2-year and 4-year anisotropy maps. The motivation for this study was to search for evidence of non-Gaussian statistical fluctuations in the temperature maps: skewness or asymmetry in the case of the 3-point function, kurtosis in the case of the 4-point function. Such behavior would have very significant implications for our understanding of the processes of galaxy formation, because our current models of galaxy formation predict that non-Gaussian features should not be present in the DMR maps. The results of our work showed that the 3-point correlation function is consistent with zero and that the 4-point function is not a very sensitive probe of non-Gaussian behavior in the COBE-DMR data. Our computation and analysis of 3-point correlations in the 2-year DMR maps was published in the Astrophysical Journal Letters, volume 446, page L67, 1995. Our computation and analysis of 3-point correlations in the 4-year DMR maps will be published, together with some additional tests, in the June 10, 1996 issue of the Astrophysical Journal Letters. Copies of both of these papers are attached as an appendix to this report.
Analysis of the two-point velocity correlations in turbulent boundary layer flows
NASA Technical Reports Server (NTRS)
Oberlack, M.
1995-01-01
The general objective of the present work is to explore the use of Rapid Distortion Theory (RDT) in analysis of the two-point statistics of the log-layer. RDT is applicable only to unsteady flows where the non-linear turbulence-turbulence interaction can be neglected in comparison to linear turbulence-mean interactions. Here we propose to use RDT to examine the structure of the large energy-containing scales and their interaction with the mean flow in the log-region. The contents of the work are twofold: First, two-point analysis methods will be used to derive the law-of-the-wall for the special case of zero mean pressure gradient. The basic assumptions needed are one-dimensionality in the mean flow and homogeneity of the fluctuations. It will be shown that a formal solution of the two-point correlation equation can be obtained as a power series in the von Karman constant, known to be on the order of 0.4. In the second part, a detailed analysis of the two-point correlation function in the log-layer will be given. The fundamental set of equations and a functional relation for the two-point correlation function will be derived. An asymptotic expansion procedure will be used in the log-layer to match Kolmogorov's universal range and the one-point correlations to the inviscid outer region valid for large correlation distances.
Discriminating topology in galaxy distributions using network analysis
NASA Astrophysics Data System (ADS)
Hong, Sungryong; Coutinho, Bruno C.; Dey, Arjun; Barabási, Albert-L.; Vogelsberger, Mark; Hernquist, Lars; Gebhardt, Karl
2016-07-01
The large-scale distribution of galaxies is generally analysed using the two-point correlation function. However, this statistic does not capture the topology of the distribution, and it is necessary to resort to higher order correlations to break degeneracies. We demonstrate that an alternative approach using network analysis can discriminate between topologically different distributions that have similar two-point correlations. We investigate two galaxy point distributions, one produced by a cosmological simulation and the other by a Lévy walk. For the cosmological simulation, we adopt the redshift z = 0.58 slice from Illustris and select galaxies with stellar masses greater than 10^8 M⊙. The two-point correlation function of these simulated galaxies follows a single power law, ξ(r) ∼ r^(-1.5). We then generate Lévy walks matching the correlation function and abundance of the simulated galaxies. We find that, while the two simulated galaxy point distributions have the same abundance and two-point correlation function, their spatial distributions are very different; most prominently, the simulation shows filamentary structures that are absent in Lévy fractals. To quantify these missing topologies, we adopt network analysis tools and measure diameter, giant component, and transitivity from networks built by a conventional friends-of-friends recipe with various linking lengths. Unlike the abundance and two-point correlation function, these network quantities reveal a clear separation between the two simulated distributions; therefore, the galaxy distribution simulated by Illustris is quantitatively not a Lévy fractal. We find that the described network quantities offer an efficient tool for discriminating topologies and for comparing observed and theoretical distributions.
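Two of the network statistics can be sketched in a few lines, assuming a brute-force friends-of-friends graph (fine for small point sets; the linking lengths and point sets below are illustrative only):

```python
import numpy as np
from collections import deque

def fof_adjacency(points, b):
    """Friends-of-friends graph: link every pair closer than b."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return (d < b) & ~np.eye(len(points), dtype=bool)

def transitivity(A):
    """3 x triangles / connected triples = tr(A^3) / sum_i k_i (k_i - 1)."""
    Af = A.astype(float)
    k = Af.sum(axis=1)
    triples = float(np.sum(k * (k - 1)))
    return float(np.trace(Af @ Af @ Af)) / triples if triples else 0.0

def diameter(A):
    """Longest shortest path, by BFS from every node (assumes a
    connected graph)."""
    n = len(A)
    best = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.nonzero(A[u])[0]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist))
    return best
```

A filamentary (chain-like) configuration has a large diameter and low transitivity, while a clustered one is the opposite, which is the separation the paper exploits.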
Percolation analysis for cosmic web with discrete points
NASA Astrophysics Data System (ADS)
Zhang, Jiajun; Cheng, Dalong; Chu, Ming-Chung
2016-03-01
Percolation analysis has long been used to quantify the connectivity of the cosmic web. Unlike most previous works, which used density fields on grids, we have studied percolation analysis based on discrete points. Using a Friends-of-Friends (FoF) algorithm, we generate the S-bb relation between the fractional mass of the largest connected group (S) and the FoF linking length (bb). We propose a new model, the Probability Cloud Cluster Expansion Theory (PCCET), to relate the S-bb relation with correlation functions. We show that the S-bb relation reflects a combination of all orders of correlation functions. We have studied the S-bb relation with simulations and find that it is robust against redshift distortion and incompleteness in observation. From the Bolshoi simulation, with Halo Abundance Matching (HAM), we have generated a mock galaxy catalogue. Good matching of the projected two-point correlation function with observation is confirmed. However, comparing the mock catalogue with the latest galaxy catalogue from SDSS DR12, we have found significant differences in their S-bb relations. This indicates that the mock catalogue cannot accurately recover correlation functions of order higher than the two-point function, which reveals a limit of the HAM method.
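A compact sketch of the S-bb measurement, with friends-of-friends implemented as union-find over a brute-force distance matrix (adequate for small point sets; real catalogues need tree-based neighbor search):

```python
import numpy as np

def largest_group_fraction(points, bb):
    """S for one linking length bb: mass fraction of the largest
    friends-of-friends group (equal point masses assumed)."""
    n = len(points)
    parent = list(range(n))
    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    for i, j in zip(*np.nonzero((d < bb) & (d > 0))):
        if i < j:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
    sizes = np.bincount([find(i) for i in range(n)])
    return sizes.max() / n

def s_bb_relation(points, bbs):
    """Sweep the linking length to trace out the S-bb curve."""
    return np.array([largest_group_fraction(points, bb) for bb in bbs])
```

Since raising bb only adds links, S(bb) is necessarily non-decreasing and saturates at 1 once the whole sample percolates; the shape of the rise is what carries the higher-order clustering information.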
Spatial Point Pattern Analysis of Neurons Using Ripley's K-Function in 3D
Jafari-Mamaghani, Mehrdad; Andersson, Mikael; Krieger, Patrik
2010-01-01
The aim of this paper is to apply a non-parametric statistical tool, Ripley's K-function, to analyze the 3-dimensional distribution of pyramidal neurons. Ripley's K-function is a widely used tool in spatial point pattern analysis. There are several approaches in 2D domains in which this function is executed and analyzed. Drawing consistent inferences on the underlying 3D point pattern distributions in various applications is of great importance, as the acquisition of 3D biological data now poses less of a challenge due to technological progress. To date, most applications of Ripley's K-function in 3D domains have not focused on the phenomenon of edge correction, which is discussed thoroughly in this paper. The main goal is to extend the theoretical and practical utilization of Ripley's K-function and corresponding tests based on bootstrap resampling from 2D to 3D domains. PMID:20577588
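A minimal, edge-correction-free 3D estimator shows the quantity being corrected; points near the boundary of the observation window have part of their distance-r ball outside it, so this naive form undercounts neighbors, which is exactly why the edge-corrected variants the paper develops matter:

```python
import numpy as np

def ripley_k_3d(points, r, volume):
    """Naive 3-D Ripley K estimate (no edge correction):
    K(r) = V / (n (n - 1)) * number of ordered pairs closer than r.
    For complete spatial randomness, E[K(r)] is about (4/3) pi r^3."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pairs = np.sum((d < r) & (d > 0))      # ordered pairs closer than r
    return volume * pairs / (n * (n - 1))
```

Comparing the estimate against the (4/3)πr³ Poisson reference then flags clustering (excess) or regularity (deficit) at each scale r.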
Mathematical construction and perturbation analysis of Zernike discrete orthogonal points.
Shi, Zhenguang; Sui, Yongxin; Liu, Zhenyu; Peng, Ji; Yang, Huaijiang
2012-06-20
Zernike functions are orthogonal within the unit circle, but not over discrete point sets such as CCD arrays or finite element grids; this loss of orthogonality results in reconstruction errors. By using the roots of Legendre polynomials, a set of points within the unit circle can be constructed such that the Zernike functions are discretely orthogonal over it. In addition, the location tolerances of the points are studied by perturbation analysis, and the requirements on positioning precision turn out not to be very strict. Computer simulations show that this approach provides very accurate wavefront reconstruction with the proposed sampling set.
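The role of the Legendre roots can be checked numerically for the radial part: with the substitution u = r², the Zernike radial weight r dr becomes du/2, so Gauss-Legendre nodes in u integrate products of the radial polynomials exactly. The sketch below uses the first few m = 0 radial polynomials; the node count (8) is an arbitrary illustrative choice:

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(8)  # exact to degree 15
u = 0.5 * (nodes + 1.0)        # quadrature nodes mapped to u in [0, 1]
w = 0.25 * weights             # folds in dx -> du/2 and r dr -> du/2
r = np.sqrt(u)                 # radial positions of the sampling points

R00 = np.ones_like(r)            # Zernike radial polynomial R_0^0
R20 = 2 * r**2 - 1               # R_2^0
R40 = 6 * r**4 - 6 * r**2 + 1    # R_4^0

def inner(f, g):
    """Discrete version of the radial inner product int_0^1 f g r dr."""
    return float(np.sum(w * f * g))
```

The discrete inner products reproduce the continuous orthogonality relation ∫₀¹ R_n⁰ R_n'⁰ r dr = δ_nn' / (2(n + 1)) to machine precision, which is the mechanism behind the paper's construction.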
Structural Analysis of Single-Point Mutations Given an RNA Sequence: A Case Study with RNAMute
NASA Astrophysics Data System (ADS)
Churkin, Alexander; Barash, Danny
2006-12-01
We introduce here for the first time the RNAMute package, a pattern-recognition-based utility to perform mutational analysis and detect vulnerable spots within an RNA sequence that affect structure. Mutations in these spots may lead to a structural change that directly relates to a change in functionality. Previously, the concept was tried on RNA genetic control elements called "riboswitches" and other known RNA switches, without an organized utility that analyzes all single-point mutations and can be further expanded. The RNAMute package allows a comprehensive categorization, given an RNA sequence that has functional relevance, by exploring the patterns of all single-point mutants. For illustration, we apply the RNAMute package on an RNA transcript for which individual point mutations were shown experimentally to inactivate spectinomycin resistance in Escherichia coli. Functional analysis of mutations on this case study was performed experimentally by creating a library of point mutations using PCR and screening to locate those mutations. With the availability of RNAMute, preanalysis can be performed computationally before conducting an experiment.
Kholeif, S A
2001-06-01
A new method of the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocessing step that finds first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, by an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method, using linear least-squares validation and multifactor data analysis, is presented. The method applies to both symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated by numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. End points calculated from selected experimental titration curves are also compared with equivalence-point methods such as those of Gran or Fortuin.
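The final locating step can be sketched as follows: differentiate the curve numerically, then place a parabola through the three derivative points around the peak and take its vertex as the end point. This is a simplified stand-in; the paper's four-point non-linear preprocessing fit is not reproduced here:

```python
import numpy as np

def end_point(volume, emf):
    """Estimate the titration end point as the vertex of a parabola
    fitted to the derivative peak (assumes the peak is interior)."""
    dE = np.gradient(emf, volume)        # first derivative dE/dV
    k = int(np.argmax(dE))
    x = volume[k - 1:k + 2]              # three points around the maximum
    y = dE[k - 1:k + 2]
    a, b, _ = np.polyfit(x, y, 2)        # parabola y = a x^2 + b x + c
    return -b / (2.0 * a)                # vertex = estimated end point
```

The vertex formula is the "analytical solution" aspect: no iterative search is needed once the three bracketing points are chosen.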
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giannantonio, T.; et al.
Optical imaging surveys measure both the galaxy density and the gravitational lensing-induced shear fields across the sky. Recently, the Dark Energy Survey (DES) collaboration used a joint fit to two-point correlations between these observables to place tight constraints on cosmology (DES Collaboration et al. 2017). In this work, we develop the methodology to extend the DES Collaboration et al. (2017) analysis to include cross-correlations of the optical survey observables with gravitational lensing of the cosmic microwave background (CMB) as measured by the South Pole Telescope (SPT) and Planck. Using simulated analyses, we show how the resulting set of five two-point functions increases the robustness of the cosmological constraints to systematic errors in galaxy lensing shear calibration. Additionally, we show that contamination of the SPT+Planck CMB lensing map by the thermal Sunyaev-Zel'dovich effect is a potentially large source of systematic error for two-point function analyses, but show that it can be reduced to acceptable levels in our analysis by masking clusters of galaxies and imposing angular scale cuts on the two-point functions. The methodology developed here will be applied to the analysis of data from the DES, the SPT, and Planck in a companion work.
Sato, Atsushi; Okuda, Yutaka; Fujita, Takaaki; Kimura, Norihiko; Hoshina, Noriyuki; Kato, Sayaka; Tanaka, Shigenari
2016-01-01
This study aimed to clarify which cognitive and physical factors are associated with the need for toileting assistance in stroke patients, and to calculate cut-off values for discriminating between independent-supervision and dependent toileting ability. This cross-sectional study included 163 first-stroke patients in nine convalescent rehabilitation wards. Based on their FIM® instrument score for toileting, the patients were divided into an independent-supervision group and a dependent group. Multiple logistic regression analysis and receiver operating characteristic (ROC) analysis were performed to identify factors related to toileting performance. The Mini-Mental State Examination (MMSE); the Stroke Impairment Assessment Set (SIAS) scores for affected lower limb, speech, and visuospatial functions; and the Functional Assessment for Control of Trunk (FACT) were analyzed as independent variables. The multiple logistic regression analysis showed that the FIM® instrument score for toileting was associated with the SIAS score for affected lower limb function, the MMSE, and the FACT. On ROC analysis, the cut-off value was 8/7 points for the SIAS affected lower limb function score, 25/24 points for the MMSE, and 14/13 points for the FACT. Affected lower limb function, cognitive function, and trunk function were related to the need for toileting assistance. These cut-off values may be useful for judging whether toileting assistance is needed in stroke patients.
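Cut-off values of this kind are commonly chosen by maximizing Youden's J statistic along the ROC curve; the sketch below uses synthetic scores, not the study's data, and assumes higher scores indicate independence:

```python
import numpy as np

def best_cutoff(scores, is_independent):
    """Return the threshold maximizing Youden's J = sensitivity +
    specificity - 1, scanning every observed score as a candidate."""
    scores = np.asarray(scores, dtype=float)
    y = np.asarray(is_independent, dtype=bool)
    best, best_j = None, -np.inf
    for c in np.unique(scores):
        sens = np.mean(scores[y] >= c)       # true-positive rate at c
        spec = np.mean(scores[~y] < c)       # true-negative rate at c
        j = sens + spec - 1.0
        if j > best_j:
            best, best_j = c, j
    return best, best_j
```

With perfectly separated groups J reaches 1 at the boundary between them; reporting the pair of adjacent scores (e.g. "8/7 points") names the last dependent and first independent values around that threshold.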
Peculiar velocity effect on galaxy correlation functions in nonlinear clustering regime
NASA Astrophysics Data System (ADS)
Matsubara, Takahiko
1994-03-01
We studied the distortion of the apparent distribution of galaxies in redshift space contaminated by the peculiar velocity effect. Specifically, we obtained expressions for the N-point correlation functions in redshift space for a given functional form of the velocity distribution f(v), and evaluated the two- and three-point correlation functions quantitatively. The effect of velocity correlations is also discussed. When the two-point correlation function in real space has a power-law form, ξ_r(r) ∝ r^(-γ), the redshift-space counterpart on small scales also has a power-law form but with an increased power-law index: ξ_s(s) ∝ s^(1-γ). When the three-point correlation function has the hierarchical form and the two-point correlation function has the power-law form in real space, the hierarchical form of the three-point correlation function is almost preserved in redshift space. The above analytic results are compared with a direct analysis based on N-body simulation data for cold dark matter models. Implications for the hierarchical clustering ansatz are discussed in detail.
Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde
2016-12-01
In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5^n equilibrium points located in ℝ^n, and 3^n of them are locally μ-stable. As a direct application, some criteria are also obtained on the multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with the activation functions introduced in this paper can generate greater storage capacity than the ones with Mexican-hat-type activation functions. Numerical simulations are presented to substantiate the theoretical results.
Alien calculus and a Schwinger-Dyson equation: two-point function with a nonperturbative mass scale
NASA Astrophysics Data System (ADS)
Bellon, Marc P.; Clavier, Pierre J.
2018-02-01
Starting from the Schwinger-Dyson equation and the renormalization group equation for the massless Wess-Zumino model, we compute the dominant nonperturbative contributions to the anomalous dimension of the theory, which are related by alien calculus to singularities of the Borel transform at integer points. The sum of these dominant contributions has an analytic expression. When applied to the two-point function, this analysis gives a tame evolution in the deep Euclidean domain at this approximation level, casting doubt on the arguments for the triviality of a quantum field theory with positive β-function. On the other hand, we find a singularity of the propagator for timelike momenta of the order of the renormalization group invariant scale of the theory, which has a nonperturbative relationship with the renormalization point of the theory. None of these results seem to have an interpretation in terms of a semiclassical analysis of a Feynman path integral.
Analysis of data from NASA B-57B gust gradient program
NASA Technical Reports Server (NTRS)
Frost, W.; Lin, M. C.; Chang, H. P.; Ringnes, E.
1985-01-01
Statistical analysis of the turbulence measured in flight 6 of the NASA B-57B over Denver, Colorado, from July 7 to July 23, 1982, included calculations of average turbulence parameters, integral length scales, probability density functions, single-point autocorrelation coefficients, two-point autocorrelation coefficients, normalized autospectra, normalized two-point autospectra, and two-point cross spectra for gust velocities. The single-point autocorrelation coefficients were compared with the theoretical model developed by von Kármán. Theoretical analyses were developed which address the effects of spanwise gust distributions, using two-point spatial turbulence correlations.
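The basic single-point statistics can be sketched as below: the autocorrelation coefficient of a gust-velocity record, and an integral scale obtained by integrating it up to its first zero crossing. The comparison against the von Kármán model itself is omitted, since that form involves modified Bessel functions:

```python
import numpy as np

def autocorr(u, max_lag):
    """rho[k] = <u'(t) u'(t+k)> / <u'^2> for lags 0 .. max_lag - 1."""
    u = u - u.mean()
    var = np.mean(u * u)
    n = len(u)
    return np.array([np.mean(u[:n - k] * u[k:]) / var
                     for k in range(max_lag)])

def integral_scale(rho, dt):
    """Riemann-sum integral of rho up to its first non-positive value,
    a common practical definition of the integral (time) scale."""
    neg = np.nonzero(rho <= 0)[0]
    stop = int(neg[0]) if len(neg) else len(rho)
    return float(dt * rho[:stop].sum())
```

Multiplying the integral time scale by the mean airspeed (Taylor's hypothesis) converts it into the integral length scale reported in such analyses.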
Rogers, Geoffrey
2018-06-01
The Yule-Nielsen effect is an influence on halftone color caused by the diffusion of light within the paper upon which the halftone ink is printed. The diffusion can be characterized by a point spread function. In this paper, a point spread function for paper is derived using the multiple-path model of reflection. This model treats the interaction of light with turbid media as a random walk. Using the multiple-path point spread function, a general expression is derived for the average reflectance of light from a frequency-modulated halftone, in which dot size is constant and the number of dots is varied, with the arrangement of dots random. It is also shown that the line spread function derived from the multiple-path model has the form of a Lorentzian function.
Kordaß, Bernd; Ruge, Sebastian
2015-01-01
Analysis of temporomandibular joint (TMJ) function using condylar path tracings is a challenge in functionally oriented dentistry. In most cases, reference points on the skin surface over the TMJ region are defined as "arbitrary", "individual" or "kinematic" condylar hinge axis points, which are displayed as "condylar paths" in motion. To what extent these reference points represent the actual condylar paths in each individual patient is ultimately unclear, because the geometric relationship of the actual condyle to the selected reference point is usually unknown. Depending on the location of the point on the condyle and the centers of rotation of mandibular movement, these trajectories can vary greatly during combined rotational and sliding movements (e.g., opening and closing movements of the mandible); this represents a grid of points located in the vicinity of the TMJ. To record the actual condylar path as the movement trajectory of a given point (e.g., the condylar center), technological solutions are needed that link the tracing technology with imaging technology capable of scanning the condyle, including the points of interest, and displaying them in real dynamic motion. Sicat Function (Sicat, Bonn, Germany) is such a solution. Sicat Function links cone beam computed tomography (CBCT) scans (made using the Galileos CBCT scanner; Sirona, Bensheim, Germany) with ultrasound-based, three-dimensional (3D) functional jaw movement recordings of the mandible (made using the JMT+ Jaw Motion Tracker; Sicat, Bonn, Germany). Digital images of the dental arches acquired with the Cerec intraoral scanner (Sirona) can also be superimposed. This results in the generation of a 3D model of the bony mandible, including the TMJ, which reproduces the 3D real dynamic movement of the condyles simultaneously with that of the condylar paths at defined points (with the condylar centers being a particular point of interest).
Sicat Function is an integrated, digital 3D solution for additional instrumental and imaging diagnosis of temporomandibular joint dysfunction (TMD). The primary indication for Sicat Function is persistent, arthrogenic TMD complaints that require additional studies for evaluation of bony structural components of the TMJ.
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Mckissick, B. T.; Steinmetz, G. G.
1979-01-01
A recent modification of the methodology of profile analysis is discussed, which allows testing for differences between two functions as a whole with a single test, rather than point by point with multiple tests. The modification is applied to the examination of motion/no-motion conditions as shown by the lateral deviation curve as a function of engine-cut speed in a piloted 737-100 simulator. The results of this application are presented along with those of more conventional statistical test procedures on the same simulator data.
Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver; Keranen, Soile V.E.; Biggin, Mark
Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes PointCloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and MATLAB via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even skipped using a genetic algorithm implemented in MATLAB and integrated into PCX via our MATLAB interface.
Jin, Xin; Liu, Li; Chen, Yanqin; Dai, Qionghai
2017-05-01
This paper derives a mathematical point spread function (PSF) and a depth-invariant focal sweep point spread function (FSPSF) for plenoptic camera 2.0. The derivation of the PSF is based on the Fresnel diffraction equation and an image-formation analysis of a self-built imaging system, which is divided into two sub-systems to reflect the relay imaging properties of plenoptic camera 2.0. The variations in the PSF caused by changes in object depth and sensor position are analyzed. A mathematical model of the FSPSF is further derived and verified to be depth-invariant. Experiments on real imaging systems demonstrate the consistency between the proposed PSF and the actual imaging results.
Accuracy analysis of pointing control system of solar power station
NASA Technical Reports Server (NTRS)
Hung, J. C.; Peebles, P. Z., Jr.
1978-01-01
The first-phase effort concentrated on defining the minimum basic functions that the retrodirective array must perform, identifying circuits that are capable of satisfying the basic functions, and looking at some of the error sources in the system and how they affect accuracy. The initial effort also examined three methods for generating torques for mechanical antenna control, performed a rough analysis of the flexible body characteristics of the solar collector, and defined a control system configuration for mechanical pointing control of the array.
Percolation analysis for cosmic web with discrete points
NASA Astrophysics Data System (ADS)
Zhang, Jiajun; Cheng, Dalong; Chu, Ming-Chung
2018-01-01
Percolation analysis has long been used to quantify the connectivity of the cosmic web. Most of the previous work is based on density fields on grids. By smoothing into fields, we lose information about galaxy properties like shape or luminosity, and the lack of mathematical modeling also limits our understanding of the percolation analysis. To overcome these difficulties, we have studied percolation analysis based on discrete points. Using a friends-of-friends (FoF) algorithm, we generate the S-bb relation between the fractional mass of the largest connected group (S) and the FoF linking length (bb). We propose a new model, the probability cloud cluster expansion theory, to relate the S-bb relation with correlation functions. We show that the S-bb relation reflects a combination of all orders of correlation functions. Using N-body simulations, we find that the S-bb relation is robust against redshift distortion and incompleteness in observation. From the Bolshoi simulation, with halo abundance matching (HAM), we have generated a mock galaxy catalog. Good matching of the projected two-point correlation function with observation is confirmed. However, comparing the mock catalog with the latest galaxy catalog from the Sloan Digital Sky Survey (SDSS) Data Release 12 (DR12), we have found significant differences in their S-bb relations. This indicates that the mock galaxy catalog cannot accurately retain correlation functions of order higher than the two-point function, which reveals the limit of the HAM method. As a new measurement, the S-bb relation is applicable to a wide range of data types, fast to compute, robust against redshift distortion and incompleteness, and contains information on all orders of correlation functions.
A free boundary approach to the Rosensweig instability of ferrofluids
NASA Astrophysics Data System (ADS)
Parini, Enea; Stylianou, Athanasios
2018-04-01
We establish the existence of saddle points for a free boundary problem describing the two-dimensional free surface of a ferrofluid undergoing normal field instability. The starting point is the ferrohydrostatic equations for the magnetic potentials in the ferrofluid and air, and the function describing their interface. These constitute the strong form for the Euler-Lagrange equations of a convex-concave functional, which we extend to include interfaces that are not necessarily graphs of functions. Saddle points are then found by iterating the direct method of the calculus of variations and applying classical results of convex analysis. For the existence part, we assume a general nonlinear magnetization law; for a linear law, we also show, via convex duality, that the saddle point is a constrained minimizer of the relevant energy functional.
Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease
Jie, Biao; Liu, Mingxia; Liu, Jun
2016-01-01
Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. Actually, multiple time-points of data are often available in brain imaging applications, which can be used in some longitudinal analysis methods to better uncover the disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aiming for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model by using the imaging data from multiple time-points, where a group regularization term is first employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function, i.e., one fused smoothness term which requires that the differences between two successive weight vectors from adjacent time-points should be small, and another output smoothness term which requires the differences between outputs of two successive models from adjacent time-points should also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
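The objective described above might be evaluated as in the sketch below; the variable names, the squared form of the smoothness penalties, and the relative weighting of the three regularization terms are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def objective(Xs, ys, W, lam_group, lam_fused, lam_out):
    """Longitudinal objective sketch.
    Xs, ys: per-time-point design matrices and targets.
    W: weight matrix, one column per time-point, one row per brain region.
    Terms: data fit + l2,1 group penalty over regions + fused smoothness
    between adjacent weight vectors + output smoothness between adjacent
    model outputs."""
    T = len(Xs)
    fit = sum(np.sum((ys[t] - Xs[t] @ W[:, t]) ** 2) for t in range(T))
    group = np.sum(np.linalg.norm(W, axis=1))          # l2,1 over rows
    fused = sum(np.sum((W[:, t + 1] - W[:, t]) ** 2)
                for t in range(T - 1))
    out = sum(np.sum((Xs[t] @ (W[:, t + 1] - W[:, t])) ** 2)
              for t in range(T - 1))
    return fit + lam_group * group + lam_fused * fused + lam_out * out
```

The group term couples all time-points of one brain region, so a region is selected or discarded jointly across the longitudinal scans, which is the mechanism behind the biomarker-discovery claim.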
Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde
2015-11-01
The problem of coexistence and dynamical behaviors of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5^n equilibrium points, among which 3^n equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than the ones with Mexican-hat-type activation function. In addition, unlike most existing multistability results of neural networks with monotonic activation functions, those obtained 3^n locally stable equilibrium points are located both in saturated regions and unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
Estimation of the auto frequency response function at unexcited points using dummy masses
NASA Astrophysics Data System (ADS)
Hosoya, Naoki; Yaginuma, Shinji; Onodera, Hiroshi; Yoshimura, Takuya
2015-02-01
If structures with complex shapes have space limitations, vibration tests using an exciter or impact hammer for the excitation are difficult. Although measuring the auto frequency response function at an unexcited point may not be practical via a vibration test, it can be obtained by assuming that the inertia acting on a dummy mass is an external force on the target structure upon exciting a different excitation point. We propose a method to estimate the auto frequency response functions at unexcited points by attaching a small mass (dummy mass), which is comparable to the accelerometer mass. The validity of the proposed method is demonstrated by comparing the auto frequency response functions estimated at unexcited points in a beam structure to those obtained from numerical simulations. We also consider random measurement errors by finite element analysis and vibration tests, but not bias errors. Additionally, the applicability of the proposed method is demonstrated by applying it to estimate the auto frequency response function of the lower arm in a car suspension.
Chen, Szu-Chia; Lin, Tsung-Hsien; Hsu, Po-Chao; Chang, Jer-Ming; Lee, Chee-Siong; Tsai, Wei-Chung; Su, Ho-Ming; Voon, Wen-Chol; Chen, Hung-Chun
2011-09-01
Heart failure and increased arterial stiffness are associated with declining renal function. Few studies have evaluated the association between left ventricular ejection fraction (LVEF) and brachial-ankle pulse-wave velocity (baPWV) and renal function progression. The aim of this study was to assess whether LVEF<40% and baPWV are associated with a decline in the estimated glomerular filtration rate (eGFR) and the progression to a renal end point of ≥25% decline in eGFR. This longitudinal study included 167 patients. The baPWV was measured with an ankle-brachial index-form device. The change in renal function was estimated by eGFR slope. The renal end point was defined as ≥25% decline in eGFR. Clinical and echocardiographic parameters were compared and analyzed. After a multivariate analysis, serum hematocrit was positively associated with eGFR slope, and diabetes mellitus, baPWV (P=0.031) and LVEF<40% (P=0.001) were negatively associated with eGFR slope. Forty patients reached the renal end point. Multivariate, forward Cox regression analysis found that lower serum albumin and hematocrit levels, higher triglyceride levels, higher baPWV (P=0.039) and LVEF<40% (P<0.001) were independently associated with progression to the renal end point. Our results show that LVEF<40% and increased baPWV are independently associated with renal function decline and progression to the renal end point.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanno, Shoichi; Matsuo, Yutaka; Shiba, Shotaro
We give some evidence for the Alday-Gaiotto-Tachikawa-Wyllard relation between SU(3) quiver gauge theories and A_2 Toda theory. In particular, we derive the explicit form of 5-point correlation functions in the lower orders and confirm the agreement with Nekrasov's partition function for SU(3)xSU(3) quiver gauge theory. The algorithm to derive the correlation functions can be applied to a general n-point function in A_2 Toda theory, which will be useful for establishing the relation for more generic quivers. Partial analysis is also given for the SU(3)xSU(2) case, and we comment on some technical issues that need clarification before establishing the relation.
NASA Technical Reports Server (NTRS)
Farassat, F.; Baty, R. S.
2000-01-01
The study of the shock structure in a viscous heat-conducting fluid is an old problem. We study this problem from a novel mathematical point of view. A new class of generalized functions is defined in which multiplication of any two functions is allowed, with the usual properties. A Heaviside function in this class has its unit jump occurring on an infinitesimal interval of nonstandard analysis (NSA) in the halo of the jump point. The jump has a smooth microstructure over that infinitesimal interval. From this point of view, we have a new class of Heaviside functions, and their derivatives, the Dirac delta functions, which are equivalent when viewed as continuous linear functionals over the test function space of Schwartz. However, they differ in their microstructures, which in applications are determined from the physics of the problem, as shown in our presentation.
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
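PRECiSA computes certified symbolic bounds; as a toy illustration of the kind of guarantee involved, the sketch below bounds the round-off error of left-to-right summation using the standard rounding model (Higham's gamma_k bound) and checks it against the exact error computed with rational arithmetic. The example and the bound are ours, not PRECiSA's semantics.

```python
from fractions import Fraction

U = 2.0 ** -53  # unit roundoff for IEEE-754 binary64

def sum_error_bound(xs):
    """Rigorous a-priori bound for left-to-right float summation:
    |fl(sum) - sum| <= gamma_{n-1} * sum(|x_i|),  gamma_k = k*U / (1 - k*U)."""
    k = len(xs) - 1
    return k * U / (1 - k * U) * sum(abs(x) for x in xs)

def actual_error(xs):
    """Exact error of float summation, via rational arithmetic
    (every binary64 value converts to Fraction exactly)."""
    exact = sum(Fraction(x) for x in xs)
    approx = 0.0
    for x in xs:
        approx += x  # ordinary float additions, each one rounded
    return abs(Fraction(approx) - exact)

# catastrophic-cancellation example: the error is large but still bounded
xs = [0.1, 0.2, 0.3, 1e16, -1e16]
assert actual_error(xs) <= sum_error_bound(xs)
```

The point of tools like PRECiSA is that such bounds come with machine-checkable proof certificates rather than hand arguments.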
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. 
The accuracy and efficiency of the approximations make the search process quite practical for analysis-intensive approaches such as finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations, including the higher-order reliability methods (HORM), for representing the failure surface. This report is divided into several parts to emphasize different segments of structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter 1 discusses the fundamental definitions of probability theory, which are mostly available in standard text books. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for general application in engineering analysis. Various forms of function representation and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definitions of the safety index and the most probable point of failure are introduced. Efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the prediction of the probability of failure is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes to improve structural reliability. The report also contains several appendices on probability parameters.
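As a concrete illustration of the FORM machinery the report describes — safety index, MPP search, linearization at the design point — here is a minimal sketch of the classic Hasofer-Lind/Rackwitz-Fiessler iteration in standard normal space. This is the generic textbook scheme, not code from the report; the finite-difference gradient and the test limit state are our assumptions.

```python
import numpy as np

def form_hlrf(g, u0, tol=1e-8, max_iter=100, h=1e-6):
    """Hasofer-Lind / Rackwitz-Fiessler iteration for the FORM safety index.

    g  : limit-state function in standard normal space (failure when g < 0)
    u0 : starting point
    Returns (beta, u*), where u* is the most probable failure point (MPP)
    and beta = ||u*|| is the safety index.
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gu = g(u)
        # forward-difference gradient of the limit state
        grad = np.array([(g(u + h * e) - gu) / h for e in np.eye(len(u))])
        # HL-RF update: project onto the linearized limit state
        u_new = (grad @ u - gu) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return float(np.linalg.norm(u)), u

# linear limit state g(u) = 3 - u1 - u2: exact beta = 3 / sqrt(2)
beta, mpp = form_hlrf(lambda u: 3 - u[0] - u[1], [0.0, 0.0])
```

For a linear limit state the iteration converges in one step, which makes it a handy unit test before applying the scheme to expensive finite element models.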
NASA Astrophysics Data System (ADS)
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
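To make the notion of fractional moments concrete, the sketch below estimates M_alpha = E[G^alpha] for a positive performance function by plain Monte Carlo — standing in for the paper's rotational quasi-symmetric point method, whose point-set construction is not reproduced here — and checks it against a closed-form case.

```python
import numpy as np

def fractional_moments(g, sampler, alphas, n=100_000, seed=0):
    """Estimate fractional moments M_a = E[G^a] of a positive performance
    function G = g(X).  The paper evaluates these with RQ-SPM points;
    plain Monte Carlo stands in here for illustration.
    """
    rng = np.random.default_rng(seed)
    gx = g(sampler(rng, n))
    assert np.all(gx > 0), "fractional moments require a positive G"
    return {a: float(np.mean(gx ** a)) for a in alphas}

# toy case: G = exp(Z), Z ~ N(0,1); exactly E[G^a] = exp(a**2 / 2)
mom = fractional_moments(np.exp,
                         lambda rng, n: rng.standard_normal(n),
                         alphas=[0.5, 1.0])
```

In the paper these moments become the constraints of a maximum entropy problem whose solution is the performance function's PDF; the probability of failure then follows from a simple integral over that PDF.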
Electronic spectrum of trilayer graphene
NASA Astrophysics Data System (ADS)
Kumar, S.; Ajay
2014-08-01
The present work deals with the analysis of the single-particle electronic spectral function in trilayer (ABC-, ABA- and AAA-stacked) graphene. A tight-binding Hamiltonian containing intralayer nearest-neighbor and next-nearest-neighbor hopping, along with the interlayer coupling parameter, within a two-triangular-sublattice approach for trilayer graphene has been employed. The expression for the single-particle spectral function A(k, ω) is obtained within the mean-field Green's function equations-of-motion approach. The spectral function at the Γ, M and K points of the Brillouin zone has been numerically computed. It is pointed out that the nature of the electronic states at different points of the Brillouin zone is influenced by stacking order and Coulomb interactions. At the Γ and M points, a trilayer splitting is predicted, while at the K point a bilayer splitting effect is observed due to the crossing of two bands at the K point. The interlayer coupling (t⊥) is found to be responsible for the splitting of the quasi-particle peaks at each point of the Brillouin zone. The influence of t⊥ in trilayer graphene is more prominent for AAA-stacking than for ABC- and ABA-stacking. On the other hand, the onsite Coulomb interaction reduces the trilayer splitting to a bilayer splitting at the Γ and M points of the Brillouin zone, and the bilayer splitting to a single-peak spectral function at the K point, with a shifting of the peak away from the Fermi level.
NASA Technical Reports Server (NTRS)
Avila, Edwin M. Martinez; Muniz, Ricardo; Szafran, Jamie; Dalton, Adam
2011-01-01
Lines of code (LOC) analysis is one of the methods used to measure programmer productivity and estimate schedules of programming projects. The Launch Control System (LCS) had previously used this method to estimate the amount of work and to plan development efforts. The disadvantage of using LOC as a measure of effort is that only 30% to 35% of the total effort of software projects involves coding [8]. Because of this disadvantage, Jamie Szafran of the System Software Branch of Control And Data Systems (NE-C3) at Kennedy Space Center developed a web application called Function Point Analysis (FPA) Depot, which uses function points instead of LOC for a better estimate of the hours needed to develop each piece of software. The objective of this web application is to let the LCS software architecture team use the data to more accurately estimate the effort required to implement customer requirements. This paper describes the evolution of the domain model used for function point analysis as project managers continually strive to generate more accurate estimates.
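For readers unfamiliar with function point analysis, an unadjusted function point count weights five component types by complexity. The sketch below uses the standard IFPUG textbook weights, which are an assumption here — the FPA Depot's own calibration is not described in the abstract.

```python
# Standard IFPUG complexity weights (assumed; a tool may calibrate its own).
WEIGHTS = {
    "EI":  {"low": 3, "avg": 4, "high": 6},    # external inputs
    "EO":  {"low": 4, "avg": 5, "high": 7},    # external outputs
    "EQ":  {"low": 3, "avg": 4, "high": 6},    # external inquiries
    "ILF": {"low": 7, "avg": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "avg": 7, "high": 10},   # external interface files
}

def unadjusted_function_points(counts):
    """counts maps (component type, complexity) -> number of occurrences,
    e.g. {("EI", "low"): 4, ("ILF", "avg"): 2}."""
    return sum(WEIGHTS[t][c] * n for (t, c), n in counts.items())

# a small hypothetical system: 4 simple inputs, 3 average outputs, 2 files
ufp = unadjusted_function_points({("EI", "low"): 4,
                                  ("EO", "avg"): 3,
                                  ("ILF", "avg"): 2})
```

The unadjusted count is then typically scaled by a value adjustment factor and multiplied by an hours-per-function-point rate to obtain an effort estimate.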
The point-spread function of fiber-coupled area detectors
Holton, James M.; Nielsen, Chris; Frankel, Kenneth A.
2012-01-01
The point-spread function (PSF) of a fiber-optic taper-coupled CCD area detector was measured over five decades of intensity using a 20 µm X-ray beam and ∼2000-fold averaging. The ‘tails’ of the PSF clearly revealed that it is neither Gaussian nor Lorentzian, but instead resembles the solid angle subtended by a pixel at a point source of light held a small distance (∼27 µm) above the pixel plane. This converges to an inverse cube law far from the beam impact point. Further analysis revealed that the tails are dominated by the fiber-optic taper, with negligible contribution from the phosphor, suggesting that the PSF of all fiber-coupled CCD-type detectors is best described as a Moffat function. PMID:23093762
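The solid-angle PSF model described above can be written down directly. The sketch below uses our own parameterization with the ~27 µm height quoted in the abstract, and checks the inverse-cube behaviour far from the beam impact point.

```python
import math

def psf_solid_angle(r, h=27e-6):
    """Solid-angle PSF model from the abstract: intensity proportional to
    the solid angle subtended by a pixel at a point source held a height h
    (~27 um) above the pixel plane, i.e. p(r) ~ h / (r**2 + h**2)**1.5.
    Far from the beam impact point (r >> h) this falls off as r**-3 —
    a Moffat profile with beta = 3/2, up to normalization."""
    return h / (r ** 2 + h ** 2) ** 1.5

# inverse-cube check: doubling r far from the center divides p by ~8
r = 1e-3  # 1 mm >> 27 um
ratio = psf_solid_angle(2 * r) / psf_solid_angle(r)
```

Near the center (r comparable to h) the profile flattens instead of diverging, which is what distinguishes it from a pure power law.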
NASA Technical Reports Server (NTRS)
Salomonson, V. V.; Nickeson, J. E.; Bodechtel, J.; Zilger, J.
1988-01-01
Point-spread function (PSF) comparisons were made between the Modular Optoelectronic Multispectral Scanner (MOMS-01), the LANDSAT Thematic Mapper (TM) and the SPOT-HRV instruments, principally near Lake Nakuru, Kenya. The results, expressed in terms of the width of the point-spread functions at the 50 percent power points as determined from the in-scene analysis, show that the TM has a PSF equal to or narrower than that of the MOMS-01 instrument (50 to 55 for the TM versus 50 to 68 for the MOMS). The SPOT estimates of the PSF range from 36 to 40. When the MOMS results are adjusted for differences in edge scanning relative to the TM and SPOT, they are nearer 40 in the 575 to 625 nm band.
JPL-ANTOPT antenna structure optimization program
NASA Technical Reports Server (NTRS)
Strain, D. M.
1994-01-01
New antenna path-length-error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for static loading cases. These scalar displacements can be subjected to constraints during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.
Lung function in type 2 diabetes: the Normative Aging Study.
Litonjua, Augusto A; Lazarus, Ross; Sparrow, David; Demolles, Debbie; Weiss, Scott T
2005-12-01
Cross-sectional studies have noted that subjects with diabetes have lower lung function than non-diabetic subjects. We conducted this analysis to determine whether diabetic subjects have different rates of lung function change compared with non-diabetic subjects. We conducted a nested case-control analysis in 352 men who developed diabetes and 352 non-diabetic subjects in a longitudinal observational study of aging in men. We assessed lung function among cases and controls at three time points: Time0, prior to meeting the definition of diabetes; Time1, the point when the definition of diabetes was met; and Time2, the most recent follow-up exam. Cases had lower forced expiratory volume in 1s (FEV1) and forced vital capacity (FVC) at all time points, even with adjustment for age, height, weight, and smoking. In multiple linear regression models adjusting for relevant covariates, there were no differences in rates of FEV1 or FVC change over time between cases and controls. Men who are predisposed to develop diabetes have decreased lung function many years prior to the diagnosis, compared with men who do not develop diabetes. This decrement in lung function remains after the development of diabetes. We postulate that mechanisms involved in the insulin resistant state contribute to the diminished lung function observed in our subjects.
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared with the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the number of exact analyses, which involve computationally expensive finite element analysis.
Sensor Authentication: Embedded Processor Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svoboda, John
2012-09-25
Described is the C code running on the embedded Microchip 32-bit PIC32MX575F256H located on the INL-developed noise analysis circuit board. The code performs the following functions: controls the noise analysis circuit board preamplifier voltage gains of 1, 10, 100, and 1000; initializes the analog-to-digital conversion hardware, input channel selection, Fast Fourier Transform (FFT) function, USB communications interface, and internal memory allocations; initiates high-resolution 4096-point 200 kHz data acquisition; computes the complex 2048-point FFT and FFT magnitude; services the host command set; transfers raw data to the host; transfers the FFT result to the host; and performs communication error checking.
A Student's Construction of Transformations of Functions in a Multiple Representational Environment.
ERIC Educational Resources Information Center
Borba, Marcelo C.; Confrey, Jere
1996-01-01
Reports on a case study of a 16-year-old student working on transformations of functions in a computer-based, multirepresentational environment. Presents an analysis of the work during the transition from the use of visualization and analysis of discrete points to the use of algebraic symbolism. (AIM)
Pressure Points in Reading Comprehension: A Quantile Multiple Regression Analysis
ERIC Educational Resources Information Center
Logan, Jessica
2017-01-01
The goal of this study was to examine how selected pressure points or areas of vulnerability are related to individual differences in reading comprehension and whether the importance of these pressure points varies as a function of the level of children's reading comprehension. A sample of 245 third-grade children were given an assessment battery…
Local linear regression for function learning: an analysis based on sample discrepancy.
Cervellera, Cristiano; Macciò, Danilo
2014-11-01
Local linear regression models, a kind of nonparametric structure that locally performs a linear estimation of the target function, are analyzed in the context of empirical risk minimization (ERM) for function learning. The analysis is carried out with emphasis on the geometric properties of the available data. In particular, the discrepancy of the observation points used both to build the local regression models and to compute the empirical risk is considered. This makes it possible to treat indifferently the case in which the samples come from a random external source and the one in which the input space can be freely explored. Both the consistency of the ERM procedure and the approximating capabilities of the estimator are analyzed, proving conditions that ensure convergence. Since the theoretical analysis shows that the estimation improves as the discrepancy of the observation points becomes smaller, low-discrepancy sequences, a family of sampling methods commonly employed for efficient numerical integration, are also analyzed. Simulation results involving two different examples of function learning are provided.
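A minimal sketch of the local linear estimator analyzed above: a weighted least-squares fit of an intercept and slope around each query point, with the intercept as the prediction. This is a generic Gaussian-kernel version; the paper's setting is more general, and its kernel and weighting choices are not specified in the abstract.

```python
import numpy as np

def local_linear_predict(x0, X, y, bandwidth):
    """Local linear regression estimate at query point x0 (1D inputs).

    Fits y ~ b0 + b1*(x - x0) by weighted least squares with Gaussian
    kernel weights and returns the intercept b0 as the prediction.
    """
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    w = np.exp(-0.5 * ((X - x0) / bandwidth) ** 2)   # kernel weights
    A = np.column_stack([np.ones_like(X), X - x0])   # local linear basis
    beta = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
    return beta[0]

# local linear regression reproduces a linear target exactly
X = np.linspace(0, 1, 50)
y = 2 * X + 1
pred = local_linear_predict(0.37, X, y, bandwidth=0.1)
```

Exact recovery of linear targets (regardless of bandwidth) is a standard property of the local linear estimator and a useful correctness check.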
Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo
2017-01-01
This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
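The idea of extracting functional principal component scores from eGFR curves observed on a common time grid can be sketched with ordinary PCA via the SVD. This is a simplified stand-in: real FPCA also handles smoothing and irregular or missing observation times, which this sketch does not.

```python
import numpy as np

def fpc_scores(curves, n_components=2):
    """Simple multivariate-PCA stand-in for functional PCA.

    curves : (n_subjects, n_timepoints) array, e.g. eGFR values per visit
    Returns (mean curve, component functions, per-subject scores,
    low-rank reconstruction of the curves).
    """
    mean = curves.mean(axis=0)
    centered = curves - mean
    # SVD: rows of Vt are the principal component functions
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    comps = Vt[:n_components]
    scores = centered @ comps.T    # scores usable for clustering/outliers
    recon = mean + scores @ comps  # usable to impute or predict values
    return mean, comps, scores, recon
```

Ordering subjects by their scores, as the article does, flags curves that deviate most from the dominant modes of variation, and the low-rank reconstruction provides the imputed values.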
Research of GIS-services applicability for solution of spatial analysis tasks.
NASA Astrophysics Data System (ADS)
Terekhin, D. A.; Botygin, I. A.; Sherstneva, A. I.; Sherstnev, V. S.
2017-01-01
Experiments to delineate the areas of applicability of various GIS services to spatial analysis tasks are discussed in this paper. Google Maps, Yandex Maps, and Microsoft SQL Server are used as the spatial analysis services. All services showed comparable speed in analyzing spatial data when carrying out elementary spatial requests (building the buffer zone of a point object), while Microsoft SQL Server was preferable for more complicated spatial requests. For elementary spatial requests, the internet services show higher efficiency due to client-side data handling by JavaScript subprograms. A weak point of the public internet services is the impossibility of handling data on the server side and the limited variety of spatial analysis functions. Microsoft SQL Server offers a large variety of functions needed for spatial analysis on the server side. The authors conclude that when solving practical problems, the route-building and other capabilities of the internet services should be combined with the spatial analysis functions of Microsoft SQL Server.
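An elementary spatial request like the point-buffer test mentioned above can be implemented directly. The sketch below uses the haversine great-circle distance as our own stand-in; the services discussed expose such operations through their APIs (for instance, SQL Server's STBuffer) rather than requiring hand-written code.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def in_buffer(center, radius_m, points):
    """Elementary spatial request: which points fall inside the buffer zone
    (circle of radius_m metres) around a point object."""
    return [p for p in points if haversine_m(*center, *p) <= radius_m]
```

A spatial database evaluates the same predicate server-side with an index, which is why it scales better for complicated requests than client-side filtering.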
Key Microbiota Identification Using Functional Gene Analysis during Pepper (Piper nigrum L.) Peeling
Zhang, Jiachao; Hu, Qisong; Xu, Chuanbiao; Liu, Sixin; Li, Congfa
2016-01-01
Pepper pericarp microbiota plays an important role in the pepper peeling process for the production of white pepper. We collected pepper samples at different peeling time points from Hainan Province, China, and used a metagenomic approach to identify changes in the pericarp microbiota based on functional gene analysis. UniFrac distance-based principal coordinates analysis revealed significant changes in the pericarp microbiota structure during peeling, which were attributed to increases in bacteria from the genera Selenomonas and Prevotella. We identified 28 core operational taxonomic units at each time point, mainly belonging to Selenomonas, Prevotella, Megasphaera, Anaerovibrio, and Clostridium genera. The results were confirmed by quantitative polymerase chain reaction. At the functional level, we observed significant increases in microbial features related to acetyl xylan esterase and pectinesterase for pericarp degradation during peeling. These findings offer a new insight into biodegradation for pepper peeling and will promote the development of the white pepper industry. PMID:27768750
An analysis of neural receptive field plasticity by point process adaptive filtering
Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor
2001-01-01
Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
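The instantaneous steepest-descent idea can be sketched for the simplest case: tracking the center of a one-dimensional Gaussian place field from binned spike counts. The model, parameter values, and learning rate below are our simplifications; the paper's algorithm adapts all field parameters and derives the step from the point-process likelihood more carefully.

```python
import math

def track_place_field(positions, spikes, mu0, sigma=0.1,
                      alpha=math.log(20.0), dt=0.001, eps=0.002):
    """Instantaneous steepest-descent tracking of a place-field center mu.

    Assumed model: conditional intensity
        lambda = exp(alpha - (x - mu)**2 / (2 * sigma**2)).
    The per-bin point-process log likelihood n*log(lambda*dt) - lambda*dt
    has gradient d/dmu = (n - lambda*dt) * (x - mu) / sigma**2, which
    drives the per-bin update of mu.
    """
    mu = mu0
    trace = []
    for x, n in zip(positions, spikes):
        lam = math.exp(alpha - (x - mu) ** 2 / (2 * sigma ** 2))
        mu += eps * (n - lam * dt) * (x - mu) / sigma ** 2
        trace.append(mu)
    return trace
```

Spikes pull the estimate toward the animal's current position, while silent bins push it away in proportion to the predicted rate, so the estimate can follow a migrating field on a fine time scale.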
Four-point functions and the permutation group S4
NASA Astrophysics Data System (ADS)
Eichmann, Gernot; Fischer, Christian S.; Heupel, Walter
2015-09-01
Four-point functions are at the heart of many interesting physical processes. A prime example is the light-by-light scattering amplitude, which plays an important role in the calculation of hadronic contributions to the anomalous magnetic moment of the muon. In the calculation of such quantities one faces the challenge of finding a suitable and well-behaved basis of tensor structures in coordinate and/or momentum space. Provided all (or many) of the external legs represent similar particle content, a powerful tool to construct and organize such bases is the permutation group S4. We introduce an efficient notation for dealing with the irreducible multiplets of S4, and we highlight the merits of this treatment by exemplifying four-point functions with gauge-boson legs such as the four-gluon vertex and the light-by-light scattering amplitude. The multiplet analysis is also useful for isolating the important kinematic regions and the dynamical singularity content of such amplitudes. Our analysis serves as a basis for future efficient calculations of these and similar objects.
NASA Astrophysics Data System (ADS)
Nishimura, Takahiro; Kimura, Hitoshi; Ogura, Yusuke; Tanida, Jun
2018-06-01
This paper presents an experimental assessment and analysis of super-resolution microscopy based on multiple-point spread function fitting of spectrally demultiplexed images using a designed DNA structure as a test target. For the purpose, a DNA structure was designed to have binding sites at a certain interval that is smaller than the diffraction limit. The structure was labeled with several types of quantum dots (QDs) to acquire their spatial information as spectrally encoded images. The obtained images are analyzed with a point spread function multifitting algorithm to determine the QD locations that indicate the binding site positions. The experimental results show that the labeled locations can be observed beyond the diffraction-limited resolution using three-colored fluorescence images that were obtained with a confocal fluorescence microscope. Numerical simulations show that labeling with eight types of QDs enables the positions aligned at 27.2-nm pitches on the DNA structure to be resolved with high accuracy.
Development of a probabilistic analysis methodology for structural reliability estimation
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.
1991-01-01
The novel probabilistic analysis method presented here for assessing structural reliability combines fast convolution with an efficient structural reliability analysis. After identifying the most important point of a limit state, it establishes a quadratic performance function, transforms the quadratic function into a linear one, and applies fast convolution. The method is applicable to problems requiring computer-intensive structural analysis. Five illustrative examples of the method's application are given.
Non-invasive evaluation of stable renal allograft function using point shear-wave elastography.
Kim, Bom Jun; Kim, Chan Kyo; Park, Jung Jae
2018-01-01
To investigate the feasibility of point shear-wave elastography (SWE) in evaluating patients with stable renal allograft function who underwent protocol biopsies. 95 patients with stable renal allograft function who underwent ultrasound-guided biopsies at predefined time points (10 days or 1 year after transplantation) were enrolled. Ultrasound and point SWE examinations were performed immediately before the protocol biopsies. Patients were categorized into two groups: subclinical rejection (SCR) and non-SCR. Tissue elasticity (kPa) on SWE was measured in the cortex of all renal allografts. SCR was pathologically confirmed in 34 patients. Tissue elasticity in the SCR group (31.0 kPa) was significantly greater than in the non-SCR group (24.5 kPa) (p = 0.016), while the resistive index did not differ significantly between the two groups (p = 0.112). Tissue elasticity in renal allografts showed a significant, moderate negative correlation with estimated glomerular filtration rate (correlation coefficient = -0.604, p < 0.001). Tissue elasticity was not an independent factor for SCR prediction on multivariate analysis. As a non-invasive tool, point SWE appears feasible for distinguishing between patients with and without SCR in stable functioning renal allografts. Moreover, it may reflect the functional state of renal allografts. Advances in knowledge: On point SWE, SCR shows greater tissue elasticity than non-SCR.
Arigovindan, Muthuvel; Shaevitz, Joshua; McGowan, John; Sedat, John W; Agard, David A
2010-03-29
We address the problem of computationally representing image formation in 3D widefield fluorescence microscopy with depth-varying spherical aberrations. We first represent 3D depth-dependent point spread functions (PSFs) as a weighted sum of basis functions obtained by principal component analysis (PCA) of experimental data. This representation is then used to derive an approximating structure that compactly expresses the depth-variant response as a sum of a few depth-invariant convolutions pre-multiplied by a set of 1D depth functions, where the convolving functions are the PCA-derived basis functions. The model offers an efficient and convenient trade-off between complexity and accuracy. For a given number of approximating PSFs, the proposed method achieves much better accuracy than the strata-based approximation scheme currently used in the literature. In addition to yielding better accuracy, the proposed method automatically eliminates the noise in the measured PSFs.
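The PCA decomposition at the heart of this representation can be sketched in a few lines (synthetic 1D Gaussian "PSFs" stand in for measured 3D data; the sizes and names are illustrative, not from the paper):

```python
import numpy as np

# A stack of depth-dependent PSFs is decomposed into a few principal-component
# basis PSFs; each PSF is then a depth-weighted sum of those bases.
n_depth, n_pix = 20, 64          # 20 depths, PSFs flattened to 64 pixels
grid = np.linspace(-1, 1, n_pix)

# Synthetic depth-varying PSFs: Gaussians whose width grows with depth
widths = np.linspace(0.1, 0.4, n_depth)
psfs = np.exp(-grid[None, :] ** 2 / (2 * widths[:, None] ** 2))

mean = psfs.mean(axis=0)
U, s, Vt = np.linalg.svd(psfs - mean, full_matrices=False)

k = 3                             # keep a few components
basis = Vt[:k]                    # PCA-derived basis PSFs
coeffs = (psfs - mean) @ basis.T  # 1D depth functions (weights per depth)
approx = mean + coeffs @ basis    # rank-k reconstruction

err = np.linalg.norm(psfs - approx) / np.linalg.norm(psfs)
```

In the paper's setting, the rows of `basis` play the role of the depth-invariant convolution kernels and `coeffs` the 1D depth functions; the truncation to k components is also what suppresses measurement noise.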
High-fidelity modeling and impact footprint prediction for vehicle breakup analysis
NASA Astrophysics Data System (ADS)
Ling, Lisa
For decades, vehicle breakup analysis has been performed for space missions that used nuclear heater or power units in order to assess aerospace nuclear safety for potential launch failures leading to inadvertent atmospheric reentry. Such pre-launch risk analysis is imperative for assessing possible environmental impacts, obtaining launch approval, and planning launch contingencies. To perform a vehicle breakup analysis accurately, the analysis tool should couple a trajectory propagation algorithm with thermal and structural analyses and influences. Since such a software tool was not available commercially or in the public domain, a basic analysis tool was developed by Dr. Angus McRonald prior to this study. This legacy software consisted of low-fidelity modeling and could predict vehicle breakup, but not the surface impact point of the nuclear component. Thus the main thrust of this study was to develop and verify additional dynamics modeling and capabilities for the analysis tool, with the objectives of (1) predicting the impact point and footprint, (2) increasing the fidelity of the vehicle breakup prediction, and (3) reducing the effort and time required to complete an analysis. The new functions developed for predicting the impact point and footprint included three-degree-of-freedom trajectory propagation, generation of non-arbitrary entry conditions, sensitivity analysis, and calculation of the impact footprint. The functions to increase the fidelity of the breakup prediction included a panel code to calculate the hypersonic aerodynamic coefficients for an arbitrarily shaped body and the modeling of local winds. The function to reduce the effort and time required to complete an analysis included the calculation of node failure criteria.
The derivation and development of these new functions are presented in this dissertation, and examples are given to demonstrate the new capabilities and the improvements made, with comparisons between the results obtained from the upgraded analysis tool and the legacy software wherever applicable.
Subdiffraction incoherent optical imaging via spatial-mode demultiplexing: Semiclassical treatment
NASA Astrophysics Data System (ADS)
Tsang, Mankei
2018-02-01
I present a semiclassical analysis of a spatial-mode demultiplexing (SPADE) measurement scheme for far-field incoherent optical imaging under the effects of diffraction and photon shot noise. Building on previous results that assume two point sources or the Gaussian point-spread function, I generalize SPADE for a larger class of point-spread functions and evaluate its errors in estimating the moments of an arbitrary subdiffraction object. Compared with the limits to direct imaging set by the Cramér-Rao bounds, the results show that SPADE can offer far superior accuracy in estimating second- and higher-order moments.
A toolbox to visually explore cerebellar shape changes in cerebellar disease and dysfunction.
Abulnaga, S Mazdak; Yang, Zhen; Carass, Aaron; Kansal, Kalyani; Jedynak, Bruno M; Onyike, Chiadi U; Ying, Sarah H; Prince, Jerry L
2016-02-27
The cerebellum plays an important role in motor control and is also involved in cognitive processes. Cerebellar function is specialized by location, although the exact topographic functional relationship is not fully understood. The spinocerebellar ataxias are a group of neurodegenerative diseases that cause regional atrophy in the cerebellum, yielding distinct motor and cognitive problems. The ability to study region-specific atrophy patterns can provide insight into the problem of relating cerebellar function to location. To study these structural change patterns, we developed a toolbox in MATLAB to provide researchers with a unique way to visually explore the correlation between cerebellar lobule shape changes and function loss, with a rich set of visualization and analysis modules. In this paper, we outline the functions and highlight the utility of the toolbox. The toolbox takes as input landmark shape representations of subjects' cerebellar substructures. A principal component analysis is used for dimension reduction. Following this, a linear discriminant analysis and a regression analysis can be performed to find the discriminant direction associated with a specific disease type, or to generate the regression line of a specific functional measure. The characteristic structural change pattern of a disease type or of a functional score is visualized by sampling points on the discriminant or regression line. The sampled points are used to reconstruct synthetic cerebellar lobule shapes. We show a few case studies highlighting the utility of the toolbox and compare the analysis results with the literature.
A toolbox to visually explore cerebellar shape changes in cerebellar disease and dysfunction
NASA Astrophysics Data System (ADS)
Abulnaga, S. Mazdak; Yang, Zhen; Carass, Aaron; Kansal, Kalyani; Jedynak, Bruno M.; Onyike, Chiadi U.; Ying, Sarah H.; Prince, Jerry L.
2016-03-01
The cerebellum plays an important role in motor control and is also involved in cognitive processes. Cerebellar function is specialized by location, although the exact topographic functional relationship is not fully understood. The spinocerebellar ataxias are a group of neurodegenerative diseases that cause regional atrophy in the cerebellum, yielding distinct motor and cognitive problems. The ability to study region-specific atrophy patterns can provide insight into the problem of relating cerebellar function to location. To study these structural change patterns, we developed a toolbox in MATLAB to provide researchers with a unique way to visually explore the correlation between cerebellar lobule shape changes and function loss, with a rich set of visualization and analysis modules. In this paper, we outline the functions and highlight the utility of the toolbox. The toolbox takes as input landmark shape representations of subjects' cerebellar substructures. A principal component analysis is used for dimension reduction. Following this, a linear discriminant analysis and a regression analysis can be performed to find the discriminant direction associated with a specific disease type, or to generate the regression line of a specific functional measure. The characteristic structural change pattern of a disease type or of a functional score is visualized by sampling points on the discriminant or regression line. The sampled points are used to reconstruct synthetic cerebellar lobule shapes. We show a few case studies highlighting the utility of the toolbox and compare the analysis results with the literature.
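The pipeline described in this abstract (PCA reduction, then a discriminant direction, then synthetic shapes sampled along it) can be illustrated with a small sketch. All data here are synthetic and the two-class Fisher discriminant is a stand-in for the toolbox's LDA step, not its actual code:

```python
import numpy as np

# Landmark vectors -> PCA -> two-class Fisher discriminant -> synthetic
# shapes reconstructed by sampling along the discriminant direction.
rng = np.random.default_rng(1)
n_per_group, n_landmarks = 30, 10

base = rng.normal(size=2 * n_landmarks)        # flattened (x, y) landmarks
healthy = base + 0.05 * rng.normal(size=(n_per_group, 2 * n_landmarks))
atrophy_dir = rng.normal(size=2 * n_landmarks) # synthetic "disease" shape change
atrophy = base + 0.6 * atrophy_dir + 0.05 * rng.normal(size=(n_per_group, 2 * n_landmarks))

X = np.vstack([healthy, atrophy])
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
Z = (X - mean) @ Vt[:5].T                      # reduce to 5 principal components

z0, z1 = Z[:n_per_group], Z[n_per_group:]
Sw = np.cov(z0.T) + np.cov(z1.T)               # within-class scatter
w = np.linalg.solve(Sw, z1.mean(axis=0) - z0.mean(axis=0))
w /= np.linalg.norm(w)                         # Fisher discriminant direction

# Synthetic shapes: sample points along w in PC space, map back to landmarks
ts = np.linspace(-1, 1, 5)
synthetic = mean + np.outer(ts, w) @ Vt[:5]
```

Visualizing the rows of `synthetic` as landmark configurations is the analogue of the toolbox's animation of characteristic shape change along the discriminant (or regression) line.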
Metabolomics is becoming well-established for studying chemical contaminant-induced alterations to normal biological function. For example, the literature contains a wealth of laboratory-based studies involving analysis of samples from organisms exposed to individual chemical tox...
Metabolomics has become well-established for studying chemical contaminant-induced alterations to normal biological function. For example, the literature contains a wealth of laboratory-based studies involving analysis of samples from organisms exposed to individual chemical toxi...
Analysis of a New Variational Model to Restore Point-Like and Curve-Like Singularities in Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aubert, Gilles, E-mail: gaubert@unice.fr; Blanc-Feraud, Laure, E-mail: Laure.Blanc-Feraud@inria.fr; Graziani, Daniele, E-mail: Daniele.Graziani@inria.fr
2013-02-15
The paper is concerned with the analysis of a new variational model to restore point-like and curve-like singularities in biological images. To this end we investigate the variational properties of a suitable energy which governs these pathologies. Finally, in order to realize numerical experiments, we minimize, in the discrete setting, a regularized version of this functional by a fast gradient descent scheme.
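A generic version of such a discrete gradient descent minimization can be sketched with a smooth quadratic regularizer standing in for the paper's functional (the energy, parameters, and 1D signal are illustrative assumptions, not the paper's model):

```python
import numpy as np

# Gradient descent on E(u) = 0.5*||u - f||^2 + 0.5*lam*||D u||^2,
# where D is a 1D forward difference; f is a noisy observation.
rng = np.random.default_rng(2)
n, lam, step = 100, 5.0, 0.05

clean = np.sin(np.linspace(0, 2 * np.pi, n))
f = clean + 0.3 * rng.normal(size=n)       # noisy observation

def grad_E(u):
    du = np.diff(u)
    lap = np.zeros_like(u)                 # discrete Laplacian of u
    lap[:-1] += du
    lap[1:] -= du
    return (u - f) - lam * lap             # (u - f) + lam * D^T D u

u = f.copy()
for _ in range(500):
    u = u - step * grad_E(u)

err_noisy = np.linalg.norm(f - clean)
err_denoised = np.linalg.norm(u - clean)   # minimizer is closer to the clean signal
```

The step size is chosen below 2 divided by the largest eigenvalue of I + lam·DᵀD, so the iteration is a contraction; the paper's scheme differs in the energy being minimized, not in this basic descent structure.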
Successful ageing: A study of the literature using citation network analysis.
Kusumastuti, Sasmita; Derks, Marloes G M; Tellier, Siri; Di Nucci, Ezio; Lund, Rikke; Mortensen, Erik Lykke; Westendorp, Rudi G J
2016-11-01
Ageing is accompanied by an increased risk of disease and a loss of functioning in several bodily and mental domains, and some argue that maintaining health and functioning is essential for a successful old age. Paradoxically, studies have shown that overall wellbeing follows a curvilinear pattern, reaching its lowest point at middle age and increasing thereafter up to very old age. To shed further light on this paradox, we reviewed the existing literature on how scholars define successful ageing and how they weigh the contribution of health and functioning to define success. We performed a novel, hypothesis-free, quantitative analysis of citation networks in the literature on successful ageing in the Web of Science Core Collection database using the CitNetExplorer software. Outcomes were visualized using timeline-based citation patterns. The clusters and sub-clusters of citation networks identified were starting points for in-depth qualitative analysis. Within the literature from 1902 through 2015, two distinct citation networks were identified. The first cluster, with 1146 publications and 3946 citation links, focused on successful ageing from the perspective of older persons themselves. Analysis of its various sub-clusters emphasized the importance of coping strategies, psycho-social engagement, and cultural differences. The second cluster, with 609 publications and 1682 citation links, viewed successful ageing in terms of objective measurements determined by researchers. Subsequent sub-clustering analysis pointed to different domains of functioning and various ways of assessment. In the current literature, two mutually exclusive concepts of successful ageing circulate, depending on whether the individual or an outside observer judges the situation. These different points of view help to explain the disability paradox, as successful ageing lies in the eyes of the beholder. Copyright © 2016 The Authors.
Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lewis, Debra
2013-05-01
Relative equilibria of Lagrangian and Hamiltonian systems with symmetry are critical points of appropriate scalar functions parametrized by the Lie algebra (or its dual) of the symmetry group. Setting aside the structures - symplectic, Poisson, or variational - generating dynamical systems from such functions highlights the common features of their construction and analysis, and supports the construction of analogous functions in non-Hamiltonian settings. If the symmetry group is nonabelian, the functions are invariant only with respect to the isotropy subgroup of the given parameter value. Replacing the parametrized family of functions with a single function on the product manifold and extending the action using the (co)adjoint action on the algebra or its dual yields a fully invariant function. An invariant map can be used to reverse the usual perspective: rather than selecting a parametrized family of functions and finding their critical points, conditions under which functions will be critical on specific orbits, typically distinguished by isotropy class, can be derived. This strategy is illustrated using several well-known mechanical systems - the Lagrange top, the double spherical pendulum, the free rigid body, and the Riemann ellipsoids - and generalizations of these systems.
Nie, Xiaobing; Cao, Jinde
2011-11-01
In this paper, second-order interactions are introduced into competitive neural networks (NNs) and multistability is discussed for second-order competitive NNs (SOCNNs) with nondecreasing saturated activation functions. Firstly, based on decomposition of the state space, the Cauchy convergence principle, and inequality techniques, some sufficient conditions ensuring the local exponential stability of 2^N equilibrium points are derived. Secondly, some conditions are obtained for ascertaining equilibrium points to be locally exponentially stable and located in any designated region. Thirdly, the theory is extended to more general saturated activation functions with 2r corner points, and a sufficient criterion is given under which the SOCNNs can have (r+1)^N locally exponentially stable equilibrium points. Even if there are no second-order interactions, the obtained results are less restrictive than those in some recent works. Finally, three examples with their simulations are presented to verify the theoretical analysis.
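The origin of the 2^N count can be seen in a toy one-neuron example (an assumed illustrative form, not the paper's SOCNN model): with a saturated activation, each neuron contributes two stable saturation levels, so N neurons give 2^N stable combinations.

```python
import numpy as np

# dx/dt = -x + a*f(x) with saturated activation f(x) = clip(x, -1, 1) and
# a > 1 has two locally stable equilibria (x = +a and x = -a) and an
# unstable equilibrium at x = 0; forward-Euler simulation from two sides.
a, dt = 2.0, 0.05

def f(x):
    return np.clip(x, -1.0, 1.0)

def simulate(x0, steps=400):
    x = x0
    for _ in range(steps):
        x += dt * (-x + a * f(x))
    return x

x_pos = simulate(0.1)    # converges to +a
x_neg = simulate(-0.1)   # converges to -a
```

For N such neurons (uncoupled, or weakly coupled as in the cited stability conditions), every sign pattern of the initial state selects one of 2^N stable equilibria; activations with 2r corner points enlarge the count to (r+1)^N in the same way.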
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho
2007-03-01
The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies and semi-quantitative assessment of lung perfusion. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis from CT data sets obtained in full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets obtained in full In. and Ex. in normal subjects were used in this study. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as reference points. To evaluate the optimal expansion point, nonlinear optimization without constraints was employed. The objective function is the sum of distances from the candidate point x to the lines connecting corresponding points between In. and Ex. Using nonlinear optimization, the optimal point was evaluated and compared among the reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining lung expansion. This lung motion analysis, based on vector analysis and nonlinear optimization, shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model to explain lung expansion during breathing.
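For the sum-of-squared-distances variant of this objective there is a closed-form solution, which makes a compact sketch (the formulation and data are illustrative assumptions; the paper uses general unconstrained nonlinear optimization):

```python
import numpy as np

# Find the point x minimizing the sum of squared distances to a set of 3D
# lines (each line joining a corresponding landmark at inspiration and
# expiration). The minimizer solves the normal equations
#   sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i,
# where p_i is a point on line i and d_i its unit direction.
def optimal_expansion_point(points, directions):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the line's normal plane
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Usage: lines that all pass through a common center recover it exactly,
# which is the "balloon model" situation of radial expansion about one point.
rng = np.random.default_rng(3)
center = np.array([1.0, -2.0, 0.5])
dirs = rng.normal(size=(20, 3))
pts = center + rng.normal(size=(20, 1)) * (dirs / np.linalg.norm(dirs, axis=1, keepdims=True))
x_opt = optimal_expansion_point(pts, dirs)
```

Comparing the residual of this fit for candidate centers (apex, hilar point, COI) mirrors the paper's comparison of reference points.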
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muralidhar, K Raja; Komanduri, K
2014-06-01
Purpose: The objective of this work is to present a mechanism for calculating inflection points on profiles at various depths and field sizes, together with a study of the percentage of dose at the inflection points for various field sizes and depths for 6X FFF and 10X FFF energy profiles. Methods: Percentage of dose was plotted against inflection point position. Also, using a polynomial function, the authors formulated equations for calculating the exact inflection point on the profiles for 6X FFF and 10X FFF energies for all field sizes and at various depths. Results: In a flattening-filter-free radiation beam, unlike in flattened beams, the dose at the inflection point of the profile decreases as field size increases for 10X FFF, whereas for 6X FFF the dose at the inflection point initially increases up to 10x10 cm2 and then decreases. The polynomial function was fitted for both FFF beams for all field sizes and depths. For small fields of less than 5x5 cm2, the inflection point and the FWHM are almost the same, and hence analysis can be done just as in FF beams. A change of 10% in dose can change the field width by 1 mm. Conclusion: In the present study, deriving the inflection point from a polynomial equation proves a precise and accurate way to obtain the inflection-point dose on any FFF beam profile at any depth, to within 1%. Corrections can be made in future studies based on data from multiple machines. A brief study was also performed to evaluate inflection point positions with respect to dose in FFF energies for various field sizes and depths for 6X FFF and 10X FFF energy profiles.
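The polynomial approach can be sketched as follows (the profile here is a synthetic sigmoid penumbra, not fitted machine data): fit a polynomial to the penumbra region and take the inflection point where the second derivative vanishes.

```python
import numpy as np

# Fit a 5th-order polynomial to a penumbra-like profile and solve for the
# root of its second derivative (the inflection point); the dose there is
# read off the fitted polynomial.
x = np.linspace(-2.0, 2.0, 81)
profile = 1.0 / (1.0 + np.exp(-3.0 * x))     # sigmoid penumbra, inflection at x = 0

coeffs = np.polyfit(x, profile, deg=5)       # polynomial fit
second = np.polyder(coeffs, 2)               # second-derivative coefficients
roots = np.roots(second)
real = roots[np.isreal(roots)].real
inflection = real[np.argmin(np.abs(real))]   # root nearest the field centre
dose_at_inflection = np.polyval(coeffs, inflection)
```

For this symmetric sigmoid the inflection lies at the centre and the dose there is 50% of the plateau, which is the sense in which the inflection point generalizes the FWHM-based field edge used for flattened beams.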
Functional analysis of tight junction organization.
DiBona, D R
1985-01-01
The functional basis of tight junction design has been examined from the point of view that this rate-limiting barrier to paracellular transport is a multicompartment system. Review of the osmotic sensitivity of these structures points to the need for this sort of analysis for meaningful correlation of structure and function under a range of conditions. A similar conclusion is drawn with respect to results from voltage-clamping protocols where reversal of spontaneous transmural potential difference elicits parallel changes in both structure and function in much the same way as does reversal of naturally occurring osmotic gradients. In each case, it becomes necessary to regard the junction as a functionally polarized structure to account for observations of its rectifying properties. Lastly, the details of experimentally-induced junction deformation are examined in light of current theories of its organization; arguments are presented in favor of the view that the primary components of intramembranous organization (as viewed with freeze-fracture techniques) are lipidic rather than proteinaceous.
Nusinersen versus Sham Control in Later-Onset Spinal Muscular Atrophy.
Mercuri, Eugenio; Darras, Basil T; Chiriboga, Claudia A; Day, John W; Campbell, Craig; Connolly, Anne M; Iannaccone, Susan T; Kirschner, Janbernd; Kuntz, Nancy L; Saito, Kayoko; Shieh, Perry B; Tulinius, Már; Mazzone, Elena S; Montes, Jacqueline; Bishop, Kathie M; Yang, Qingqing; Foster, Richard; Gheuens, Sarah; Bennett, C Frank; Farwell, Wildon; Schneider, Eugene; De Vivo, Darryl C; Finkel, Richard S
2018-02-15
Nusinersen is an antisense oligonucleotide drug that modulates pre-messenger RNA splicing of the survival motor neuron 2 (SMN2) gene. It has been developed for the treatment of spinal muscular atrophy (SMA). We conducted a multicenter, double-blind, sham-controlled, phase 3 trial of nusinersen in 126 children with SMA who had symptom onset after 6 months of age. The children were randomly assigned, in a 2:1 ratio, to undergo intrathecal administration of nusinersen at a dose of 12 mg (nusinersen group) or a sham procedure (control group) on days 1, 29, 85, and 274. The primary end point was the least-squares mean change from baseline in the Hammersmith Functional Motor Scale-Expanded (HFMSE) score at 15 months of treatment; HFMSE scores range from 0 to 66, with higher scores indicating better motor function. Secondary end points included the percentage of children with a clinically meaningful increase from baseline in the HFMSE score (≥3 points), an outcome that indicates improvement in at least two motor skills. In the prespecified interim analysis, there was a least-squares mean increase from baseline to month 15 in the HFMSE score in the nusinersen group (by 4.0 points) and a least-squares mean decrease in the control group (by -1.9 points), with a significant between-group difference favoring nusinersen (least-squares mean difference in change, 5.9 points; 95% confidence interval, 3.7 to 8.1; P<0.001). This result prompted early termination of the trial. Results of the final analysis were consistent with results of the interim analysis. In the final analysis, 57% of the children in the nusinersen group as compared with 26% in the control group had an increase from baseline to month 15 in the HFMSE score of at least 3 points (P<0.001), and the overall incidence of adverse events was similar in the nusinersen group and the control group (93% and 100%, respectively).
Among children with later-onset SMA, those who received nusinersen had significant and clinically meaningful improvement in motor function as compared with those in the control group. (Funded by Biogen and Ionis Pharmaceuticals; CHERISH ClinicalTrials.gov number, NCT02292537 .).
A Scalable Nonuniform Pointer Analysis for Embedded Programs
NASA Technical Reports Server (NTRS)
Venet, Arnaud
2004-01-01
In this paper we present a scalable pointer analysis for embedded applications that is able to distinguish between instances of recursively defined data structures and elements of arrays. The main contribution is an efficient yet precise algorithm that can handle multithreaded programs. We first perform an inexpensive flow-sensitive analysis of each function in the program that generates semantic equations describing the effect of the function on the memory graph. These equations bear numerical constraints that describe nonuniform points-to relationships. We then iteratively solve these equations in order to obtain an abstract storage graph that describes the shape of data structures at every point of the program for all possible thread interleavings. We provide experimental evidence that this approach is tractable and precise for real-size embedded applications.
Approximate analytical solutions in the analysis of thin elastic plates
NASA Astrophysics Data System (ADS)
Goloskokov, Dmitriy P.; Matrosov, Alexander V.
2018-05-01
Two approaches to the construction of approximate analytical solutions for bending of a rectangular thin plate are presented: the superposition method based on the method of initial functions (MIF) and one built using the Green's function in the form of orthogonal series. Comparison of the two approaches is carried out by analyzing a square plate clamped along its contour. Behavior of the moment and the shear force in the neighborhood of the corner points is discussed. It is shown that both solutions give identical results at all points of the plate except in the neighborhoods of the corner points, where there are differences in the values of bending moments and generalized shearing forces.
Past Taurine Intake Has a Positive Effect on Present Cognitive Function in the Elderly.
Bae, Mi Ae; Gao, Ranran; Kim, Sung Hoon; Chang, Kyung Ja
2017-01-01
This study investigated the associations between dietary history of past taurine intake and cognitive function in the elderly. The subjects were 40 elderly persons with dementia (14 men, 26 women) and 37 normal elderly persons (5 men, 32 women). Data were collected using questionnaires administered through investigator-based interviews with the elderly and their family caregivers. We examined general characteristics, anthropometric data, cognitive function, and a taurine index. Cognitive function was measured using the MMSE-DS, on which higher scores mean better cognitive function. As a dietary history of past taurine intake, the taurine index was evaluated by scoring the intake frequency of 41 kinds of taurine-containing foods. Part correlation analysis (corrected for sex, age, and years of schooling) was used to analyze associations between the taurine index and cognitive function. All data were analyzed with the SPSS 20.0 program for Windows. The age, height, weight, and BMI of the elderly with dementia showed no statistically significant differences from the normal elderly. The elderly with dementia had a significantly longer period of school education (7.4 years) than the normal elderly (4.8 years) (p < 0.01). Nevertheless, the average total cognitive function score (MMSE-DS) of the elderly with dementia (18.1 points) was significantly lower than that of the normal elderly (21.7 points) (p < 0.05). The average taurine index of the elderly with dementia (104.7 points) was also significantly lower than that of the normal elderly (123.7 points) (p < 0.01). There were positive correlations between total taurine index and total cognitive function score in all elderly subjects (p < 0.05). In particular, higher taurine indices were associated with significantly higher scores on cognitive functions such as 'time orientation' and 'judgement and abstract thinking' (p < 0.01).
In conclusion, these results suggest that past taurine intake may have a positive effect on present cognitive function in the elderly.
An Overview of Discourse Analysis and Its Usefulness in TESOL.
ERIC Educational Resources Information Center
Milne, Geraldine Veronica
This paper provides an overview of discourse analysis from a linguistic point of view, discussing why it is relevant to Teaching English to Speakers of Other Languages (TESOL). It focuses on the following: discourse and discourse analysis; discourse analysis and TESOL; approaches to discourse analysis; systemic functional linguistics; theme and…
Cosmological constraints from the convergence 1-point probability distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, Kenneth; Blazek, Jonathan; Honscheid, Klaus
2017-06-29
Here, we examine the cosmological information available from the 1-point probability density function (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the Ωm–σ8 plane from the convergence PDF with 188 arcmin² pixels compared to the cosmic shear power spectrum with an equivalent number of modes (ℓ < 886). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of 2–3, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.
Cosmological constraints from the convergence 1-point probability distribution
NASA Astrophysics Data System (ADS)
Patton, Kenneth; Blazek, Jonathan; Honscheid, Klaus; Huff, Eric; Melchior, Peter; Ross, Ashley J.; Suchyta, Eric
2017-11-01
We examine the cosmological information available from the 1-point probability density function (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the Ωm-σ8 plane from the convergence PDF with 188 arcmin² pixels compared to the cosmic shear power spectrum with an equivalent number of modes (ℓ < 886). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of 2-3, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.
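The observable itself is simple to compute: the 1-point PDF is just the normalized histogram of pixel values of the convergence map. A minimal sketch (a lognormal-like mock field stands in for simulation output; the field parameters and binning are illustrative):

```python
import numpy as np

# Estimate the 1-point PDF of a pixelized convergence map by histogramming
# its pixel values; a zero-mean lognormal-like mock stands in for kappa.
rng = np.random.default_rng(4)
kappa = np.exp(0.3 * rng.normal(size=(256, 256))) - 1.0
kappa -= kappa.mean()                          # convergence has zero mean

bins = np.linspace(-1.0, 2.0, 61)
counts, edges = np.histogram(kappa.ravel(), bins=bins, density=True)
width = np.diff(edges)

total = np.sum(counts * width)                 # density integrates to 1 over the range
```

In a Fisher analysis, the binned `counts` vector (averaged over realizations, with a covariance estimated from the simulation ensemble) plays the role of the data vector whose derivatives with respect to cosmological parameters are taken by finite differences.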
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely the extremum points of the metamodels and the minimum points of a density function; repeating this procedure yields increasingly accurate metamodels. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
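The underlying radial basis function metamodel can be sketched in a few lines (the basic Gaussian-RBF interpolation step only; the kernel width, test function, and sample layout are illustrative, and the paper's sequential point-addition strategy is summarized in the final comment):

```python
import numpy as np

# Gaussian RBF interpolation: solve Phi w = y on the samples, then predict
# new points as a weighted sum of the same radial basis functions.
def rbf_fit(X, y, eps=5.0):
    r2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.linalg.solve(np.exp(-eps * r2), y)

def rbf_predict(X_train, w, X_new, eps=5.0):
    r2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-eps * r2) @ w

g = np.linspace(-1.0, 1.0, 5)
X = np.array([(a, b) for a in g for b in g])     # 25 sample points
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])    # expensive-simulation stand-in

w = rbf_fit(X, y)
y_hat = rbf_predict(X, w, X)                     # interpolates the training data
# A sequential scheme would now locate extrema of the metamodel (and minima
# of a sampling-density function), evaluate the true model there, and refit.
```

Because the Gaussian kernel matrix is positive definite for distinct points, the metamodel reproduces its training data exactly, which is the property the sequential scheme exploits when it refits after each added point.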
NASA Technical Reports Server (NTRS)
Allen, G.
1972-01-01
The use of the theta-operator method and generalized hypergeometric functions in obtaining solutions to nth-order linear ordinary differential equations is explained. For completeness, the analysis of the differential equation to determine whether the point of expansion is an ordinary point or a regular singular point is included. The superiority of these two methods over the standard method is demonstrated by using all three methods to work out several examples. Also included is a compendium of formulae and properties of the theta operator and generalized hypergeometric functions, complete enough to make the report self-contained.
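For reference, the standard definitions underlying the method (quoted from the general literature, not from the report itself): the theta operator, its action on powers, and the generalized hypergeometric equation it produces.

```latex
\theta \equiv x\,\frac{d}{dx}, \qquad \theta\,x^{n} = n\,x^{n},
\qquad
\Bigl[\theta\prod_{j=1}^{q}\bigl(\theta+b_{j}-1\bigr)
  \;-\; x\prod_{i=1}^{p}\bigl(\theta+a_{i}\bigr)\Bigr]\,
{}_{p}F_{q}(a_{1},\dots,a_{p};\,b_{1},\dots,b_{q};\,x) = 0 .
```

Because theta acts diagonally on powers of x, rewriting an ODE in theta form turns the recurrence for series coefficients into simple products, which is what makes the method convenient at ordinary points and regular singular points alike.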
Safe driving and executive functions in healthy middle-aged drivers.
León-Domínguez, Umberto; Solís-Marcos, Ignacio; Barrio-Álvarez, Elena; Barroso Y Martín, Juan Manuel; León-Carrión, José
2017-01-01
The introduction of the point-system driver's license in several European countries could offer a valid framework for evaluating driving skills. This is the first study to use this framework to assess the functional integrity of executive functions in middle-aged drivers with full points, partial points, or no points on their driver's license (N = 270). The purpose of this study is to find differences in executive functions that could be determinants of safe driving. Cognitive tests were used to assess attention processes, processing speed, planning, cognitive flexibility, and inhibitory control. Analyses of covariance (ANCOVAs) were used for group comparisons while adjusting for education level. The Bonferroni method was used to correct for multiple comparisons. Overall, drivers with full points on their license showed better scores than the other two groups. In particular, significant differences were found in reaction times on Simple and Conditioned Attention tasks (both p-values < 0.001) and in the number of type-III errors on the Tower of Hanoi task (p = 0.026). Differences in reaction time on attention tasks could serve as neuropsychological markers for safe driving. Further analysis should be conducted in order to determine the behavioral impact of impaired executive functioning on driving ability.
Aircraft navigation and surveillance analysis for a spherical earth
DOT National Transportation Integrated Search
2014-10-01
This memorandum addresses a fundamental function in surveillance and navigation analysis : quantifying the geometry of two or more locations relative to each other and to a spherical earth. Here, geometry refers to: (a) points (idealized lo...
NASA Astrophysics Data System (ADS)
Leherte, L.; Allen, F. H.; Vercauteren, D. P.
1995-04-01
A computational method is described for mapping the volume within the DNA double helix accessible to a groove-binding antibiotic, netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to the local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions, while at the atomic level the ellipsoid method gives results which are in close agreement with those from the conventional, spherical, van der Waals approach.
NASA Astrophysics Data System (ADS)
Leherte, Laurence; Allen, Frank H.
1994-06-01
A computational method is described for mapping the volume within the DNA double helix accessible to the groove-binding antibiotic netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions. At the atomic level, the ellipsoid method gives results which are in close agreement with those from the conventional spherical van der Waals approach.
Detecting Brain State Changes via Fiber-Centered Functional Connectivity Analysis
Li, Xiang; Lim, Chulwoo; Li, Kaiming; Guo, Lei; Liu, Tianming
2013-01-01
Diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI) have been widely used to study structural and functional brain connectivity in recent years. A common assumption in many previous functional brain connectivity studies is temporal stationarity. However, accumulating evidence in the literature suggests that functional brain connectivity undergoes dynamic changes on multiple time scales. In this paper, a novel and intuitive approach is proposed to model and detect dynamic changes of functional brain states based on multimodal fMRI/DTI data. The basic idea is that the functional connectivity patterns of all fiber-connected cortical voxels are concatenated into a descriptive functional feature vector representing the brain's state, and the temporal change points of brain states are determined by detecting abrupt changes of the functional vector patterns via a sliding-window approach. Our extensive experimental results have shown that meaningful brain state change points can be detected in task-based fMRI/DTI, resting-state fMRI/DTI, and natural-stimulus fMRI/DTI data sets. In particular, the detected change points of functional brain states in task-based fMRI corresponded well to the external stimulus paradigm administered to the participating subjects, partially validating the proposed brain state change detection approach. The work in this paper provides a novel perspective on the dynamic behaviors of functional brain connectivity and offers a starting point for future elucidation of the complex patterns of functional brain interactions and dynamics. PMID:22941508
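The sliding-window idea in this abstract can be sketched generically: compare summary statistics of the feature vectors in adjacent windows and flag time points where they diverge sharply. This is a minimal stand-in, not the authors' exact pipeline; the window size and threshold are illustrative assumptions.

```python
import math

def change_points(features, win=10, thresh=1.0):
    """Flag time points where the mean feature vector of the preceding
    window differs sharply from that of the following window
    (a generic sliding-window sketch, not the paper's full method)."""
    flagged = []
    for t in range(win, len(features) - win):
        before = [sum(c) / win for c in zip(*features[t - win:t])]
        after = [sum(c) / win for c in zip(*features[t:t + win])]
        if math.dist(before, after) > thresh:
            flagged.append(t)
    return flagged

# Synthetic "state switch": 2-D feature vectors jump at t = 50
feats = [[0.0, 0.0]] * 50 + [[3.0, 4.0]] * 50
hits = change_points(feats, win=10, thresh=2.5)
assert 50 in hits
```

In practice the feature vectors would be the concatenated fiber-endpoint connectivity patterns, and the dissimilarity measure and threshold would need calibration against a null model.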
Analysis of Screen Channel LAD Bubble Point Tests in Liquid Oxygen at Elevated Temperature
NASA Technical Reports Server (NTRS)
Hartwig, Jason; McQuillen, John
2011-01-01
The purpose of this paper is to examine the key parameters that affect the bubble point pressure for screen channel Liquid Acquisition Devices in cryogenic liquid oxygen at elevated pressures and temperatures. An in-depth analysis of the effect of varying temperature, pressure, and pressurization gas on bubble point is presented. Testing of 200 x 1400 and 325 x 2300 Dutch Twill screen samples was conducted in the Cryogenics Components Lab 7 facility at the NASA Glenn Research Center in Cleveland, Ohio. Test conditions ranged from 92 to 130 K and 0.138-1.79 MPa. Bubble point is shown to be a strong function of temperature with a secondary dependence on pressure. The pressure dependence is believed to be a function of the amount of evaporation and condensation occurring at the screen. Good agreement exists between data and theory for normally saturated liquid, but the model generally underpredicts the bubble point in subcooled liquid. Better correlation with the data is obtained by using the liquid temperature at the screen, as opposed to the bulk liquid temperature, to determine the surface tension of the fluid.
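The bubble point relation commonly used for screen LADs can be sketched as ΔP = 4γ cos θ / Dₚ, where γ is the surface tension, θ the contact angle, and Dₚ the effective pore diameter. The numbers below are illustrative orders of magnitude, not the paper's test data; the abstract's point is that γ should be evaluated at the screen temperature, not the bulk temperature.

```python
import math

def bubble_point_pressure(surface_tension, pore_diameter, contact_angle_deg=0.0):
    """Classic bubble point relation Delta_P = 4 * gamma * cos(theta) / D_p.
    Inputs in SI units; returns pressure in Pa. Illustrative values only."""
    return (4.0 * surface_tension
            * math.cos(math.radians(contact_angle_deg)) / pore_diameter)

# Illustrative: gamma ~ 0.013 N/m (liquid oxygen, order of magnitude),
# effective pore diameter ~ 10 micrometers for a fine Dutch twill screen
dp = bubble_point_pressure(0.013, 10e-6)   # Pa
assert abs(dp - 5200.0) < 1e-3
```

Since γ drops steeply with temperature near saturation, evaluating it at the (warmer) screen temperature lowers the predicted bubble point, which is the correction the paper finds improves agreement for subcooled liquid.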
Indications for a critical point in the phase diagram for hot and dense nuclear matter
NASA Astrophysics Data System (ADS)
Lacey, Roy A.
2016-12-01
Two-pion interferometry measurements are studied for a broad range of collision centralities in Au+Au (√s_NN = 7.7-200 GeV) and Pb+Pb (√s_NN = 2.76 TeV) collisions. They indicate non-monotonic excitation functions for the Gaussian emission source radii difference (R_out - R_side), suggestive of reaction trajectories which spend a fair amount of time near a soft point in the equation of state (EOS) that coincides with the critical end point (CEP). A Finite-Size Scaling (FSS) analysis of these excitation functions provides further validation tests for the CEP. It also indicates a second order phase transition at the CEP, and the values T_cep ∼ 165 MeV and μB_cep ∼ 95 MeV for its location in the (T, μB)-plane of the phase diagram. The static critical exponents (ν ≈ 0.66 and γ ≈ 1.2) extracted via the same FSS analysis place this CEP in the 3D Ising model (static) universality class. A Dynamic Finite-Size Scaling analysis of the excitation functions gives the estimate z ∼ 0.87 for the dynamic critical exponent, suggesting that the associated critical expansion dynamics is dominated by the hydrodynamic sound mode.
Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.
Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo
2018-04-01
In this paper, a novel adaptive dynamic programming (ADP) algorithm, called "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee the upper and lower iterative value functions to converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, it is emphasized that both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, where the upper and lower performance index functions are proved to be not equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
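The upper/lower value distinction in this abstract can be illustrated on a static matrix game (a toy analogue, far simpler than the dynamic games the paper treats): the lower value is the maximizer's security level max_i min_j A_ij, the upper value is min_j max_i A_ij, and a pure-strategy saddle point exists exactly when the two coincide.

```python
def lower_value(A):
    """Security level of the maximizer: max over rows of the row minimum."""
    return max(min(row) for row in A)

def upper_value(A):
    """Security level of the minimizer: min over columns of the column maximum."""
    return min(max(col) for col in zip(*A))

# When the two values coincide, a pure-strategy saddle point exists
A = [[2, 1], [4, 3]]
assert lower_value(A) == upper_value(A) == 3

# When they differ, no pure saddle point exists; the paper's upper and
# lower iterations converge to distinct performance index functions
B = [[1, 0], [-1, 2]]
assert lower_value(B) == 0 and upper_value(B) == 1
```

The ADP algorithm in the paper plays out the same dichotomy over iterative value functions of a dynamic system rather than over a single payoff matrix.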
Kierkegaard, Signe; Langeskov-Christensen, Martin; Lund, Bent; Naal, Florian D; Mechlenburg, Inger; Dalgas, Ulrik; Casartelli, Nicola C
2017-04-01
To investigate pain, activities of daily living (ADL) function, sport function, quality of life and satisfaction at different time points after hip arthroscopy in patients with femoroacetabular impingement (FAI). Systematic review with meta-analysis. Weighted mean differences between preoperative and postoperative outcomes were calculated and used for meta-analysis. EMBASE, MEDLINE, SportsDiscus, CINAHL, Cochrane Library, and PEDro. Studies that evaluated hip pain, ADL function, sport function and quality of life before and after hip arthroscopy and postoperative satisfaction in patients with symptomatic FAI. Twenty-six studies (22 case series, 3 cohort studies, 1 randomised controlled trial (RCT)) were included in the systematic review and 19 in the meta-analysis. Clinically relevant pain and ADL function improvements were first reported between 3 and 6 months, and sport function improvements between 6 months and 1 year after surgery. It is not clear when quality of life improvements were first achieved. On average, residual mild pain and ADL and sport function scores lower than their healthy counterparts were reported by patients following surgery. Postoperative patient satisfaction ranged from 68% to 100%. On average, patients reported earlier pain and ADL function improvements, and slower sport function improvements after hip arthroscopy for FAI. However, average scores from patients indicate residual mild hip pain and/or hip function lower than their healthy counterparts after surgery. Owing to the current low level of evidence, future RCTs and cohort studies should investigate the effectiveness of hip arthroscopy in patients with FAI. CRD42015019649.
The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis
NASA Astrophysics Data System (ADS)
Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.
2011-05-01
In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to the large-deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the Natural Neighbour concept to enforce the nodal connectivity and to create a node-dependent background mesh, used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, the NNRPIM imposes no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using Radial Point Interpolators, with some differences that modify the method's performance. No polynomial basis is required in the construction of the NNRPIM interpolation functions, and the Radial Basis Function (RBF) used is the multiquadric RBF. The NNRPIM interpolation functions possess the Kronecker delta property, which simplifies the imposition of the natural and essential boundary conditions. One of the aims of this work is to validate the NNRPIM in large-deformation elasto-plastic analysis; thus the non-linear solution algorithm used is the Newton-Raphson initial stiffness method, and an efficient forward-Euler procedure is used to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicate that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.
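The "initial stiffness" (modified Newton-Raphson) strategy named above assembles the tangent stiffness once and reuses it in every iteration, trading quadratic for linear convergence but avoiding repeated stiffness factorizations. A scalar sketch of the idea, with a toy residual that is not from the paper:

```python
def modified_newton(residual, tangent, u0, iters=200):
    """Initial-stiffness (modified Newton-Raphson) iteration: the tangent
    K0 is evaluated once at u0 and reused in every correction step.
    Scalar illustration only, not the NNRPIM implementation."""
    K0 = tangent(u0)
    u = u0
    for _ in range(iters):
        u = u - residual(u) / K0
    return u

# Toy residual f(u) = u^3 - 8 with root u = 2
f = lambda u: u**3 - 8.0
df = lambda u: 3.0 * u**2
u = modified_newton(f, df, u0=1.5)
assert abs(u - 2.0) < 1e-6
```

Convergence is linear (here the error shrinks by a roughly constant factor per step), which is why more iterations are needed than with a full Newton-Raphson update, but each iteration is much cheaper in a large finite-element-style system.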
Mahableshwarkar, Atul R; Zajecka, John; Jacobson, William; Chen, Yinzhong; Keefe, Richard SE
2015-01-01
This multicenter, randomized, double-blind, placebo-controlled, active-referenced (duloxetine 60 mg), parallel-group study evaluated the short-term efficacy and safety of vortioxetine (10–20 mg) on cognitive function in adults (aged 18–65 years) diagnosed with major depressive disorder (MDD) who self-reported cognitive dysfunction. Efficacy was evaluated using ANCOVA for the change from baseline to week 8 in the digit symbol substitution test (DSST)–number of correct symbols as the prespecified primary end point. The patient-reported perceived deficits questionnaire (PDQ) and physician-assessed clinical global impression (CGI) were analyzed in a prespecified hierarchical testing sequence as key secondary end points. Additional predefined end points included the objective performance-based University of San Diego performance-based skills assessment (UPSA) (ANCOVA) to measure functionality, MADRS (MMRM) to assess efficacy in depression, and a prespecified multiple regression analysis (path analysis) to calculate direct vs indirect effects of vortioxetine on cognitive function. Safety and tolerability were assessed at all visits. Vortioxetine was statistically superior to placebo on the DSST (P<0.05), PDQ (P<0.01), CGI-I (P<0.001), MADRS (P<0.05), and UPSA (P<0.001). Path analysis indicated that vortioxetine's cognitive benefit was primarily a direct treatment effect rather than due to alleviation of depressive symptoms. Duloxetine was not significantly different from placebo on the DSST or UPSA, but was superior to placebo on the PDQ, CGI-I, and MADRS. Common adverse events (incidence ⩾5%) for vortioxetine were nausea, headache, and diarrhea. In this study of MDD adults who self-reported cognitive dysfunction, vortioxetine significantly improved cognitive function, depression, and functionality and was generally well tolerated. PMID:25687662
NASA Astrophysics Data System (ADS)
Judson, Richard S.; Rabitz, Herschel
1987-04-01
The relationship between structure in the potential surface and classical mechanical observables is examined by means of functional sensitivity analysis. Functional sensitivities provide maps of the potential surface, highlighting those regions that play the greatest role in determining the behavior of observables. A set of differential equations for the sensitivities of the trajectory components are derived. These are then solved using a Green's function method. It is found that the sensitivities become singular at the trajectory turning points, with the singularities going as η^(-3/2), where η is the distance from the nearest turning point. The sensitivities are zero outside of the energetically and dynamically allowed region of phase space. A second set of equations is derived from which the sensitivities of observables can be directly calculated. An adjoint Green's function technique is employed, providing an efficient method for numerically calculating these quantities. Sensitivity maps are presented for a simple collinear atom-diatom inelastic scattering problem and for two Henon-Heiles type Hamiltonians modeling intramolecular processes. It is found that the positions of the trajectory caustics in the bound state problem determine regions of the highest potential surface sensitivities. In the scattering problem (which is impulsive, so that "sticky" collisions did not occur), the positions of the turning points of the individual trajectory components determine the regions of high sensitivity. In both cases, these lines of singularities are superimposed on a rich background structure. Most interesting is the appearance of classical interference effects. The interference features in the sensitivity maps occur most noticeably where two or more lines of turning points cross.
The important practical motivation for calculating the sensitivities derives from the fact that the potential is a function, implying that any direct attempt to understand how local potential regions affect the behavior of the observables by repeatedly and systematically altering the potential will be prohibitively expensive. The functional sensitivity method enables one to perform this analysis at a fraction of the computational labor required for the direct method.
ASIC For Complex Fixed-Point Arithmetic
NASA Technical Reports Server (NTRS)
Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.
1995-01-01
Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.
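To make "24-bit fixed-point complex arithmetic" concrete, here is a software model of a complex multiply in a Q1.23 format (the format choice and rounding behavior are assumptions for illustration; the abstract does not specify the ASIC's datapath):

```python
FRAC_BITS = 23                      # Q1.23: 1 sign bit, 23 fractional bits
SCALE = 1 << FRAC_BITS
LO, HI = -(1 << 23), (1 << 23) - 1  # representable integer range

def to_fix(x):
    """Quantize a float in [-1, 1) to a 24-bit fixed-point integer."""
    return max(LO, min(HI, int(round(x * SCALE))))

def fix_cmul(ar, ai, br, bi):
    """(ar + j*ai) * (br + j*bi) with products rescaled back to Q1.23
    and saturated; an illustrative model, not the ASIC's exact datapath."""
    re = (ar * br - ai * bi) >> FRAC_BITS
    im = (ar * bi + ai * br) >> FRAC_BITS
    return (max(LO, min(HI, re)), max(LO, min(HI, im)))

# (0.5 + 0.5j) * (0.5 - 0.5j) = 0.5 + 0j
re, im = fix_cmul(to_fix(0.5), to_fix(0.5), to_fix(0.5), to_fix(-0.5))
assert abs(re / SCALE - 0.5) < 1e-6 and abs(im / SCALE) < 1e-6
```

This complex multiply-accumulate pattern is exactly the inner operation of an FFT butterfly, which is why a wide-band fixed-point complex ALU suits FFT and digital filtering workloads.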
Effects of directional uncertainty on visually-guided joystick pointing.
Berryhill, Marian; Kveraga, Kestutis; Hughes, Howard C
2005-02-01
Reaction times generally follow the predictions of Hick's law as stimulus-response uncertainty increases, although notable exceptions include the oculomotor system. Saccadic and smooth pursuit eye movement reaction times are independent of stimulus-response uncertainty. Previous research showed that joystick pointing to targets, a motor analog of saccadic eye movements, is only modestly affected by increased stimulus-response uncertainty; however, a no-uncertainty condition (simple reaction time to 1 possible target) was not included. Here, we re-evaluate manual joystick pointing including a no-uncertainty condition. Analysis indicated simple joystick pointing reaction times were significantly faster than choice reaction times. Choice reaction times (2, 4, or 8 possible target locations) only slightly increased as the number of possible targets increased. These data suggest that, as with joystick tracking (a motor analog of smooth pursuit eye movements), joystick pointing is more closely approximated by a simple/choice step function than the log function predicted by Hick's law.
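The contrast the study draws can be sketched numerically: Hick's law predicts reaction time rising with the logarithm of the number of alternatives, whereas the joystick-pointing data look more like a step from a fast simple-RT level to a nearly flat choice-RT level. The intercept and slope below are hypothetical, for illustration only.

```python
import math

def hick_rt(n_alternatives, a=0.2, b=0.1):
    """Hick's law: RT = a + b * log2(N + 1), with hypothetical
    intercept a (seconds) and slope b (seconds per bit)."""
    return a + b * math.log2(n_alternatives + 1)

# Under Hick's law, RT keeps rising with set size...
rts = [hick_rt(n) for n in (1, 2, 4, 8)]
assert all(later > earlier for earlier, later in zip(rts, rts[1:]))

# ...whereas a simple/choice step function is flat across choice set sizes
step_rt = lambda n: 0.25 if n == 1 else 0.35
assert step_rt(2) == step_rt(4) == step_rt(8)
```

The step model captures the paper's finding: a significant simple-vs-choice difference, but only a slight increase from 2 to 8 possible targets.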
Nash points, Ky Fan inequality and equilibria of abstract economies in Max-Plus and -convexity
NASA Astrophysics Data System (ADS)
Briec, Walter; Horvath, Charles
2008-05-01
-convexity was introduced in [W. Briec, C. Horvath, -convexity, Optimization 53 (2004) 103-127]. Separation and Hahn-Banach like theorems can be found in [G. Adilov, A.M. Rubinov, -convex sets and functions, Numer. Funct. Anal. Optim. 27 (2006) 237-257] and [W. Briec, C.D. Horvath, A. Rubinov, Separation in -convexity, Pacific J. Optim. 1 (2005) 13-30]. We show here that all the basic results related to fixed point theorems are available in -convexity. Ky Fan inequality, existence of Nash equilibria and existence of equilibria for abstract economies are established in the framework of -convexity. Monotone analysis, or analysis on Maslov semimodules [V.N. Kolokoltsov, V.P. Maslov, Idempotent Analysis and Its Applications, Math. Appl., vol. 401, Kluwer Academic, 1997; V.P. Litvinov, V.P. Maslov, G.B. Shpitz, Idempotent functional analysis: An algebraic approach, Math. Notes 69 (2001) 696-729; V.P. Maslov, S.N. Samborski (Eds.), Idempotent Analysis, Advances in Soviet Mathematics, Amer. Math. Soc., Providence, RI, 1992], is the natural framework for these results. From this point of view Max-Plus convexity and -convexity are isomorphic Maslov semimodule structures over isomorphic semirings. Therefore all the results of this paper hold in the context of Max-Plus convexity.
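For readers unfamiliar with the Max-Plus setting the abstract works over: the max-plus semiring replaces ordinary addition by max and ordinary multiplication by +. A small sketch of the resulting "linear algebra" (purely illustrative of the algebraic structure, not of the paper's fixed point results):

```python
NEG_INF = float('-inf')   # additive identity of the max-plus semiring

def maxplus_matmul(A, B):
    """Matrix product over the max-plus semiring: 'addition' is max and
    'multiplication' is +. Illustrates the semimodule structure only."""
    n, k, m = len(A), len(B), len(B[0])
    return [[max(A[i][t] + B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

A = [[0, 3], [2, 1]]
B = [[1, 0], [4, 2]]
# Entry (0, 0): max(0 + 1, 3 + 4) = 7
assert maxplus_matmul(A, B) == [[7, 5], [5, 3]]
```

Max-plus "convex combinations" are built from these two operations in place of the usual ones, which is the sense in which Max-Plus convexity is a Maslov semimodule structure.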
Bayesian Inference for Functional Dynamics Exploring in fMRI Data.
Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing
2016-01-01
This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, Bayesian Magnitude Change Point Model (BMCPM), Bayesian Connectivity Change Point Model (BCCPM), and Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come.
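The magnitude change-point idea underlying models like BMCPM can be caricatured with a far simpler profile-likelihood scan: score every candidate split of a series into two constant-mean segments and keep the best one. This sketch is a minimal stand-in for intuition, not any of the Bayesian models reviewed.

```python
def best_change_point(x):
    """Score every candidate split of a 1-D series into two constant-mean
    segments and return the split minimizing the total residual sum of
    squares. A minimal stand-in for the magnitude change-point idea."""
    def rss(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    n = len(x)
    return min(range(1, n), key=lambda t: rss(x[:t]) + rss(x[t:]))

# Mean jumps from 0 to 3 at index 30
series = [0.0] * 30 + [3.0] * 30
assert best_change_point(series) == 30
```

The Bayesian models in the review go further by placing priors over the number and locations of change points and over the functional interaction patterns within each segment, yielding full posteriors rather than a single best split.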
N-point statistics of large-scale structure in the Zel'dovich approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin, E-mail: tassev@astro.princeton.edu
2014-06-01
Motivated by the results presented in a companion paper, here we give a simple analytical expression for the matter n-point functions in the Zel'dovich approximation (ZA) both in real and in redshift space (including the angular case). We present numerical results for the 2-dimensional redshift-space correlation function, as well as for the equilateral configuration for the real-space 3-point function. We compare those to the tree-level results. Our analysis is easily extendable to include Lagrangian bias, as well as higher-order perturbative corrections to the ZA. The results should be especially useful for modelling probes of large-scale structure in the linear regime, such as the Baryon Acoustic Oscillations. We make the numerical code used in this paper freely available.
Fiat lux! Phylogeny and bioinformatics shed light on GABA functions in plants.
Renault, Hugues
2013-06-01
The non-protein amino acid γ-aminobutyric acid (GABA) accumulates in plants in response to a wide variety of environmental cues. Recent data point toward an involvement of GABA in tricarboxylic acid (TCA) cycle activity and respiration, especially in stressed roots. To gain further insights into potential GABA functions in plants, phylogenetic and bioinformatic approaches were undertaken. Phylogenetic reconstruction of the GABA transaminase (GABA-T) protein family revealed the monophyletic nature of plant GABA-Ts. However, this analysis also pointed to the common origin of several plant aminotransferases families, which were found more similar to plant GABA-Ts than yeast and human GABA-Ts. A computational analysis of AtGABA-T co-expressed genes was performed in roots and in stress conditions. This second approach uncovered a strong connection between GABA metabolism and glyoxylate cycle during stress. Both in silico analyses open new perspectives and hypotheses for GABA metabolic functions in plants.
Study on the stability of adrenaline and on the determination of its acidity constants
NASA Astrophysics Data System (ADS)
Corona-Avendaño, S.; Alarcón-Angeles, G.; Rojas-Hernández, A.; Romero-Romo, M. A.; Ramírez-Silva, M. T.
2005-01-01
In this work, results are presented concerning the influence of time on the spectral behaviour of adrenaline (C9H13NO3) (AD) and the determination of its acidity constants by means of spectrophotometric titrations and point-by-point analysis, using for the latter freshly prepared samples for each analysis at every single pH. As catecholamines are sensitive to light, all samples were protected against it during the course of the experiments. Each method yielded four acidity constants, corresponding to the four acid protons of the functional groups present in the molecule; for the point-by-point analysis the values found were: log β1 = 38.25 ± 0.21, log β2 = 29.65 ± 0.17, log β3 = 21.01 ± 0.14, log β4 = 11.34 ± 0.071.
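Assuming the reported log β values are cumulative formation constants (an interpretation not stated explicitly in the abstract), the stepwise constants for the individual protons follow as successive differences of consecutive log β values:

```python
# Assuming the log(beta) values are *cumulative* formation constants,
# each stepwise constant is a difference of consecutive log(beta) values;
# this interpretation is an assumption, not stated in the abstract.
log_betas = [38.25, 29.65, 21.01, 11.34]

stepwise = [log_betas[i] - log_betas[i + 1] for i in range(len(log_betas) - 1)]
stepwise.append(log_betas[-1])      # last step is log(beta4) itself

assert [round(k, 2) for k in stepwise] == [8.60, 8.64, 9.67, 11.34]
```

Under that assumption, the four steps span roughly 8.6 to 11.3 log units, consistent with the weakly acidic phenolic, amino, and hydroxyl protons of a catecholamine.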
Isovector charges of the nucleon from 2 + 1 -flavor QCD with clover fermions
Yoon, Boram; Jang, Yong -Chull; Gupta, Rajan; ...
2017-04-13
We present high-statistics estimates of the isovector charges of the nucleon from four 2+1-flavor ensembles generated using Wilson-clover fermions with stout smearing and tree-level tadpole-improved Symanzik gauge action at lattice spacings $a = 0.114$ and $0.080$ fm and with $M_\pi \approx 315$ and 200 MeV. The truncated solver method with bias correction and the coherent source sequential propagator construction are used to cost-effectively achieve $O(10^5)$ measurements on each ensemble. Using these data, the analysis of two-point correlation functions is extended to include four states in the fits, and that of three-point functions to three states. Control over excited-state contamination in the calculation of the nucleon mass, the mass gaps between excited states, and the matrix elements is demonstrated by the consistency of estimates using this multistate analysis of the spectral decomposition of the correlation functions and from simulations of the three-point functions at multiple values of the source-sink separation. Lastly, the results for all three charges, $g_A$, $g_S$ and $g_T$, are in good agreement with calculations done using the clover-on-HISQ lattice formulation with similar values of the lattice parameters.
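The multistate fits mentioned above rest on the spectral decomposition of a Euclidean two-point correlator, C(t) = Σₙ Aₙ e^(−Eₙ t). A generic two-state sketch on synthetic data (not the paper's lattice data or fitting code):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_state(t, A0, E0, A1, E1):
    """Two-state spectral decomposition of a two-point correlator."""
    return A0 * np.exp(-E0 * t) + A1 * np.exp(-E1 * t)

# Synthetic correlator with a known ground and excited state
t = np.arange(1, 20, dtype=float)
C = two_state(t, 1.0, 0.5, 0.6, 1.2)

popt, _ = curve_fit(two_state, t, C, p0=[0.8, 0.4, 0.4, 1.0])
E0_fit = min(popt[1], popt[3])      # ground-state energy is the smaller E
assert abs(E0_fit - 0.5) < 1e-2
```

Adding more exponential terms (four states for two-point, three for three-point functions, as in the paper) follows the same pattern, with the fit stability across source-sink separations serving as the check on excited-state contamination.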
ERIC Educational Resources Information Center
Carr, Patrick L.
2017-01-01
This dissertation analyzes how North Carolina State University's (NCSU) James B. Hunt Jr. Library extends the ways in which the information architectures of academic research libraries can function as a technology, as discourse, and as rhetoric. The starting point for the analysis is the libraries of antiquity, which functioned technologically as…
González-José, Rolando; Charlin, Judith
2012-01-01
The specific use of different prehistoric weapons is mainly determined by their physical properties, which provide a relative advantage or disadvantage in performing a given, particular function. Since these physical properties are integrated to accomplish that function, examining design variables and their pattern of integration or modularity is of interest when estimating the past function of a point. Here we analyze a composite sample of lithic points from southern Patagonia, likely formed by arrows, thrown spears, and hand-held points, to test whether they can be viewed as a two-module system formed by the blade and the stem, and to evaluate the degree to which shape, size, asymmetry, blade:stem length ratio, and tip angle explain the observed variance and differentiation among points supposedly aimed at accomplishing different functions. To do so we performed a geometric morphometric analysis on 118 lithic points, departing from 24 two-dimensional landmarks and semi-landmarks placed on the point's contour. Klingenberg's covariational modularity tests were used to evaluate different modularity hypotheses, and a composite PCA including shape, size, asymmetry, blade:stem length ratio, and tip angle was used to estimate the importance of each attribute in explaining variation patterns. Results show that the blade and the stem can be seen as "near decomposable units" in the points comprising the studied sample. However, this modular pattern changes after removing the effects of reduction. Indeed, a resharpened point tends to show a tip/rest-of-the-point modular pattern. The composite PCA analyses evidenced three different patterns of morphometric attributes compatible with arrows, thrown spears, and hand-held tools. Interestingly, when analyzed independently, these groups show differences in their modular organization.
Our results indicate that stone tools can be approached as flexible designs, characterized by a composite set of interacting morphometric attributes, and evolving in a modular way. PMID:23094104
Attention, Task Difficulty, and ADHD
ERIC Educational Resources Information Center
Nigg, Joel T.
2005-01-01
Comments on the analysis of attention tasks in Attention Deficit Hyperactivity Disorder (ADHD) provided by Wilding (2005). Points out that whereas many regulatory functions, including alertness or arousal, appear to be impaired in ADHD, demonstrating basic attention deficits in selection or orienting functions in the disorder has proven difficult. Yet…
[Analysis of ancient literature on baliao points for pelvic floor diseases].
Liu, Hairong; Zhang, Jianbin
2016-12-12
The relationship between the baliao points and pelvic floor diseases was explored through a review of the ancient literature on these acupoints' targeted diseases. It is considered that the baliao points are applied to treat various pelvic floor diseases and symptoms across different systems. Each point has a similar function but with unique features: Shangliao (BL 31) is mainly used to treat gynecologic diseases; Ciliao (BL 32) and Zhongliao (BL 33), urologic and reproductive system diseases; Zhongliao (BL 33) and Xialiao (BL 34), reproductive system and anorectal diseases.
Pointing Gestures as a Cognitive Tool in Young Children: Experimental Evidence
ERIC Educational Resources Information Center
Delgado, Begona; Gomez, Juan Carlos; Sarria, Encarnacion
2011-01-01
This article explores the possible cognitive function associated with pointing gestures from a Vygotskian perspective. In Study 1, 39 children who were 2-4 years of age were observed in a solitary condition while solving a mnemonic task with or without an explicit memory demand. A discriminant analysis showed that children used noncommunicative…
HYBRID NEURAL NETWORK AND SUPPORT VECTOR MACHINE METHOD FOR OPTIMIZATION
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor)
2005-01-01
System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.
Hybrid Neural Network and Support Vector Machine Method for Optimization
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor)
2007-01-01
System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.
Nadzirin, Nurul; Firdaus-Raih, Mohd
2012-10-08
Proteins of uncharacterized function form a large part of many currently available biological databases, and this situation exists even in the Protein Data Bank (PDB). Our analysis of recent PDB data revealed that only 42.53% (1084 coordinate files) of the PDB entries categorized under "unknown function" are true examples of proteins of unknown function at this point in time. The remaining 1465 entries annotated as such appear eligible for annotation re-assessment, based on the availability of direct functional characterization experiments for the protein itself, or for homologous sequences or structures, thus enabling computational function inference.
NASA Astrophysics Data System (ADS)
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces such as B-splines are one possible approach to obtain space-continuous information. A variety of parameters determines a B-spline's appearance; its complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
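As an illustration of the information criteria this abstract compares, here is a minimal sketch (not from the paper) of selecting a model's parameter count by AIC and BIC. It assumes Gaussian residuals, so each criterion reduces to n·ln(RSS/n) plus a complexity penalty; the residual sums of squares below are hypothetical.

```python
import math

def aic(n, rss, k):
    """Akaike Information Criterion under a Gaussian residual assumption."""
    return n * math.log(rss / n) + 2 * k

def bic(n, rss, k):
    """Bayesian Information Criterion; its penalty grows with sample size."""
    return n * math.log(rss / n) + k * math.log(n)

# Hypothetical residual sums of squares for fits with k = 1..6 control points:
# large gains up to k = 3, then diminishing returns
n = 50
rss_by_k = {1: 120.0, 2: 60.0, 3: 20.5, 4: 19.8, 5: 19.5, 6: 19.4}

best_aic = min(rss_by_k, key=lambda k: aic(n, rss_by_k[k], k))
best_bic = min(rss_by_k, key=lambda k: bic(n, rss_by_k[k], k))
print(best_aic, best_bic)
```

With these numbers both criteria pick k = 3; because ln(50) > 2, BIC penalizes extra control points more heavily and will never prefer a more complex model than AIC does.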
Goede, Simon L; Leow, Melvin Khee-Shing
2013-01-01
This treatise investigates error sources in measurements of the hypothalamus-pituitary-thyroid (HPT) system that bear on homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variations in [TSH]; (2) TFT measurement variations influenced by the timing of thyroid medications; (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent); (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range; and (5) memory effects (rate-independent hysteresis). When the main uncertainties in TFTs are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.
Tyson, Mark Douglas; Koyama, Tatsuki; Lee, Dan; Hoffman, Karen E; Resnick, Matthew J; Wu, Xiao-Cheng; Cooperberg, Matthew R; Goodman, Michael; Greenfield, Sheldon; Hamilton, Ann S; Hashibe, Mia; Paddock, Lisa E; Stroup, Antoinette; Chen, Vivien; Conwill, Ralph; McCollum, Dan; Penson, David F; Barocas, Daniel A
2018-07-01
Whether prostate cancer severity modifies patient-reported functional outcomes after radical prostatectomy (RP) or external beam radiotherapy (EBRT) for localized cancer is unknown. The purpose of this study was to determine whether differences in predicted function over time between RP and EBRT varied by risk group. The Comparative Effectiveness Analysis of Surgery and Radiation (CEASAR) study is a prospective, population-based, observational study that enrolled men with localized prostate cancer in 2011-2012. Among 2117 CEASAR participants who underwent RP or EBRT, 817 had low-risk, 902 intermediate-risk, and 398 high-risk disease. Patient-reported, disease-specific function was measured using the 26-item Expanded Prostate Index Composite (at baseline and 6, 12, and 36 mo). Predicted function was estimated using regression models and compared by disease risk. Low-risk EBRT patients reported 3-yr sexual function scores 12 points higher than those of low-risk RP patients (RP, 39 points [95% confidence interval {CI}, 37-42] vs EBRT, 52 points [95% CI, 47-56]; p<0.001). The difference in 3-yr scores for high-risk patients was not clinically significant (RP, 32 points [95% CI, 28-35] vs EBRT, 38 points [95% CI, 33-42]; p=0.03). However, when using a commonly used binary definition of sexual function (erections firm enough for intercourse), no major differences were noted between RP and EBRT at 3 yr across low-, intermediate-, and high-risk disease strata. No clinically significant interactive effects between treatment and cancer severity were observed for incontinence, bowel, irritative voiding, and hormone domains. The primary limitation is the lack of firmly established thresholds for clinically significant differences in Expanded Prostate Index Composite domain scores. 
For men with low-risk prostate cancer, EBRT was associated with higher sexual function scores at 3 yr than RP; however, for men with high-risk prostate cancer, no clinically significant difference was noted. Men with high-risk prostate cancer should be counseled that EBRT and RP carry similar sexual function outcomes at 3 yr. In this report, we studied the urinary, sexual, bowel, and hormonal functions of patients 3 yr after undergoing prostate cancer surgery or radiation. We found that for patients with high-risk disease, sexual function was similar between surgery and radiation. We conclude that high-risk patients undergoing radiation therapy should be counseled that their sexual function may not be as good as that of low-risk patients undergoing radiation. Copyright © 2018 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Muñoz–Negrete, Francisco J.; Oblanca, Noelia; Rebolleda, Gema
2018-01-01
Purpose: To study the structure-function relationship in glaucoma and healthy patients assessed with Spectralis OCT and Humphrey perimetry using new statistical approaches. Materials and Methods: Eighty-five eyes were prospectively selected and divided into 2 groups: glaucoma (44) and healthy patients (41). Three different statistical approaches were carried out: (1) factor analysis of the threshold sensitivities (dB) (automated perimetry) and the macular thickness (μm) (Spectralis OCT), subsequently applying Pearson's correlation to the obtained regions; (2) nonparametric regression analysis relating the values in each pair of regions that showed significant correlation; and (3) nonparametric spatial regressions using three models designed for the purpose of this study. Results: In the glaucoma group, a map that relates structural and functional damage was drawn. The strongest correlation with visual fields was observed in the peripheral nasal region of both superior and inferior hemigrids (r = 0.602 and r = 0.458, respectively). The estimated functions obtained with the nonparametric regressions provided the mean sensitivity that corresponds to each given macular thickness. These functions allowed for accurate characterization of the structure-function relationship. Conclusions: Both maps and point-to-point functions obtained linking structure and function damage contribute to a better understanding of this relationship and may help in the future to improve glaucoma diagnosis. PMID:29850196
NASA Astrophysics Data System (ADS)
Yang, Hongxin; Su, Fulin
2018-01-01
We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the error matching pairs. After that, the target centroid is detected by regular moment. Consequently, a cost function based on correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
NASA Technical Reports Server (NTRS)
Lim, Sang G.; Brewe, David E.; Prahl, Joseph M.
1990-01-01
The transient analysis of hydrodynamic lubrication of a point-contact is presented. A body-fitted coordinate system is introduced to transform the physical domain to a rectangular computational domain, enabling the use of the Newton-Raphson method for determining pressures and locating the cavitation boundary, where the Reynolds boundary condition is specified. In order to obtain the transient solution, an explicit Euler method is used to effect a time march. The transient dynamic load is a sinusoidal function of time with frequency, fractional loading, and mean load as parameters. Results include the variation of the minimum film thickness and phase-lag with time as functions of excitation frequency. The results are compared with the analytic solution to the transient step bearing problem with the same dynamic loading function. The similarities of the results suggest an approximate model of the point contact minimum film thickness solution.
Fujita, Takaaki; Sato, Atsushi; Tsuchiya, Kenji; Ohashi, Takuro; Yamane, Kazuhiro; Yamamoto, Yuichi; Iokawa, Kazuaki; Ohira, Yoko; Otsuki, Koji; Tozato, Fusae
2017-12-01
This study aimed to elucidate the relationship between the grooming performance of stroke patients and various motor and cognitive functions, and to examine the cognitive and physical functional standards required for grooming independence. We retrospectively analyzed the data of 96 hospitalized patients with first stroke in a rehabilitation hospital ward. Logistic regression analysis and receiver operating characteristic curves were used to investigate the cognitive and motor functions related to grooming performance and to calculate the cutoff values for independence and supervision levels in grooming. For the analysis between the independent and supervision-dependent groups, the only item with an area under the curve (AUC) of .9 or higher was the Berg Balance Scale, and the calculated cutoff value was 41/40 (sensitivity, 83.6%; specificity, 87.8%). For the analysis between the independent-supervision and dependent groups, the items with an AUC of .9 or higher were the Simple Test for Evaluating Hand Function (STEF) on the nonaffected side, Vitality Index (VI), and FIM® cognition. The cutoff values were 68/67 for the STEF (sensitivity, 100%; specificity, 72.2%), 9/8 points for the VI (sensitivity, 92.3%; specificity, 88.9%), and 23/22 points for FIM® cognition (sensitivity, 91.0%; specificity, 88.9%). Our results suggest that upper-extremity function on the nonaffected side, motivation, and cognitive functions are particularly important for achieving the supervision level, and that balance is important for reaching the independence level. Effective improvement of grooming performance is possible by applying therapeutic or compensatory interventions to functions that have not reached these cutoff values. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
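The cutoff-selection step described above can be sketched with a Youden-index search over candidate thresholds. This is a generic illustration with hypothetical Berg Balance Scale scores, not the study's data, and the study may have used a different ROC criterion.

```python
def youden_cutoff(positives, negatives):
    """Return (cutoff, J): the threshold maximizing sensitivity + specificity - 1.
    Scores at or above the cutoff are classified positive."""
    best_c, best_j = None, -1.0
    for c in sorted(set(positives + negatives)):
        sens = sum(p >= c for p in positives) / len(positives)
        spec = sum(n < c for n in negatives) / len(negatives)
        j = sens + spec - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

# Hypothetical Berg Balance Scale scores (illustrative only)
independent = [41, 45, 52, 48, 44, 39, 50]
dependent = [20, 33, 38, 25, 40, 30, 36]
cutoff, j = youden_cutoff(independent, dependent)
print(cutoff, round(j, 3))
```

Each candidate threshold is evaluated once against both groups, so the search is O(n²) in the number of subjects, which is negligible at clinical-study sizes.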
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grzybowska, Ewa A., E-mail: ewag@coi.waw.pl
2012-07-20
Highlights: • Functional characteristics of intronless genes (IGs). • Diseases associated with IGs. • Origin and evolution of IGs. • mRNA processing without splicing. -- Abstract: Intronless genes (IGs) constitute approximately 3% of the human genome. Human IGs are essentially different in evolution and functionality from the IGs of unicellular eukaryotes, which represent the majority in their genomes. Functional analysis of IGs has revealed a massive over-representation of signal transduction genes and genes encoding regulatory proteins important for growth, proliferation, and development. IGs also often display tissue-specific expression, usually in the nervous system and testis. These characteristics translate into IG-associated diseases, mainly neuropathies, developmental disorders, and cancer. IGs represent recent additions to the genome, created mostly by retroposition of processed mRNAs with retained functionality. Processing, nuclear export, and translation of these mRNAs should be hampered dramatically by the lack of splice factors, which normally tightly cover mature transcripts and govern their fate. However, natural IGs manage to maintain satisfactory expression levels. Different mechanisms by which IGs solve the problem of mRNA processing and nuclear export are discussed here, along with their possible impact on reporter studies.
Analysis of Screen Channel LAD Bubble Point Tests in Liquid Methane at Elevated Temperature
NASA Technical Reports Server (NTRS)
Hartwig, Jason; McQuillen, John
2012-01-01
This paper examines the effect of varying liquid temperature and pressure on the bubble point pressure for screen channel Liquid Acquisition Devices (LADs) in cryogenic liquid methane, using gaseous helium across a wide range of elevated pressures and temperatures. Testing of a 325 x 2300 Dutch Twill screen sample was conducted in the Cryogenic Components Lab 7 facility at the NASA Glenn Research Center in Cleveland, Ohio. Test conditions ranged from 105 to 160 K and 0.0965 to 1.78 MPa. Bubble point is shown to be a strong function of the liquid temperature and a weak function of the amount of subcooling at the LAD screen. The model predicts well for saturated liquid but underpredicts the subcooled data.
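The temperature dependence reported above follows from the Young-Laplace relation ΔP = 4σcosθ/Dₚ that governs a screen's bubble point. The sketch below uses assumed, illustrative surface-tension values and an assumed effective pore diameter, not the paper's measurements.

```python
def bubble_point(sigma, d_pore, cos_theta=1.0):
    """Young-Laplace bubble point pressure (Pa).
    sigma: liquid surface tension (N/m); d_pore: effective pore diameter (m)."""
    return 4.0 * sigma * cos_theta / d_pore

# Assumed effective pore diameter for a 325 x 2300 Dutch twill screen (illustrative)
d = 5e-6
# Illustrative surface tensions: liquid methane's sigma falls as temperature
# rises, so the bubble point weakens toward the warm end of the tested range.
for temp_K, sigma in [(105, 0.0155), (130, 0.0105), (160, 0.0050)]:
    print(temp_K, "K:", bubble_point(sigma, d), "Pa")
```

This also shows why bubble point is mainly a function of liquid temperature at the screen: σ enters linearly while the pore geometry is fixed.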
van Eijk, Ruben PA; Eijkemans, Marinus JC; Rizopoulos, Dimitris
2018-01-01
Objective Amyotrophic lateral sclerosis (ALS) clinical trials based on single end points only partially capture the full treatment effect when both function and mortality are affected, and may falsely dismiss efficacious drugs as futile. We aimed to investigate the statistical properties of several strategies for the simultaneous analysis of function and mortality in ALS clinical trials. Methods Based on the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database, we simulated longitudinal patterns of functional decline, defined by the revised amyotrophic lateral sclerosis functional rating scale (ALSFRS-R) and conditional survival time. Different treatment scenarios with varying effect sizes were simulated with follow-up ranging from 12 to 18 months. We considered the following analytical strategies: 1) Cox model; 2) linear mixed effects (LME) model; 3) omnibus test based on Cox and LME models; 4) composite time-to-6-point decrease or death; 5) combined assessment of function and survival (CAFS); and 6) test based on joint modeling framework. For each analytical strategy, we calculated the empirical power and sample size. Results Both Cox and LME models have increased false-negative rates when treatment exclusively affects either function or survival. The joint model has superior power compared to other strategies. The composite end point increases false-negative rates among all treatment scenarios. To detect a 15% reduction in ALSFRS-R decline and 34% decline in hazard with 80% power after 18 months, the Cox model requires 524 patients, the LME model 794 patients, the omnibus test 526 patients, the composite end point 1,274 patients, the CAFS 576 patients and the joint model 464 patients. Conclusion Joint models have superior statistical power to analyze simultaneous effects on survival and function and may circumvent pitfalls encountered by other end points. 
Optimizing trial end points is essential, as selecting suboptimal outcomes may disguise important treatment clues. PMID:29593436
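The "empirical power" the authors compute for each strategy is, generically, the fraction of simulated trials in which the test statistic crosses its critical value. A minimal sketch under assumptions far simpler than the PRO-ACT simulations (a two-sample z-test on normal outcomes, normal-approximation critical value):

```python
import random
import statistics

def empirical_power(effect, n, sims=200, crit=1.96, seed=7):
    """Monte Carlo power estimate: fraction of simulated two-arm trials whose
    two-sample z statistic exceeds the critical value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(effect, 1.0) for _ in range(n)]
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        z = (statistics.mean(b) - statistics.mean(a)) / se
        hits += abs(z) > crit
    return hits / sims

# A 0.5-SD treatment effect with 100 patients per arm
print(empirical_power(0.5, 100))
```

Sample-size curves like those quoted in the abstract come from repeating such simulations over a grid of n until the estimated power reaches the target (e.g., 80%).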
Automated analysis of plethysmograms for functional studies of hemodynamics
NASA Astrophysics Data System (ADS)
Zatrudina, R. Sh.; Isupov, I. B.; Gribkov, V. Yu.
2018-04-01
The most promising method for the quantitative determination of indicators of cardiovascular tone and of cerebral hemodynamics is impedance plethysmography. The accurate determination of these indicators requires the correct identification of the characteristic points in the thoracic and cranial impedance plethysmograms, respectively. An algorithm for automatic analysis of these plethysmograms is presented. The algorithm is based on the strict temporal relationships between the phases of the cardiac cycle and the characteristic points of the plethysmogram. The proposed algorithm does not require estimation of initial data or selection of processing parameters. Application of the method to healthy subjects showed a very low detection error for the characteristic points.
An improved local radial point interpolation method for transient heat conduction analysis
NASA Astrophysics Data System (ADS)
Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang
2013-06-01
The smoothing thin plate spline (STPS) interpolation, using the penalty function method according to optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three numerical examples are presented to demonstrate the validity and accuracy of the present approach compared with traditional thin plate spline (TPS) radial basis functions.
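A stripped-down illustration of the radial point interpolation family this method belongs to: solve Kw = f on scattered nodes, then evaluate Σᵢ wᵢφ(‖x − xᵢ‖). For brevity this sketch uses a Gaussian kernel, which yields a positive-definite system, rather than the paper's smoothed thin plate spline with penalty term; the nodes and values are made up.

```python
import math

def phi(r, eps=0.8):
    """Gaussian radial basis kernel (positive definite for distinct nodes)."""
    return math.exp(-(r / eps) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_interpolant(pts, vals):
    """Fit weights so that sum_i w_i * phi(|x - x_i|) reproduces vals at pts."""
    A = [[phi(math.dist(p, q)) for q in pts] for p in pts]
    w = solve(A, vals)
    return lambda x: sum(wi * phi(math.dist(x, p)) for wi, p in zip(w, pts))

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.3)]
vals = [0.0, 1.0, 1.0, 2.0, 0.7]
f = rbf_interpolant(pts, vals)
```

Because the interpolation conditions are enforced exactly, the resulting shape functions have the Kronecker delta property mentioned in the abstract, which is what lets essential boundary conditions be imposed directly.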
How Root Cause Analysis Can Improve the Value Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wixson, James Robert
2002-05-01
Root cause analysis (RCA) is an important methodology that can be integrated with the VE Job Plan to generate superior results from the VE Methodology. The point at which RCA is most appropriate is after the function analysis and FAST model have been built and functions for improvement have been chosen. These functions are then subjected to a simple but rigorous RCA to get to the root cause of their deficiencies, whether high cost/poor value, poor quality, or poor reliability. Once the most probable causes of these problems have been identified, better solutions for improvement can be developed in the creativity phase because the team better understands the problems associated with these functions.
NASA Astrophysics Data System (ADS)
Suryanto, Agus; Darti, Isnani
2017-12-01
In this paper we discuss a fractional-order predator-prey model with ratio-dependent functional response and analyze its dynamical properties. We determine all equilibrium points of the model, including their existence conditions and stability properties. The model has two types of equilibria, namely the predator-free point and the co-existence point. If there is no co-existence equilibrium, i.e. when the coefficient of conversion from the functional response into the growth rate of the predator is less than the death rate of the predator, then the predator-free point is asymptotically stable. On the other hand, if the co-existence point exists then this equilibrium is conditionally stable. We also construct a nonstandard Grünwald-Letnikov (NSGL) numerical scheme for the proposed model. This scheme combines the Grünwald-Letnikov approximation with the nonstandard finite difference scheme. It is implemented in MATLAB and used to perform some simulations. The numerical solutions are shown to be consistent with the dynamical properties of the fractional predator-prey model.
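The Grünwald-Letnikov approximation underlying the NSGL scheme replaces the order-α derivative with a weighted history sum, D^α f(t) ≈ h^(−α) Σⱼ cⱼ f(t − jh), with binomial weights c₀ = 1, cⱼ = cⱼ₋₁(1 − (α+1)/j). A minimal sketch of just that approximation (the paper's full scheme also applies nonstandard denominator functions, not shown here):

```python
def gl_derivative(f, alpha, t, h=1e-3):
    """Grünwald-Letnikov approximation of the order-alpha derivative of f at t,
    assuming f is defined on [0, t]."""
    n = round(t / h)
    coeffs = [1.0]  # c_0 = 1; recurrence c_j = c_{j-1} * (1 - (alpha + 1) / j)
    for j in range(1, n + 1):
        coeffs.append(coeffs[-1] * (1.0 - (alpha + 1.0) / j))
    return sum(c * f(t - j * h) for j, c in enumerate(coeffs)) / h ** alpha

# Sanity check: alpha = 1 recovers the ordinary derivative, d/dt t^2 = 2t
print(gl_derivative(lambda t: t * t, 1.0, 1.0))  # close to 2.0
```

For α = 1 the weights collapse to (1, −1, 0, …), i.e. a backward difference, which is a convenient way to test an implementation before using fractional orders.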
A Point Spread Function for the EPOXI Mission
NASA Technical Reports Server (NTRS)
Barry, Richard K.
2010-01-01
The Extrasolar Planet Observation Characterization and the Deep Impact Extended Investigation missions (EPOXI) are currently observing the transits of exoplanets, two comet nuclei at short range, and the Earth and Mars using the High Resolution Instrument (HRI), a 0.3 m f/35 telescope on the Deep Impact probe. The HRI is in a permanently defocused state, with the instrument point of focus about 0.6 cm before the focal plane, due to the use of a reference flat mirror that took on optical power during ground thermal-vacuum testing. Consequently, the point spread function (PSF) covers approximately nine pixels FWHM and is characterized by a patch with three-fold symmetry due to the three-point support structures of the primary and secondary mirrors. The PSF is also strongly color dependent, varying in shape and size with changes in filtration and target color. While defocus is highly desirable for exoplanet transit observations to limit sensitivity to intra-pixel variation, it is suboptimal for observations of spatially resolved targets. Consequently, all images used in our analysis of such objects were deconvolved with an instrument PSF. The instrument PSF is also being used to optimize transit analysis. We discuss the development and usage of an instrument PSF for these observations.
Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS), phase 1
NASA Technical Reports Server (NTRS)
Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.
1986-01-01
The large-signal behaviors of a regulator depend largely on the type of power circuit topology and control. Thus, for maximum flexibility, it is best to develop models for each functional block as independent modules. A regulator can then be configured by collecting appropriate pre-defined modules for each functional block. In order to complete the component model generation for a comprehensive spacecraft power system, the following modules were developed: solar array switching unit and control; shunt regulators; and battery discharger. The capability of each module is demonstrated using a simplified Direct Energy Transfer (DET) system. Large-signal behaviors of solar array power systems were analyzed, and the stability of the solar array system operating points with a nonlinear load is examined. The state-plane analysis illustrates trajectories of the system operating point under various conditions. Stability and transient responses of the system operating near the solar array's maximum power point are also analyzed. The solar array system modes of operation are described using the DET spacecraft power system, and the DET system is simulated for various operating conditions. Transfer of the software program CAMAPPS (Computer-Aided Modeling and Analysis of Power Processing Systems) to NASA/GSFC (Goddard Space Flight Center) was accomplished.
Alizai, Patrick H; Haelsig, Annabel; Bruners, Philipp; Ulmer, Florian; Klink, Christian D; Dejong, Cornelis H C; Neumann, Ulf P; Schmeding, Maximilian
2018-01-01
Liver failure remains a life-threatening complication after liver resection, and is difficult to predict preoperatively. This retrospective cohort study evaluated different preoperative factors in regard to their impact on posthepatectomy liver failure (PHLF) after extended liver resection and previous portal vein embolization (PVE). Patient characteristics, liver function and liver volumes of patients undergoing PVE and subsequent liver resection were analyzed. Liver function was determined by the LiMAx test (enzymatic capacity of cytochrome P450 1A2). Factors associated with the primary end point PHLF (according to ISGLS definition) were identified through multivariable analysis. Secondary end points were 30-day mortality and morbidity. 95 patients received PVE, of which 64 patients underwent major liver resection. PHLF occurred in 7 patients (11%). Calculated postoperative liver function was significantly lower in patients with PHLF than in patients without PHLF (67 vs. 109 μg/kg/h; p = 0.01). Other factors associated with PHLF by univariable analysis were age, future liver remnant, MELD score, ASA score, renal insufficiency and heart insufficiency. By multivariable analysis, future liver remnant was the only factor significantly associated with PHLF (p = 0.03). Mortality and morbidity rates were 4.7% and 29.7% respectively. Future liver remnant is the only preoperative factor with a significant impact on PHLF. Assessment of preoperative liver function may additionally help identify patients at risk for PHLF.
NASA Astrophysics Data System (ADS)
Székely, Balázs; Kania, Adam; Varga, Katalin; Heilmeier, Hermann
2017-04-01
Lacunarity, a measure of the spatial distribution of empty space, is found to be a useful descriptive quantity of forest structure. Its calculation, based on laser-scanned point clouds, results in a four-dimensional data set. The evaluation of the results needs sophisticated tools and visualization techniques. To simplify the evaluation, it is straightforward to use approximation functions fitted to the results. The lacunarity function L(r), being a measure of scale-independent structural properties, has a power-law character. Previous studies showed that a log(log(L(r))) transformation is suitable for the analysis of spatial patterns. Accordingly, transformed lacunarity functions can be approximated by appropriate functions either in the original or in the transformed domain. As input data we have used a number of laser-scanned point clouds of various forests. The lacunarity distribution has been calculated along a regular horizontal grid at various (relative) elevations. The lacunarity data cube has then been logarithm-transformed, and the resulting values became the input of parameter estimation at each point of interest (POI). This way, at each POI a parameter set is generated that is suitable for spatial analysis. The expectation is that the horizontal variation and vertical layering of the vegetation can be characterized by this procedure. The results show that the transformed L(r) functions can typically be approximated by exponentials individually, and the residual values remain low in most cases. However, (1) in most cases the residuals may vary considerably, and (2) neighbouring POIs often give rather differing estimates in both horizontal and vertical directions, of which the vertical variation seems more characteristic. In the vertical sense, the distribution of estimates shows abrupt changes at places, presumably related to the vertical structure of the forest.
In low-relief areas horizontal similarity is more typical; in higher-relief areas horizontal similarity fades out over short distances. Some of the input data were acquired in the framework of the ChangeHabitats2 project financed by the European Union. BS contributed as an Alexander von Humboldt Research Fellow.
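For intuition about the lacunarity function L(r) discussed above, here is a minimal gliding-box sketch in one dimension (real forest analyses operate on 3-D point clouds). L(r) is the second moment of the box mass divided by the squared first moment, so clustered patterns score higher than evenly spread ones of equal density.

```python
def lacunarity(binary, r):
    """Gliding-box lacunarity L(r) for a 1-D binary occupancy sequence."""
    masses = [sum(binary[i:i + r]) for i in range(len(binary) - r + 1)]
    mean = sum(masses) / len(masses)
    second = sum(m * m for m in masses) / len(masses)
    return second / (mean * mean)

# Two sequences with identical density (8 occupied cells out of 16)
clustered = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
uniform = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
print(lacunarity(clustered, 4), lacunarity(uniform, 4))
```

The perfectly even pattern gives L(4) = 1 (every box holds the same mass), while the clustered one exceeds it; computing L over a range of r and log-transforming gives the curves fitted in the study.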
Pantoja, Joe Luis; Ge, Liang; Zhang, Zhihong; Morrel, William G; Guccione, Julius M; Grossi, Eugene A; Ratcliffe, Mark B
2014-10-01
The role of posterior papillary muscle anchoring (PPMA) in the management of chronic ischemic mitral regurgitation (CIMR) is controversial. We studied the effect of anchoring point direction and relocation displacement on left ventricular (LV) regional myofiber stress and pump function. Previously described finite element models of sheep 16 weeks after posterolateral myocardial infarction (MI) were used. True-sized mitral annuloplasty (MA) ring insertion plus different PPM anchoring techniques were simulated. Anchoring points tested included both commissures and the central anterior mitral annulus; relocation displacement varied from 10% to 40% of baseline diastolic distance from the PPM to the anchor points on the annulus. For each reconstruction scenario, myofiber stress in the MI, border zone, and remote myocardium as well as pump function were calculated. PPMA caused reductions in myofiber stress at end-diastole and end-systole in all regions of the left ventricle that were proportional to the relocation displacement. Although stress reduction was greatest in the MI region, it also occurred in the remote region. The maximum 40% displacement caused a slight reduction in LV pump function. However, with the correction of regurgitation by MA plus PPMA, there was an overall increase in forward stroke volume. Finally, anchoring point direction had no effect on myofiber stress or pump function. PPMA reduces remote myofiber stress, which is proportional to the absolute distance of relocation and independent of anchoring point. Aggressive use of PPMA techniques to reduce remote myofiber stress may accelerate reverse LV remodeling without impairing LV function. Copyright © 2014 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Chen, Derek E; Willick, Darryl L; Ruckel, Joseph B; Floriano, Wely B
2015-01-01
Directed evolution is a technique that enables the identification of mutants of a particular protein that carry a desired property by successive rounds of random mutagenesis, screening, and selection. This technique has many applications, including the development of G protein-coupled receptor-based biosensors and designer drugs for personalized medicine. Although effective, directed evolution is not without challenges and can greatly benefit from the development of computational techniques to predict the functional outcome of single-point amino acid substitutions. In this article, we describe a molecular dynamics-based approach to predict the effects of single amino acid substitutions on agonist binding (salicin) to a human bitter taste receptor (hT2R16). An experimentally determined functional map of single-point amino acid substitutions was used to validate the whole-protein molecular dynamics-based predictive functions. Molecular docking was used to construct a wild-type agonist-receptor complex, providing a starting structure for single-point substitution simulations. The effects of each single amino acid substitution in the functional response of the receptor to its agonist were estimated using three binding energy schemes with increasing inclusion of solvation effects. We show that molecular docking combined with molecular mechanics simulations of single-point mutants of the agonist-receptor complex accurately predicts the functional outcome of single amino acid substitutions in a human bitter taste receptor.
Sisk, Matthew L.; Shea, John J.
2011-01-01
Despite a body of literature focusing on the functionality of modern and stylistically distinct projectile points, comparatively little attention has been paid to quantifying the functionality of the early stages of projectile use. Previous work identified a simple ballistics measure, the Tip Cross-Sectional Area, as a way of determining if a given class of stone points could have served as effective projectile armatures. Here we use this in combination with an alternate measure, the Tip Cross-Sectional Perimeter, a more accurate proxy of the force needed to penetrate a target to a lethal depth. The current study discusses this measure and uses it to analyze a collection of measurements from African Middle Stone Age pointed stone artifacts. Several point types that were rejected in previous studies are statistically indistinguishable from ethnographic projectile points using this new measure. The ramifications of this finding for a Middle Stone Age origin of complex projectile technology are discussed. PMID:21755048
Object-oriented productivity metrics
NASA Technical Reports Server (NTRS)
Connell, John L.; Eller, Nancy
1992-01-01
Software productivity metrics are useful for sizing and costing proposed software and for measuring development productivity. Estimating and measuring source lines of code (SLOC) has proven to be a bad idea because it encourages writing more lines of code and using lower level languages. Function Point Analysis is an improved software metric system, but it is not compatible with newer rapid prototyping and object-oriented approaches to software development. A process is presented here for counting object-oriented effort points, based on a preliminary object-oriented analysis. It is proposed that this approach is compatible with object-oriented analysis, design, programming, and rapid prototyping. Statistics gathered on actual projects are presented to validate the approach.
Is the Derivative a Function? If So, How Do We Teach It?
ERIC Educational Resources Information Center
Park, Jungeun
2015-01-01
This study investigated features of instructors' classroom discourse on the derivative with the commognitive lens. The analysis focused on how three calculus instructors addressed the derivative as a point-specific value and as a function in the beginning lessons about the derivative. The results show that (a) the instructors frequently used…
NASA Astrophysics Data System (ADS)
Javadi, Maryam; Shahrabi, Jamal
2014-03-01
The problems of facility location and the allocation of demand points to facilities are crucial research issues in spatial data analysis and urban planning. It is very important for organizations or governments to best locate their resources and facilities and efficiently manage resources to ensure that all demand points are covered and all the needs are met. Most of the recent studies, which focused on solving facility location problems by performing spatial clustering, have used the Euclidean distance between two points as the dissimilarity function. Natural obstacles, such as mountains and rivers, can have drastic impacts on the distance that needs to be traveled between two geographical locations. While calculating the distance between various supply chain entities (including facilities and demand points), it is necessary to take such obstacles into account to obtain better and more realistic results regarding location-allocation. In this article, new models were presented for the location of urban facilities while considering geographical obstacles at the same time. In these models, three new distance functions were proposed. The first function was based on the analysis of the shortest path in a linear network, called the SPD function. The other two functions, namely PD and P2D, were based on algorithms that deal with robot geometry and route-based robot navigation in the presence of obstacles. The models were implemented in ArcGIS Desktop 9.2 software using the Visual Basic programming language. These models were evaluated using synthetic and real data sets. The overall performance was evaluated based on the sum of distances from demand points to their corresponding facilities. Because the proposed functions make the distances between demand points and facilities more realistic, the results indicated the desired quality of the proposed models in terms of the allocation of points to centers and logistic cost.
The obtained results show promising improvements in allocation, logistics costs, and response time. It can also be inferred from this study that the P2D-based model and the SPD-based model yield similar results in terms of the facility location and the demand allocation. It is noted that the P2D-based model showed better execution time than the SPD-based model. Considering logistic costs, facility location, and response time, the P2D-based model was an appropriate choice for the urban facility location problem considering geographical obstacles.
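As an illustration of the shortest-path idea behind the SPD function (a sketch, not the authors' implementation; the toy network and node names are hypothetical), a network distance can be computed with Dijkstra's algorithm over a graph whose edges follow traversable routes around obstacles:

```python
import heapq

def spd(graph, src, dst):
    """Shortest-path distance between two points on a network given as an
    adjacency dict {node: [(neighbor, edge_length), ...]}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        if u == dst:
            return d
        done.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")  # dst unreachable, e.g. fully blocked by an obstacle

# Toy network in which an obstacle forces a detour between A and D
network = {
    "A": [("B", 1.0)],
    "B": [("A", 1.0), ("C", 2.0)],
    "C": [("B", 2.0), ("D", 1.0)],
    "D": [("C", 1.0)],
}
print(spd(network, "A", "D"))  # 4.0, larger than the obstacle-blind straight-line distance
```

A location-allocation model would then assign each demand point to the facility minimizing this network distance rather than the Euclidean one.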
Latour, Ewa; Latour, Marek; Arlet, Jarosław; Adach, Zdzisław; Bohatyrewicz, Andrzej
2011-07-01
Analysis of pedobarographical data requires geometric identification of specific anatomical areas extracted from recorded plantar pressures. This approach has led to ambiguity in measurements that may underlie the inconsistency of conclusions reported in pedobarographical studies. The goal of this study was to design a new analysis method less susceptible to the projection accuracy of anthropometric points and distance estimation, based on rarely used spatio-temporal indices. Six pedobarographic records per person (three per foot) from a group of 60 children aged 11-12 years were obtained and analyzed. The basis of the analysis was a mutual relationship between two spatio-temporal indices created by excursion of the peak pressure point and the center-of-pressure point on the dynamic pedobarogram. Classification of weight-shift patterns was elaborated and performed, and their frequencies of occurrence were assessed. This new method allows an assessment of body weight shift through the plantar pressure surface based on distribution analysis of spatio-temporal indices not affected by the shape of this surface. Analysis of the distribution of the created index confirmed the existence of typical ways of weight shifting through the plantar surface of the foot during gait, as well as large variability of the intrasubject occurrence. This method may serve as the basis for interpretation of foot functional features and may extend the clinical usefulness of pedobarography. Copyright © 2011 Elsevier B.V. All rights reserved.
Advanced Optimal Extraction for the Spitzer/IRS
NASA Astrophysics Data System (ADS)
Lebouteiller, V.; Bernard-Salas, J.; Sloan, G. C.; Barry, D. J.
2010-02-01
We present new advances in the spectral extraction of pointlike sources adapted to the Infrared Spectrograph (IRS) on board the Spitzer Space Telescope. For the first time, we created a supersampled point-spread function of the low-resolution modules. We describe how to use the point-spread function to perform optimal extraction of a single source and of multiple sources within the slit. We also examine the case of the optimal extraction of one or several sources with a complex background. The new algorithms are gathered in a plug-in called AdOpt which is part of the SMART data analysis software.
Partitioning of functional gene expression data using principal points.
Kim, Jaehee; Kim, Haseong
2017-10-12
DNA microarrays offer motivation and hope for the simultaneous study of variations in multiple genes. Gene expression is a temporal process that allows variations in expression levels with a characterized gene function over a period of time. Temporal gene expression curves can be treated as functional data since they are considered as independent realizations of a stochastic process. This process requires appropriate models to identify patterns of gene functions. The partitioning of the functional data can find homogeneous subgroups of entities for the massive genes within the inherent biological networks. Therefore, it can be a useful technique for the analysis of time-course gene expression data. We propose a new self-consistent partitioning method of functional coefficients for individual expression profiles based on the orthonormal basis system. A principal-points-based functional partitioning method is proposed for time-course gene expression data. The method explores the relationship between genes using Legendre coefficients as principal points to extract the features of gene functions. Our proposed method provides high connectivity after clustering for simulated data and finds significant subsets of genes with increased connectivity. Our approach has the comparative advantages that fewer coefficients are used from the functional data and that the principal points are self-consistent for partitioning. As real data applications, we are able to find partitioned genes through the gene expressions found in budding yeast data and Escherichia coli data. The proposed method benefits from the use of principal points, dimension reduction, and the choice of orthogonal basis system, and provides appropriately connected genes in the resulting subsets. We illustrate our method by applying it to sets of cell-cycle-regulated time-course yeast genes and E. coli genes. 
The proposed method is able to identify highly connected genes and to explore the complex dynamics of biological systems in functional genomics.
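A minimal sketch of the two-stage idea (Legendre coefficients as curve features, then a partitioner over the coefficient space) on synthetic profiles; the data, the degree, and the simple 2-means partitioner are illustrative stand-ins, not the paper's principal-points algorithm:

```python
import numpy as np

def legendre_features(times, curves, degree=3):
    """Project each expression profile (row of `curves`) onto Legendre
    polynomials; the coefficient vectors compactly summarize curve shape."""
    t = 2 * (times - times.min()) / (times.max() - times.min()) - 1  # rescale to [-1, 1]
    # legfit fits each column of a 2-D array, so pass the profiles transposed
    return np.polynomial.legendre.legfit(t, curves.T, degree).T

def two_means(X, iters=20):
    """Tiny 2-means partitioner, initialized at the two most distant points."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    centers = X[[i, j]].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(2):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

# Synthetic time-course profiles: five rising and five falling "genes"
times = np.linspace(0.0, 10.0, 20)
curves = np.vstack([times / 10 + 0.01 * i for i in range(5)]
                   + [1 - times / 10 + 0.01 * i for i in range(5)])
coef = legendre_features(times, curves)
labels = two_means(coef)
print(labels)  # the two shape classes fall into separate partitions
```

Working in the low-dimensional coefficient space, rather than on the raw time points, is the dimension-reduction advantage the abstract refers to.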
Density estimation using the trapping web design: A geometric analysis
Link, W.A.; Barker, R.J.
1994-01-01
Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.
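To fix ideas, the simplest geometric estimator one might write down (illustrative only; the paper's analysis is more refined, particularly in its treatment of the outer rings) divides total first captures by the area enclosed by the outermost ring of traps:

```python
import math

def web_density(outer_radius, captures_per_ring):
    """Naive geometric density estimate for a trapping web: total first
    captures divided by the area of the web (circle of the outermost ring)."""
    return sum(captures_per_ring) / (math.pi * outer_radius ** 2)

radii = [5.0, 10.0, 15.0, 20.0]   # ring radii in metres (hypothetical web)
caught = [12, 9, 7, 4]            # first captures per ring (hypothetical)
density = web_density(radii[-1], caught)
print(f"{density:.4f} animals per square metre")
```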
Detection of Subtle Cognitive Changes after mTBI Using a Novel Tablet-Based Task.
Fischer, Tara D; Red, Stuart D; Chuang, Alice Z; Jones, Elizabeth B; McCarthy, James J; Patel, Saumil S; Sereno, Anne B
2016-07-01
This study examined the potential for novel tablet-based tasks, modeled after eye tracking techniques, to detect subtle sensorimotor and cognitive deficits after mild traumatic brain injury (mTBI). Specifically, we examined whether performance on these tablet-based tasks (Pro-point and Anti-point) was able to correctly categorize concussed versus non-concussed participants, compared with performance on other standardized tests for concussion. Patients admitted to the emergency department with mTBI were tested on the Pro-point and Anti-point tasks, a current standard cognitive screening test (i.e., the Standard Assessment of Concussion [SAC]), and another eye movement-based tablet test, the King-Devick(®) (KD). Within hours after injury, mTBI patients showed significant slowing in response times, compared with both orthopedic and age-matched control groups, in the Pro-point task, demonstrating deficits in sensorimotor function. Mild TBI patients also showed significant slowing, compared with both control groups, on the Anti-point task, even when controlling for sensorimotor slowing, indicating deficits in cognitive function. Performance on the SAC test revealed similar deficits of cognitive function in the mTBI group, compared with the age-matched control group; however, the KD test showed no evidence of cognitive slowing in mTBI patients, compared with either control group. Further, measuring the sensitivity and specificity of these tasks to accurately predict mTBI with receiver operating characteristic analysis indicated that the Anti-point and Pro-point tasks reached excellent levels of accuracy and fared better than current standardized tools for assessment of concussion. Our findings suggest that these rapid tablet-based tasks are able to reliably detect and measure functional impairment in cognitive and sensorimotor control within hours after mTBI. 
These tasks may provide a more sensitive diagnostic measure for functional deficits that could prove key to earlier detection of concussion, evaluation of interventions, or even prediction of persistent symptoms.
Shafer, Steven L; Lemmer, Bjoern; Boselli, Emmanuel; Boiste, Fabienne; Bouvet, Lionel; Allaouchiche, Bernard; Chassard, Dominique
2010-10-01
The duration of analgesia from epidural administration of local anesthetics to parturients has been shown to follow a rhythmic pattern according to the time of drug administration. We studied whether there was a similar pattern after intrathecal administration of bupivacaine in parturients. In the course of the analysis, we came to believe that some data points coincident with provider shift changes were influenced by nonbiological, health care system factors, thus incorrectly suggesting a periodic signal in duration of labor analgesia. We developed graphical and analytical tools to help assess the influence of individual points on the chronobiological analysis. Women with singleton term pregnancies in vertex presentation, cervical dilation 3 to 5 cm, pain score >50 mm (of 100 mm), and requesting labor analgesia were enrolled in this study. Patients received 2.5 mg of intrathecal bupivacaine in 2 mL using a combined spinal-epidural technique. Analgesia duration was the time from intrathecal injection until the first request for additional analgesia. The duration of analgesia was analyzed by visual inspection of the data, application of smoothing functions (Supersmoother; LOWESS and LOESS [locally weighted scatterplot smoothing functions]), analysis of variance, Cosinor (Chronos-Fit), Excel, and NONMEM (nonlinear mixed effect modeling). Confidence intervals (CIs) were determined by bootstrap analysis (1000 replications with replacement) using PLT Tools. Eighty-two women were included in the study. Examination of the raw data using 3 smoothing functions revealed a bimodal pattern, with a peak at approximately 0630 and a subsequent peak in the afternoon or evening, depending on the smoother. Analysis of variance did not identify any statistically significant difference between the duration of analgesia when intrathecal injection was given from midnight to 0600 compared with the duration of analgesia after intrathecal injection at other times. 
Chronos-Fit, Excel, and NONMEM produced identical results, with a mean duration of analgesia of 38.4 minutes (95% CI: 35.4-41.6 minutes), an 8-hour periodic waveform with an amplitude of 5.8 minutes (95% CI: 2.1-10.7 minutes), and a phase offset of 6.5 hours (95% CI: 5.4-8.0 hours) relative to midnight. The 8-hour periodic model did not reach statistical significance in 40% of bootstrap analyses, implying that statistical significance of the 8-hour periodic model was dependent on a subset of the data. Two data points before the change of shift at 0700 contributed most strongly to the statistical significance of the periodic waveform. Without these data points, there was no evidence of an 8-hour periodic waveform for intrathecal bupivacaine analgesia. Chronobiology includes the influence of external daily rhythms in the environment (e.g., nursing shifts) as well as human biological rhythms. We were able to distinguish the influence of an external rhythm by combining several novel analyses: (1) graphical presentation superimposing the raw data, external rhythms (e.g., nursing and anesthesia provider shifts), and smoothing functions; (2) graphical display of the contribution of each data point to the statistical significance; and (3) bootstrap analysis to identify whether the statistical significance was highly dependent on a data subset. These approaches suggested that 2 data points were likely artifacts of the change in nursing and anesthesia shifts. When these points were removed, there was no suggestion of biological rhythm in the duration of intrathecal bupivacaine analgesia.
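The bootstrap-plus-cosinor idea can be sketched as follows (synthetic data stand in for the study's durations; all values are hypothetical, and the fit is an ordinary linear least-squares cosinor rather than the authors' NONMEM model):

```python
import numpy as np

rng = np.random.default_rng(1)

def cosinor_amplitude(t_hours, y, period=8.0):
    """Least-squares cosinor fit y ~ m + a*cos(wt) + b*sin(wt);
    the rhythm amplitude is sqrt(a^2 + b^2)."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours), np.cos(w * t_hours), np.sin(w * t_hours)])
    _, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.hypot(a, b))

# Synthetic analgesia durations with no true rhythm, plus two high values
# near a 07:00 shift change (mimicking the suspect data points)
t = rng.uniform(0.0, 24.0, 80)
y = rng.normal(38.0, 6.0, 80)
t = np.append(t, [6.5, 6.8])
y = np.append(y, [70.0, 72.0])

# Bootstrap (resampling with replacement): how stable is the amplitude?
amps = np.array([cosinor_amplitude(t[idx], y[idx])
                 for idx in (rng.integers(0, len(t), len(t)) for _ in range(1000))])
print(f"bootstrap amplitude 2.5-97.5 percentiles: "
      f"{np.percentile(amps, 2.5):.1f} to {np.percentile(amps, 97.5):.1f} min")
```

If replicates that omit the two shift-change points routinely produce near-zero amplitudes, the apparent rhythm depends on a small data subset, which is the diagnostic the authors describe.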
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobulnicky, Henry A.; Alexander, Michael J.; Babler, Brian L.
We characterize the completeness of point source lists from Spitzer Space Telescope surveys in the four Infrared Array Camera (IRAC) bandpasses, emphasizing the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE) programs (GLIMPSE I, II, 3D, 360; Deep GLIMPSE) and their resulting point source Catalogs and Archives. The analysis separately addresses effects of incompleteness resulting from high diffuse background emission and incompleteness resulting from point source confusion (i.e., crowding). An artificial star addition and extraction analysis demonstrates that completeness is strongly dependent on local background brightness and structure, with high-surface-brightness regions suffering up to five magnitudes of reduced sensitivity to point sources. This effect is most pronounced at the IRAC 5.8 and 8.0 μm bands where UV-excited polycyclic aromatic hydrocarbon emission produces bright, complex structures (photodissociation regions). With regard to diffuse background effects, we provide the completeness as a function of stellar magnitude and diffuse background level in graphical and tabular formats. These data are suitable for estimating completeness in the low-source-density limit in any of the four IRAC bands in GLIMPSE Catalogs and Archives and some other Spitzer IRAC programs that employ similar observational strategies and are processed by the GLIMPSE pipeline. By performing the same analysis on smoothed images we show that the point source incompleteness is primarily a consequence of structure in the diffuse background emission rather than photon noise. With regard to source confusion in the high-source-density regions of the Galactic Plane, we provide figures illustrating the 90% completeness levels as a function of point source density at each band. 
We caution that completeness of the GLIMPSE 360/Deep GLIMPSE Catalogs is suppressed relative to the corresponding Archives as a consequence of rejecting stars that lie in the point-spread function wings of saturated sources. This effect is minor in regions of low saturated star density, such as toward the Outer Galaxy; this effect is significant along sightlines having a high density of saturated sources, especially for Deep GLIMPSE and other programs observing closer to the Galactic center using 12 s or longer exposure times.
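The core of an artificial-star test can be illustrated in a few lines (a deliberately stripped-down sketch: real pipelines inject sources into images and re-run full extraction, whereas here the "measurement" is just flux plus Gaussian background noise, and the threshold and flux values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

def completeness(flux, background_sigma, n_trials=500, threshold=5.0):
    """Fraction of injected artificial sources recovered above a
    threshold-sigma detection limit in Gaussian background noise."""
    measured = flux + rng.normal(0.0, background_sigma, n_trials)
    return float(np.mean(measured > threshold * background_sigma))

# Brighter, more structured backgrounds (larger sigma) push the completeness
# limit to higher fluxes, the effect the survey quantifies per band
for sigma in (1.0, 4.0):
    fracs = [completeness(f, sigma) for f in (2.0, 10.0, 40.0)]
    print(f"sigma={sigma}: completeness at flux 2/10/40 =", fracs)
```

Tabulating such recovery fractions against source magnitude and background level is, in miniature, what the GLIMPSE completeness tables provide.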
NASA Technical Reports Server (NTRS)
Mclennan, G. A.
1986-01-01
This report describes, and is a User's Manual for, a computer code (ANL/RBC) which calculates cycle performance for Rankine bottoming cycles extracting heat from a specified source gas stream. The code calculates cycle power and efficiency and the sizes for the heat exchangers, using tabular input of the properties of the cycle working fluid. An option is provided to calculate the costs of system components from user defined input cost functions. These cost functions may be defined in equation form or by numerical tabular data. A variety of functional forms have been included for these functions and they may be combined to create very general cost functions. An optional calculation mode can be used to determine the off-design performance of a system when operated away from the design-point, using the heat exchanger areas calculated for the design-point.
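The tabular cost-function option can be pictured as simple interpolation over user-supplied data (a sketch only; the table values are hypothetical and ANL/RBC's actual functional forms are documented in the report):

```python
import numpy as np

# Hypothetical tabular cost data for a heat exchanger: area (m^2) vs cost (k$).
# A numerical tabular cost function can be evaluated at any design-point
# area by piecewise-linear interpolation between the table entries.
areas = np.array([10.0, 50.0, 100.0, 200.0])
costs = np.array([15.0, 48.0, 80.0, 130.0])

def component_cost(area):
    """Evaluate the tabular cost function at a given heat exchanger area."""
    return float(np.interp(area, areas, costs))

print(component_cost(75.0))  # midway between the 50 and 100 m^2 entries
```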
Asymptotic behaviour of two-point functions in multi-species models
NASA Astrophysics Data System (ADS)
Kozlowski, Karol K.; Ragoucy, Eric
2016-05-01
We extract the long-distance asymptotic behaviour of two-point correlation functions in massless quantum integrable models containing multi-species excitations. For such a purpose, we extend to these models the method of a large-distance regime re-summation of the form factor expansion of correlation functions. The key feature of our analysis is a technical hypothesis on the large-volume behaviour of the form factors of local operators in such models. We check the validity of this hypothesis on the example of the SU(3)-invariant XXX magnet by means of the determinant representations for the form factors of local operators in this model. Our approach confirms the structure of the critical exponents obtained previously for numerous models solvable by the nested Bethe Ansatz.
Spectator motives and points of attachment: an investigation on professional basketball.
Gencer, R Timucin; Kiremitci, Olcay; Boyacioglu, Hayal
2011-12-01
Spectator attendance to professional basketball in Turkey is significantly less than desired. Keeping in mind how important spectators are for team sports, understanding factors that affect game attendance will offer essential clues in terms of increasing spectator attendance. The main purpose of this study was to determine the relationships between basketball spectators' motives and points of attachment. With consideration to this purpose, the present study has tested the validity and reliability of the Motivation Scale for Sport Consumption and the Points of Attachment Index for Turkish basketball spectators. 197 basketball spectators participated in the study. Confirmatory factor analysis results demonstrated that the original models of the measurement tools employed for the study showed an acceptable degree of fit with the data. The internal consistency coefficients of the scales were found to be between 0.59 and 0.80 for the Motivation Scale for Sport Consumption and between 0.53 and 0.88 for the Points of Attachment Index. The canonical correlation analysis only returned a single significant function. The motives aesthetics and escape stood out in terms of the significant function, while the sport type (basketball in this study) stood out in the sense of attachment. Relationships identified between basketball spectators' motives and points of attachment could help sports managers and marketing experts to develop strategies focusing on increasing spectator attendance to their teams' games.
Dynamical analysis of continuous higher-order hopfield networks for combinatorial optimization.
Atencia, Miguel; Joya, Gonzalo; Sandoval, Francisco
2005-08-01
In this letter, the ability of higher-order Hopfield networks to solve combinatorial optimization problems is assessed by means of a rigorous analysis of their properties. The stability of the continuous network is almost completely clarified: (1) hyperbolic interior equilibria, which are unfeasible, are unstable; (2) the state cannot escape from the unitary hypercube; and (3) a Lyapunov function exists. Numerical methods used to implement the continuous equation on a computer should be designed with the aim of preserving these favorable properties. The case of nonhyperbolic fixed points, which occur when the Hessian of the target function is the null matrix, requires further study. We prove that these nonhyperbolic interior fixed points are unstable in networks with three neurons and order two. The conjecture that interior equilibria are unstable in the general case is left open.
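The Lyapunov-function property at the heart of this analysis can be demonstrated on a simple discrete second-order Hopfield network (a sketch under stated assumptions: the letter concerns continuous higher-order networks, whereas this toy uses discrete asynchronous updates, for which symmetric zero-diagonal weights guarantee the energy never increases):

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(W, s):
    """Hopfield energy E = -1/2 s^T W s for symmetric W with zero diagonal."""
    return -0.5 * s @ W @ s

n = 12
A = rng.normal(size=(n, n))
W = (A + A.T) / 2              # symmetric weights
np.fill_diagonal(W, 0.0)

s = rng.choice([-1.0, 1.0], size=n)
energies = [energy(W, s)]
for _ in range(200):
    i = rng.integers(n)        # asynchronous single-unit update
    s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    energies.append(energy(W, s))

# Each update cannot raise E: the Lyapunov property underlying convergence
# arguments for Hopfield-style combinatorial optimization.
print("monotone non-increasing:",
      all(b <= a + 1e-9 for a, b in zip(energies, energies[1:])))
```

Numerical integrators for the continuous equation should be designed to preserve exactly this kind of monotonicity, as the letter argues.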
Analysis and experiments for composite laminates with holes and subjected to 4-point bending
NASA Technical Reports Server (NTRS)
Shuart, M. J.; Prasad, C. B.
1990-01-01
Analytical and experimental results are presented for composite laminates with a hole and subjected to four-point bending. A finite-plate analysis is used to predict moment and strain distributions for six-layer quasi-isotropic laminates and transverse-ply laminates. Experimental data are compared with the analytical results. Experimental and analytical strain results show good agreement for the quasi-isotropic laminates. Failure of the two types of composite laminates is described, and failure strain results are presented as a function of normalized hole diameter. The failure results suggest that the initial failure mechanisms for laminates subjected to four-point bending are similar to the initial failure mechanisms for corresponding laminates subjected to uniaxial in-plane loadings.
NASA Astrophysics Data System (ADS)
Paliathanasis, A.; Tsamparlis, M.; Mustafa, M. T.
2018-02-01
A complete classification of the Lie and Noether point symmetries for the Klein-Gordon and the wave equation in pp-wave spacetimes is obtained. The classification analysis is carried out by reducing the problem of the determination of the point symmetries to the problem of existence of conformal killing vectors on the pp-wave spacetimes. Employing the existing results for the isometry classes of the pp-wave spacetimes, the functional form of the potential is determined for which the Klein-Gordon equation admits point symmetries and Noetherian conservation law. Finally the Lie and Noether point symmetries of the wave equation are derived.
Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej
2015-01-01
The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined flexion angles of knee and hip joints using a computer program. In the case of the real-time observation method, the judges gave a total of 5.8 error points with an arithmetic mean of 0.16 points for the flexion of the knee joints. In the frame-by-frame video analysis method, the total amounted to 8.6 error points and the mean value amounted to 0.24 error points. For the excessive flexion of hip joints, the sum of the error values was 2.2 error points and the arithmetic mean was 0.06 error points during real-time observation. The sum obtained using the frame-by-frame analysis method equaled 10.8 and the mean equaled 0.30 error points. Error values obtained through the frame-by-frame video analysis of movement technique were higher than those obtained through the real-time observation method. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Both the real-time observation method and frame-by-frame video analysis performed without determining exact joint angles were found to be insufficient tools for improving the quality of judging.
Riley, Richard D; Elia, Eleni G; Malin, Gemma; Hemming, Karla; Price, Malcolm P
2015-07-30
A prognostic factor is any measure that is associated with the risk of future health outcomes in those with existing disease. Often, the prognostic ability of a factor is evaluated in multiple studies. However, meta-analysis is difficult because primary studies often use different methods of measurement and/or different cut-points to dichotomise continuous factors into 'high' and 'low' groups; selective reporting is also common. We illustrate how multivariate random effects meta-analysis models can accommodate multiple prognostic effect estimates from the same study, relating to multiple cut-points and/or methods of measurement. The models account for within-study and between-study correlations, which utilises more information and reduces the impact of unreported cut-points and/or measurement methods in some studies. The applicability of the approach is improved with individual participant data and by assuming a functional relationship between prognostic effect and cut-point to reduce the number of unknown parameters. The models provide important inferential results for each cut-point and method of measurement, including the summary prognostic effect, the between-study variance and a 95% prediction interval for the prognostic effect in new populations. Two applications are presented. The first reveals that, in a multivariate meta-analysis using published results, the Apgar score is prognostic of neonatal mortality but effect sizes are smaller at most cut-points than previously thought. In the second, a multivariate meta-analysis of two methods of measurement provides weak evidence that microvessel density is prognostic of mortality in lung cancer, even when individual participant data are available so that a continuous prognostic trend is examined (rather than cut-points). © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
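The random-effects backbone of such models can be sketched in univariate form (illustrative only: the paper's multivariate model additionally estimates within-study and between-study correlations across cut-points, which this DerSimonian-Laird sketch omits; the study effects below are hypothetical):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Univariate random-effects pooling with the DerSimonian-Laird
    between-study variance (tau^2) estimator."""
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, tau2, se

# Hypothetical log hazard ratios for one cut-point across five studies
effects = np.array([0.30, 0.10, 0.45, 0.20, 0.05])
variances = np.array([0.02, 0.05, 0.03, 0.04, 0.06])
pooled, tau2, se = dersimonian_laird(effects, variances)
print(f"pooled {pooled:.3f}, tau^2 {tau2:.3f}, "
      f"95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f}")
```

The multivariate extension stacks one such effect per cut-point per study and replaces the scalar weights with covariance matrices, which is what lets reported cut-points "borrow strength" for unreported ones.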
ERIC Educational Resources Information Center
Binder, Martin; Coad, Alex
2011-01-01
There is an ambiguity in Amartya Sen's capability approach as to what constitutes an individual's resources, conversion factors and valuable functionings. What we here call the "circularity problem" points to the fact that all three concepts seem to be mutually endogenous and interdependent. To econometrically account for this…
Validation of accelerometer cut points in toddlers with and without cerebral palsy.
Oftedal, Stina; Bell, Kristie L; Davies, Peter S W; Ware, Robert S; Boyd, Roslyn N
2014-09-01
The purpose of this study was to validate uni- and triaxial ActiGraph cut points for sedentary time in toddlers with cerebral palsy (CP) and typically developing children (TDC). Children (n = 103, 61 boys, mean age = 2 yr, SD = 6 months, range = 1 yr 6 months-3 yr) were divided into calibration (n = 65) and validation (n = 38) samples with separate analyses for TDC (n = 28) and ambulant (Gross Motor Function Classification System I-III, n = 51) and nonambulant (Gross Motor Function Classification System IV-V, n = 25) children with CP. An ActiGraph was worn during a videotaped assessment. Behavior was coded as sedentary or nonsedentary. Receiver operating characteristic-area under the curve analysis determined the classification accuracy of accelerometer data. Predictive validity was determined using the Bland-Altman analysis. Classification accuracy for uniaxial data was fair for the ambulatory CP and TDC group but poor for the nonambulatory CP group. Triaxial data showed good classification accuracy for all groups. The uniaxial ambulatory CP and TDC cut points significantly overestimated sedentary time (bias = -10.5%, 95% limits of agreement [LoA] = -30.2% to 9.1%; bias = -17.3%, 95% LoA = -44.3% to 8.3%). The triaxial ambulatory and nonambulatory CP and TDC cut points provided accurate group-level measures of sedentary time (bias = -1.5%, 95% LoA = -20% to 16.8%; bias = 2.1%, 95% LoA = -17.3% to 21.5%; bias = -5.1%, 95% LoA = -27.5% to 16.1%). Triaxial accelerometers provide useful group-level measures of sedentary time in children with CP across the spectrum of functional abilities and TDC. Uniaxial cut points are not recommended.
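The predictive validity reported above rests on Bland-Altman bias and 95% limits of agreement. A minimal sketch of that calculation (a hypothetical helper, not the study's code; the study expresses these quantities as percentages of sedentary time):

```python
import math

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement methods.

    Returns (bias, lower LoA, upper LoA); LoA = bias +/- 1.96 * SD of differences.
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```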
The impact of bariatric surgery on pulmonary function: a meta-analysis.
Alsumali, Adnan; Al-Hawag, Ali; Bairdain, Sigrid; Eguale, Tewodros
2018-02-01
Morbid obesity may affect several body systems and cause ill effects to the cardiovascular, hepatobiliary, endocrine, and mental health systems. However, the impact on the pulmonary system and pulmonary function has been debated in the literature. We performed a systematic review and meta-analysis of studies that evaluated the impact of bariatric surgery on pulmonary function. The PubMed, Cochrane, and Embase databases were searched through September 30, 2016 for studies evaluating pulmonary function before and after bariatric surgery. Pooled effect estimates were calculated using a random-effects model. Twenty-three studies with 1013 participants were included in the final meta-analysis. Only 8 studies had intervention and control groups with different time points, but 15 studies had matched groups with different time points. Overall, pulmonary function score was significantly improved after bariatric surgery, with a pooled standardized mean difference of .59 (95% confidence interval: .46-.73). Heterogeneity was assessed using Cochran's Q test (I² = 46%; P heterogeneity = .10). Subgroup analysis and univariate meta-regression based on study quality, age, presurgery body mass index, postsurgery body mass index, study design, female patients only, study continent, asthmatic patients in the study, and the type of bariatric surgery confirmed no statistically significant difference among these groups (P value > .05 for all). A multivariate meta-regression model, which adjusted simultaneously for these same covariates, did not change the results (P value > .05 overall). Publication bias was assessed visually and by Begg's rank correlation test, which indicated the absence of publication bias (a symmetric funnel shape was observed and P = .34). This meta-analysis shows that bariatric surgery significantly improved the overall pulmonary function score in morbid obesity.
Copyright © 2018 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.
Su, Ho-Ming; Tsai, Wei-Chung; Lin, Tsung-Hsien; Hsu, Po-Chao; Lee, Wen-Hsien; Lin, Ming-Yen; Chen, Szu-Chia; Lee, Chee-Siong; Voon, Wen-Chol; Lai, Wen-Ter; Sheu, Sheng-Hsiung
2012-01-01
The P wave parameters measured by 12-lead electrocardiogram (ECG) are commonly used as noninvasive tools to assess for left atrial enlargement. There are limited studies to evaluate whether P wave parameters are independently associated with decline in renal function. Accordingly, the aim of this study is to assess whether P wave parameters are independently associated with progression to a renal end point of ≥25% decline in estimated glomerular filtration rate (eGFR). This longitudinal study included 166 patients. The renal end point was defined as ≥25% decline in eGFR. We measured two ECG P wave parameters corrected by heart rate, i.e., corrected P wave dispersion (PWdisperC) and corrected P wave maximum duration (PWdurMaxC). Heart function and structure were measured from echocardiography. Clinical data, P wave parameters, and echocardiographic measurements were compared and analyzed. Forty-three patients (25.9%) reached the renal end point. Kaplan-Meier curves for renal end point-free survival showed PWdisperC > median (63.0 ms) (log-rank P = 0.004) and PWdurMaxC > median (117.9 ms) (log-rank P < 0.001) were associated with progression to the renal end point. Multivariate forward Cox-regression analysis identified that increased PWdisperC (hazard ratio [HR], 1.024; P = 0.001) and PWdurMaxC (HR, 1.029; P = 0.001) were independently associated with progression to the renal end point. Our results demonstrate that increased PWdisperC and PWdurMaxC were independently associated with progression to the renal end point. Screening patients by means of PWdisperC and PWdurMaxC on 12-lead ECG may help identify a high-risk group of rapid renal function decline.
Moho map of South America from receiver functions and surface waves
NASA Astrophysics Data System (ADS)
Lloyd, Simon; van der Lee, Suzan; França, George Sand; Assumpção, Marcelo; Feng, Mei
2010-11-01
We estimate crustal structure and thickness of South America north of roughly 40°S. To this end, we analyzed receiver functions from 20 relatively new temporary broadband seismic stations deployed across eastern Brazil. In the analysis we include teleseismic and some regional events, particularly for stations that recorded few suitable earthquakes. We first estimate crustal thickness and average Poisson's ratio using two different stacking methods. We then combine the new crustal constraints with results from previous receiver function studies. To interpolate the crustal thickness between the station locations, we jointly invert these Moho point constraints, Rayleigh wave group velocities, and regional S and Rayleigh waveforms for a continuous map of Moho depth. The new tomographic Moho map suggests that Moho depth and Moho relief vary slightly with age within the Precambrian crust. Whether or not a positive correlation between crustal thickness and geologic age is derived from the pre-interpolation point constraints depends strongly on the selected subset of receiver functions. This implies that using only pre-interpolation point constraints (receiver functions) inadequately samples the spatial variation in geologic age. The new Moho map also reveals an anomalously deep Moho beneath the oldest core of the Amazonian Craton.
The use of copula functions for predictive analysis of correlations between extreme storm tides
NASA Astrophysics Data System (ADS)
Domino, Krzysztof; Błachowicz, Tomasz; Ciupak, Maurycy
2014-11-01
In this paper we present a method for the quantitative description of weakly predictable, extreme hydrological events at an inland sea. We investigated correlations between variations at individual measuring points by employing combined statistical methods. As the main tool for this analysis we used a two-dimensional copula function sensitive to correlated extreme effects. Additionally, a newly proposed methodology, based on Detrended Fluctuation Analysis (DFA) and Anomalous Diffusion (AD), was used for the prediction of negative and positive auto-correlations and the associated optimum choice of copula functions. As a practical example we analysed maximum storm tide data recorded at five spatially separated locations on the Baltic Sea. For the analysis we used Gumbel, Clayton, and Frank copula functions and introduced the reversed Clayton copula. The application of our research model is associated with modelling the risk of high storm tides and possible storm flooding.
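The copula families named above have simple closed-form CDFs. A minimal sketch follows; note that implementing the "reversed" Clayton as the survival copula is our assumption about the authors' construction, not something stated in the abstract:

```python
import math

def clayton(u, v, theta):
    """Clayton copula CDF; exhibits lower-tail dependence for theta > 0."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def gumbel(u, v, theta):
    """Gumbel copula CDF; exhibits upper-tail dependence for theta >= 1."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def reversed_clayton(u, v, theta):
    """Survival ('reversed') Clayton: moves Clayton's tail dependence
    from the lower to the upper tail (assumed construction)."""
    return u + v - 1.0 + clayton(1.0 - u, 1.0 - v, theta)
```

Each function satisfies the copula boundary condition C(u, 1) = u on the open interior, which is a quick sanity check for any implementation.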
Performance analysis of a dual-tree algorithm for computing spatial distance histograms
Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni
2011-01-01
Many scientific and engineering fields produce large volumes of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges to database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytic, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, and thus require less time when compared to the brute-force approach where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
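For reference, the brute-force baseline that the dual-tree algorithm improves on can be sketched in a few lines (the dual-tree method itself, which processes distances in batches per node pair, is substantially more involved):

```python
import math

def sdh_brute_force(points, bucket_width, num_buckets):
    """O(n^2) spatial distance histogram: count every pairwise distance
    into fixed-width buckets. Distances beyond the last bucket edge are
    clamped into the final bucket for simplicity."""
    hist = [0] * num_buckets
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            b = min(int(d / bucket_width), num_buckets - 1)
            hist[b] += 1
    return hist
```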
Tharmalingam, Tharmala; Adamczyk, Barbara; Doherty, Margaret A; Royle, Louise; Rudd, Pauline M
2013-02-01
Many post-translational modifications, including glycosylation, are pivotal for the structural integrity, location and functional activity of glycoproteins. Sub-populations of proteins that are relocated or functionally changed by such modifications can change resting proteins into active ones, mediating specific effector functions, as in the case of monoclonal antibodies. To ensure safe and efficacious drugs it is essential to employ appropriate robust, quantitative analytical strategies that can (i) perform detailed glycan structural analysis, (ii) characterise specific subsets of glycans to assess known critical features of therapeutic activities (iii) rapidly profile glycan pools for at-line monitoring or high level batch to batch screening. Here we focus on these aspects of glycan analysis, showing how state-of-the-art technologies are required at all stages during the production of recombinant glycotherapeutics. These data can provide insights into processing pathways and suggest markers for intervention at critical control points in bioprocessing and also critical decision points in disease and drug monitoring in patients. Importantly, these tools are now enabling the first glycome/genome studies in large populations, allowing the integration of glycomics into other 'omics platforms in a systems biology context.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wixson, J. R.
Root cause analysis (RCA) is an important methodology that can be integrated with the VE Job Plan to generate superior results from the VE Methodology. The point at which RCA is most appropriate is after the function analysis and FAST Model have been built and functions for improvement have been chosen. These functions are then subjected to a simple but rigorous RCA to get to the root cause of their deficiencies, whether it is high cost/poor value, poor quality, or poor reliability. Once the most probable causes for these problems have been arrived at, better solutions for improvement can be developed in the creativity phase because the team better understands the problems associated with these functions.
NASA Astrophysics Data System (ADS)
Erfanifard, Y.; Rezayan, F.
2014-10-01
Vegetation heterogeneity biases second-order summary statistics, e.g., Ripley's K-function, applied for spatial pattern analysis in ecology. Second-order investigation based on Ripley's K-function and related statistics (i.e., the L- and pair correlation function g) is widely used in ecology to develop hypotheses on underlying processes by characterizing spatial patterns of vegetation. The aim of this study was to demonstrate the effects of underlying heterogeneity of wild pistachio (Pistacia atlantica Desf.) trees on the second-order summary statistics of point pattern analysis in a part of the Zagros woodlands, Iran. The spatial distribution of 431 wild pistachio trees was accurately mapped in a 40 ha stand in the Wild Pistachio & Almond Research Site, Fars province, Iran. Three commonly used second-order summary statistics (i.e., the K-, L-, and g-functions) were applied to analyse their spatial pattern. The two-sample Kolmogorov-Smirnov goodness-of-fit test showed that the observed pattern significantly followed an inhomogeneous Poisson process null model in the study region. The results also showed that the heterogeneous pattern of wild pistachio trees biased the homogeneous form of the K-, L-, and g-functions, demonstrating a stronger aggregation of the trees at scales of 0-50 m than actually existed and an apparent aggregation at scales of 150-200 m where the trees were in fact regularly distributed. Consequently, we showed that heterogeneity of point patterns may bias the results of homogeneous second-order summary statistics, and we suggest applying inhomogeneous summary statistics with appropriate null models for spatial pattern analysis of heterogeneous vegetation.
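The homogeneous K-function at the heart of this comparison has a very simple naive estimator. A minimal sketch, without the edge corrections or the inhomogeneous weighting that a real analysis (as advocated above) would require:

```python
import math

def ripley_k(points, r, area):
    """Naive homogeneous Ripley's K estimate at distance r (no edge correction).

    Under complete spatial randomness, K(r) is approximately pi * r^2;
    larger values suggest aggregation at scale r.
    """
    n = len(points)
    lam = n / area  # estimated intensity (points per unit area)
    count = 0
    for i in range(n):
        for j in range(n):
            if i != j and math.dist(points[i], points[j]) <= r:
                count += 1
    return count / (lam * n)
```

The inhomogeneous version replaces the constant intensity `lam` with a spatially varying estimate, which is exactly what removes the bias described in the abstract.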
Object recognition and localization from 3D point clouds by maximum-likelihood estimation
NASA Astrophysics Data System (ADS)
Dantanarayana, Harshana G.; Huntley, Jonathan M.
2017-08-01
We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike 'interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.
Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis
NASA Astrophysics Data System (ADS)
Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song
2018-01-01
To resolve the problem of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on parallax constraint and clustering analysis is proposed. First, the Harris corner detection algorithm is used to extract the feature points of the two images. Second, the Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes the feature point pairs with obvious errors from the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final feature point matching result, so that fast and accurate image registration is realized. The experimental results show that the image registration algorithm proposed in this paper can improve the accuracy of image matching while ensuring the real-time performance of the algorithm.
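The NCC score used in the approximate-matching step can be sketched as follows (a minimal illustration on flattened patches, not the paper's implementation):

```python
def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equal-size image patches,
    given as flat lists of intensities. Returns a score in [-1, 1];
    values near 1 indicate a likely match."""
    n = len(patch_a)
    ma = sum(patch_a) / n
    mb = sum(patch_b) / n
    num = sum((a - ma) * (b - mb) for a, b in zip(patch_a, patch_b))
    da = sum((a - ma) ** 2 for a in patch_a) ** 0.5
    db = sum((b - mb) ** 2 for b in patch_b) ** 0.5
    return num / (da * db)
```

Because the score is invariant to affine intensity changes, it tolerates brightness differences between the two images, which is why it is a common choice for coarse matching before RANSAC refinement.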
Analysis of titanium content in titanium tetrachloride solution
NASA Astrophysics Data System (ADS)
Bi, Xiaoguo; Dong, Yingnan; Li, Shanshan; Guan, Duojiao; Wang, Jianyu; Tang, Meiling
2018-03-01
Strontium titanate, barium titanate and lead titanate are new types of functional ceramic materials with good prospects, and titanium tetrachloride is commonly used in the production of such products, which exhibit excellent electrochemical performance and a ferroelectric temperature coefficient effect. In this article, three methods are used to calibrate samples of titanium tetrachloride solution: the back titration method, the replacement titration method and the gravimetric analysis method. The results show that the back titration method has many good points, for example, relatively simple operation, easy judgment of the titration end point, and better accuracy and precision of analytical results, with a relative standard deviation of no more than 0.2%. It is therefore the ideal conventional analysis method for mass production.
Probabilistic analysis of structures involving random stress-strain behavior
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Thacker, B. H.; Harren, S. V.
1991-01-01
The present methodology for analysis of structures with random stress strain behavior characterizes the uniaxial stress-strain curve in terms of (1) elastic modulus, (2) engineering stress at initial yield, (3) initial plastic-hardening slope, (4) engineering stress at point of ultimate load, and (5) engineering strain at point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.
Barros, Marcos Alexandre; Cervone, Gabriel Lopes de Faria; Costa, André Luis Serigatti
2015-01-01
Objective To objectively and subjectively evaluate the functional result from before to after surgery among patients with a diagnosis of an isolated avulsion fracture of the posterior cruciate ligament who were treated surgically. Method Five patients were evaluated by means of reviewing the medical files, applying the Lysholm questionnaire, physical examination and radiological examination. For the statistical analysis, a significance level of 0.10 and 95% confidence interval were used. Results According to the Lysholm criteria, all the patients were classified as poor (<64 points) before the operation and evolved to a mean of 96 points six months after the operation. We observed that 100% of the posterior drawer cases became negative, taking values less than 5 mm to be negative. Conclusion Surgical methods with stable fixation for treating avulsion fractures at the tibial insertion of the posterior cruciate ligament produce acceptable functional results from the surgical and radiological points of view, with a significance level of 0.042. PMID:27218073
Analyzing coastal environments by means of functional data analysis
NASA Astrophysics Data System (ADS)
Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.
2017-07-01
Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension-reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful to describe variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with CA (Cluster Analysis). The first method, the point density function (PDF), was employed after fitting a log-normal distribution to each PSD and summarizing each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered log-ratio (clr) transformation to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, as a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.
Optimization and Comparison of Different Digital Mammographic Tomosynthesis Reconstruction Methods
2008-04-01
physical measurements of impulse response analysis, modulation transfer function (MTF) and noise power spectrum (NPS) (Months 5-12). This task has... and 2 impulse-added: projection images with simulated impulse and the 1/r2 shading difference. Other system blur and noise issues are not... blur, and suppressed high-frequency noise. Point-by-point BP rather than traditional SAA should be considered as the basis of further deblurring
DSS 43 antenna gain analysis for Voyager Uranus encounter: 8.45-GHz radio science data correction
NASA Technical Reports Server (NTRS)
Slobin, S. D.; Imbriale, W. A.
1987-01-01
A malfunction of the Deep Space Network (DSN) 64-meter antenna in Australia forced the antenna to operate with a mispositioned subreflector during the Voyager Uranus encounter period (January 24, 1986). Because of changing main reflector shape and quadripod position as a function of elevation angle, the antenna gain and pointing were not as expected, and the 8.45 GHz received signal level changed during the pass. The study described here used the Geometrical Theory of Diffraction (GTD) analysis to determine actual antenna gain and pointing during that period in an attempt to reconstruct the radio science data. It is found that the 1.4 dB of signal variation can be accounted for by antenna geometry changes and pointing error. Suggested modifications to the values measured during the pass are presented. Additionally, an extremely useful tool for the analysis of gravity deformed reflectors was developed for use in future antenna design and analysis projects.
Hostettler, Isabel Charlotte; Muroi, Carl; Richter, Johannes Konstantin; Schmid, Josef; Neidert, Marian Christoph; Seule, Martin; Boss, Oliver; Pangalu, Athina; Germans, Menno Robbert; Keller, Emanuela
2018-01-19
OBJECTIVE The aim of this study was to create prediction models for outcome parameters by decision tree analysis based on clinical and laboratory data in patients with aneurysmal subarachnoid hemorrhage (aSAH). METHODS The database consisted of clinical and laboratory parameters of 548 patients with aSAH who were admitted to the Neurocritical Care Unit, University Hospital Zurich. To examine the model performance, the cohort was randomly divided into a derivation cohort (60% [n = 329]; training data set) and a validation cohort (40% [n = 219]; test data set). The classification and regression tree prediction algorithm was applied to predict death, functional outcome, and ventriculoperitoneal (VP) shunt dependency. Chi-square automatic interaction detection was applied to predict delayed cerebral infarction on days 1, 3, and 7. RESULTS The overall mortality was 18.4%. The accuracy of the decision tree models was good for survival on day 1 and favorable functional outcome at all time points, with a difference between the training and test data sets of < 5%. Prediction accuracy for survival on day 1 was 75.2%. The most important differentiating factor was the interleukin-6 (IL-6) level on day 1. Favorable functional outcome, defined as Glasgow Outcome Scale scores of 4 and 5, was observed in 68.6% of patients. Favorable functional outcome at all time points had a prediction accuracy of 71.1% in the training data set, with procalcitonin on day 1 being the most important differentiating factor at all time points. A total of 148 patients (27%) developed VP shunt dependency. The most important differentiating factor was hyperglycemia on admission. CONCLUSIONS The multiple variable analysis capability of decision trees enables exploration of dependent variables in the context of multiple changing influences over the course of an illness. 
The decision tree currently generated increases awareness of the early systemic stress response, which is seemingly pertinent for prognostication.
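The classification-tree models above repeatedly pick the single most differentiating variable and threshold (e.g. day-1 IL-6). A minimal sketch of that split-selection step using CART's Gini criterion (illustrative only; the variable name and data are hypothetical, not from the study):

```python
def gini(labels):
    """Gini impurity of a label set (CART's default split criterion)."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Pick the threshold on one continuous variable (e.g. a day-1 IL-6
    level) that minimizes the weighted Gini impurity of the two children."""
    n = len(values)
    best = (float("inf"), None)
    for t in sorted(set(values))[:-1]:  # candidate thresholds
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best[0]:
            best = (score, t)
    return best[1]
```

A full tree applies this search recursively over all candidate variables, which is what lets the method rank "most important differentiating factors" at each node.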
Nontrivial thermodynamics in 't Hooft's large-N limit
NASA Astrophysics Data System (ADS)
Cubero, Axel Cortés
2015-05-01
We study the finite volume/temperature correlation functions of the (1+1)-dimensional SU(N) principal chiral sigma model in the planar limit. The exact S-matrix of the sigma model is known to simplify drastically at large N, and this leads to trivial thermodynamic Bethe ansatz (TBA) equations. The partition function, if derived using the TBA, can be shown to be that of free particles. We show that the correlation functions and expectation values of operators at finite volume/temperature are not those of the free theory, and that the TBA does not give enough information to calculate them. Our analysis is done using the LeClair-Mussardo formula for finite-volume correlators, and knowledge of the exact infinite-volume form factors. We present analytical results for the one-point function of the energy-momentum tensor, and the two-point function of the renormalized field operator. The results for the energy-momentum tensor can be used to define a nontrivial partition function.
A PDF closure model for compressible turbulent chemically reacting flows
NASA Technical Reports Server (NTRS)
Kollmann, W.
1992-01-01
The objective of the proposed research project was the analysis of single-point closures based on probability density function (pdf) and characteristic functions and the development of a prediction method for the joint velocity-scalar pdf in turbulent reacting flows. Turbulent flows of boundary layer type and stagnation point flows with and without chemical reactions were calculated as principal applications. Pdf methods for compressible reacting flows were developed and tested in comparison with available experimental data. The research work carried out in this project was concentrated on the closure of pdf equations for incompressible and compressible turbulent flows with and without chemical reactions.
UFO (UnFold Operator) user guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kissel, L.; Biggs, F.; Marking, T.R.
UFO is a collection of interactive utility programs for estimating unknown functions of one variable using a wide-ranging class of information as input, for miscellaneous data-analysis applications, for performing feasibility studies, and for supplementing our other software. Inverse problems, which include spectral unfolds, inverse heat-transfer problems, time-domain deconvolution, and unusual or difficult curve-fit problems, are classes of applications for which UFO is well suited. Extensive use of B-splines and (X,Y)-datasets is made to represent functions. The (X,Y)-dataset representation is unique in that it is not restricted to equally-spaced data. This feature is used, for example, in a table-generating algorithm that evaluates a function to a user-specified interpolation accuracy while minimizing the number of points stored in the corresponding dataset. UFO offers a variety of miscellaneous data-analysis options such as plotting, comparing, transforming, scaling, integrating, and adding, subtracting, multiplying, and dividing functions together. These options are often needed as intermediate steps in analyzing and solving difficult inverse problems, but they also find frequent use in other applications. Statistical options are available to calculate goodness-of-fit to measurements, specify error bands on solutions, give confidence limits on calculated quantities, and to point out the statistical consequences of operations such as smoothing. UFO is designed to do feasibility studies on a variety of engineering measurements. It is also tailored to supplement our Test Analysis and Design codes, SRAD Test-Data Archive software, and Digital Signal Analysis routines.
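The table-generating idea described above (store only as many points as the target interpolation accuracy requires) can be sketched as follows. This is a simplified illustration using linear interpolation rather than UFO's B-splines, and is not the UFO algorithm itself:

```python
def adaptive_table(f, lo, hi, tol):
    """Build a minimal table of abscissas on [lo, hi] by refining only
    where linear interpolation between stored points misses f by > tol."""
    xs = [lo, hi]

    def refine(a, b):
        mid = 0.5 * (a + b)
        interp = 0.5 * (f(a) + f(b))       # linear interpolant at midpoint
        if abs(interp - f(mid)) > tol:     # too inaccurate: keep midpoint
            refine(a, mid)
            xs.append(mid)
            refine(mid, b)

    refine(lo, hi)
    return sorted(xs)
```

On a linear function the table collapses to its two endpoints; on curved functions points accumulate only where the curvature demands them, which is the behavior the abstract describes for unequally-spaced (X,Y)-datasets.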
Interactions Between Secondhand Smoke and Genes That Affect Cystic Fibrosis Lung Disease
Collaco, J. Michael; Vanscoy, Lori; Bremer, Lindsay; McDougal, Kathryn; Blackman, Scott M.; Bowers, Amanda; Naughton, Kathleen; Jennings, Jacky; Ellen, Jonathan; Cutting, Garry R.
2011-01-01
Context Disease variation can be substantial even in conditions with a single gene etiology such as cystic fibrosis (CF). Simultaneously studying the effects of genes and environment may provide insight into the causes of variation. Objective To determine whether secondhand smoke exposure is associated with lung function and other outcomes in individuals with CF, whether socioeconomic status affects the relationship between secondhand smoke exposure and lung disease severity, and whether specific gene-environment interactions influence the effect of secondhand smoke exposure on lung function. Design, Setting, and Participants Retrospective assessment of lung function, stratified by environmental and genetic factors. Data were collected by the US Cystic Fibrosis Twin and Sibling Study with missing data supplemented by the Cystic Fibrosis Foundation Data Registry. All participants were diagnosed with CF, were recruited between October 2000 and October 2006, and were primarily from the United States. Main Outcome Measures Disease-specific cross-sectional and longitudinal measures of lung function. Results Of 812 participants with data on secondhand smoke in the home, 188 (23.2%) were exposed. Of 780 participants with data on active maternal smoking during gestation, 129 (16.5%) were exposed. Secondhand smoke exposure in the home was associated with significantly lower cross-sectional (9.8 percentile point decrease; P<.001) and longitudinal lung function (6.1 percentile point decrease; P=.007) compared with those not exposed. Regression analysis demonstrated that socioeconomic status did not confound the adverse effect of secondhand smoke exposure on lung function. 
Interaction between gene variants and secondhand smoke exposure resulted in significant percentile point decreases in lung function, namely in CFTR non-ΔF508 homozygotes (12.8 percentile point decrease; P=.001), TGFβ1-509 TT homozygotes (22.7 percentile point decrease; P=.006), and TGFβ1 codon 10 CC homozygotes (20.3 percentile point decrease; P=.005). Conclusions Any exposure to secondhand smoke adversely affects both cross-sectional and longitudinal measures of lung function in individuals with CF. Variations in the gene that causes CF (CFTR) and a CF-modifier gene (TGFβ1) amplify the negative effects of secondhand smoke exposure. PMID:18230779
Methods for scalar-on-function regression.
Reiss, Philip T; Goldsmith, Jeff; Shang, Han Lin; Ogden, R Todd
2017-08-01
Recent years have seen an explosion of activity in the field of functional data analysis (FDA), in which curves, spectra, images, etc. are considered as basic functional data units. A central problem in FDA is how to fit regression models with scalar responses and functional predictors. We review some of the main approaches to this problem, categorizing the basic model types as linear, nonlinear and nonparametric. We discuss publicly available software packages, and illustrate some of the procedures by application to a functional magnetic resonance imaging dataset.
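As an illustration of the linear model class, here is a minimal sketch (an illustrative assumption, not code from the paper or from any package it reviews) of scalar-on-function regression: the coefficient function β(t) is expanded in a small basis, which reduces y_i = α + ∫ X_i(t) β(t) dt + ε_i to ordinary least squares on quadrature-weighted basis scores.

```python
import random

random.seed(0)
grid = [j / 49 for j in range(50)]   # common observation grid on [0, 1]
dt = 1 / 49

def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system
    M = [A[i][:] + [b[i]] for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [u - f * v for u, v in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

# simulate curves X_i(t) = u_i + v_i * t and scalar responses
# y_i = alpha + integral X_i(t) beta(t) dt + noise, with beta(t) = 2 - 3t
n, alpha = 300, 0.5
rows, y = [], []
for _ in range(n):
    u, v = random.gauss(0, 1), random.gauss(0, 1)
    Xi = [u + v * t for t in grid]
    z0 = sum(x * dt for x in Xi)                      # score for basis function 1
    z1 = sum(x * t * dt for x, t in zip(Xi, grid))    # score for basis function t
    rows.append([1.0, z0, z1])
    y.append(alpha + 2.0 * z0 - 3.0 * z1 + random.gauss(0, 0.01))

# ordinary least squares on the scores: solve (R^T R) c = R^T y
RtR = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
Rty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
coef = solve3(RtR, Rty)
print([round(c, 2) for c in coef])   # close to [0.5, 2.0, -3.0]
```

The basis expansion is what makes the functional problem finite-dimensional; penalized variants add a roughness penalty on β(t) to the least-squares criterion.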
Homeostasis in a feed forward loop gene regulatory motif.
Antoneli, Fernando; Golubitsky, Martin; Stewart, Ian
2018-05-14
The internal state of a cell is affected by inputs from the extra-cellular environment such as external temperature. If some output, such as the concentration of a target protein, remains approximately constant as inputs vary, the system exhibits homeostasis. Special sub-networks called motifs are unusually common in gene regulatory networks (GRNs), suggesting that they may have a significant biological function. Potentially, one such function is homeostasis. In support of this hypothesis, we show that the feed-forward loop GRN produces homeostasis. Here the inputs are subsumed into a single parameter that affects only the first node in the motif, and the output is the concentration of a target protein. The analysis uses the notion of infinitesimal homeostasis, which occurs when the input-output map has a critical point (zero derivative). In model equations such points can be located using implicit differentiation. If the second derivative of the input-output map also vanishes, the critical point is a chair: the output rises roughly linearly, then flattens out (the homeostasis region or plateau), and then starts to rise again. Chair points are a common cause of homeostasis. We apply this method to a standard family of differential equations modeling the feed-forward loop GRN, in which the input determines the production of a particular mRNA, and deduce that chair points occur; the resulting chair points are found analytically. In more complicated equations or networks, numerical exploration would have to augment the analysis; thus, in terms of finding chairs, this paper presents a proof of concept. The same method can potentially be used to find homeostasis regions in other GRNs. In the discussion and conclusion section, we also discuss why homeostasis in the motif may persist even when the rest of the network is taken into account. Copyright © 2018 Elsevier Ltd. All rights reserved.
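The chair-point idea can be illustrated numerically on a toy input-output map (an assumed stand-in, not the paper's feed-forward loop model): scan for the input where the first derivative of the map vanishes, then check that the second derivative vanishes there too.

```python
# Locating a "chair" point: a point where the first AND second derivatives
# of the input-output map both vanish.  z(I) below is a hypothetical map
# with a chair at I = 1, used purely for illustration.
def z(I):
    return 1.0 + (I - 1.0) ** 3

h = 1e-4
def d1(f, x):                  # central first-difference approximation
    return (f(x + h) - f(x - h)) / (2 * h)
def d2(f, x):                  # central second-difference approximation
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

# scan the input range for the point minimising |z'|; a chair needs z'' ~ 0 too
grid = [i / 1000 for i in range(2001)]         # inputs in [0, 2]
chair = min(grid, key=lambda I: abs(d1(z, I)))
print(round(chair, 3), abs(d2(z, chair)) < 1e-3)   # 1.0 True
```

In model equations, as the abstract notes, the same conditions (z' = z'' = 0) can instead be imposed analytically via implicit differentiation.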
Liu, Li; Setse, Rosanna; Grogan, Ruby; Powe, Neil R; Nicholson, Wanda K
2013-06-03
Lower physical and social functioning in pregnancy has been linked to an increased risk of preterm delivery and low birth weight infants, but few studies have examined racial differences in pregnant women's perception of their functioning. Even fewer studies have elucidated the demographic and clinical factors contributing to racial differences in functioning. Our objective was to determine whether there are racial differences in health-related quality of life (HRQoL) in early pregnancy; and if so, to identify the contributions of socio-demographic characteristics, depression symptoms, social support and clinical factors to these differences. Cross-sectional study of 175 women in early pregnancy attending prenatal clinics in an urban setting. In multivariate analysis, we assessed the independent relation of black race (compared to white) to HRQoL scores from the eight domains of the Medical Outcomes Study Short Form (SF-36) survey: Physical Functioning, Role-Physical, Bodily Pain, Vitality, General Health, Social Functioning, Role-Emotional, and Mental Health. We compared socio-demographic and clinical factors and depression symptoms between black and white women and assessed the relative importance of these factors in explaining racial differences in physical and social functioning. Black women comprised 59% of the sample; white women comprised 41%. Before adjustment, black women had scores that were 14 points lower in Physical Function and Bodily Pain, 8 points lower in General Health, 4 points lower in Vitality and 7 points lower in Social Functioning. After adjustment for depression symptoms, social support and clinical factors, black women still had HRQoL scores that were 4 to 10 points lower than white women, but the differences were no longer statistically significant. Level of social support and payment source accounted for most of the variation in Physical Functioning, Bodily Pain and General Health.
Social support accounted for most of the differences in Vitality and Social Functioning. Payment source and social support accounted for much of the racial differences in physical and social function scores. Efforts to reduce racial differences might focus on improving social support networks and reducing socio-economic barriers.
A method of PSF generation for 3D brightfield deconvolution.
Tadrous, P J
2010-02-01
This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.
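For orientation, here is a generic non-blind deconvolution loop in one dimension (Richardson-Lucy multiplicative updates with an assumed known PSF; the paper's 3D brightfield setting involves absorption, refraction and dispersal effects not modelled in this toy sketch).

```python
# Toy 1-D Richardson-Lucy deconvolution with a known, symmetric PSF.
def convolve(signal, psf):
    # 'same'-size convolution, zero-padded at the edges
    k = len(psf) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, w in enumerate(psf):
            idx = i + j - k
            if 0 <= idx < len(signal):
                s += w * signal[idx]
        out.append(s)
    return out

psf = [0.25, 0.5, 0.25]                       # assumed (known) blur kernel
truth = [0, 0, 0, 4, 0, 0, 2, 0, 0, 0]        # two point-like structures
observed = convolve(truth, psf)               # noiseless blurred observation

estimate = [1.0] * len(truth)                 # flat initial guess
for _ in range(200):                          # RL multiplicative updates
    blurred = convolve(estimate, psf)
    ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
    correction = convolve(ratio, psf)         # symmetric PSF: its mirror is itself
    estimate = [e * c for e, c in zip(estimate, correction)]

print([round(e, 1) for e in estimate])        # sharpened toward the two peaks
```

The quality of the restoration hinges entirely on how well the assumed PSF matches the true one, which is the motivation for the extraction method described in the abstract.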
A Semiparametric Change-Point Regression Model for Longitudinal Observations.
Xing, Haipeng; Ying, Zhiliang
2012-12-01
Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when the experimental environment undergoes abrupt changes, or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in the regression coefficients. In this connection, we propose a semiparametric change-point regression model, in which the error process (stochastic component) is nonparametric and the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of change-points are unknown and need to be estimated. We further develop an estimation procedure which combines recent advances in semiparametric analysis based on counting process arguments with multiple change-point inference, and discuss its large sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.
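A stripped-down sketch of the change-point idea (single change-point in one regression coefficient, located by grid search; the paper's estimator handles multiple unknown change-points, subject-specific observation times and nonparametric components, none of which appear here).

```python
# Grid-search estimation of a single change-point in a regression slope:
# fit ordinary least squares on each side of every candidate split and
# keep the split minimising the total sum of squared errors.
import random

random.seed(1)
n, tau_true = 200, 120          # the slope changes at observation 120
x = [random.gauss(0, 1) for _ in range(n)]
y = [(1.5 if i < tau_true else -0.5) * x[i] + random.gauss(0, 0.2)
     for i in range(n)]

def sse(xs, ys):
    # SSE of the no-intercept OLS fit y = b*x on one segment
    b = sum(a * c for a, c in zip(xs, ys)) / sum(a * a for a in xs)
    return sum((c - b * a) ** 2 for a, c in zip(xs, ys))

best = min(range(20, n - 20),
           key=lambda t: sse(x[:t], y[:t]) + sse(x[t:], y[t:]))
print(best)   # estimated change-point, close to 120
```

Extending this to multiple change-points is typically done by binary segmentation or dynamic programming rather than an exhaustive grid.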
NASA Astrophysics Data System (ADS)
Hess, M. R.; Petrovic, V.; Kuester, F.
2017-08-01
Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials with the goal of building more accurate as-built information models of historical structures. User defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector or local surface geometry. Multiple case studies are presented here to demonstrate the flexibility and utility of the presented point cloud visualization framework to achieve classification objectives.
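A toy sketch of what such a user-defined classification function might look like (the field names, material labels and thresholds are all assumptions for illustration; the framework's actual interface is not described in the abstract).

```python
# Rule-based material classification over point records carrying RGB color,
# laser intensity and the z-component of the surface normal.
def classify(p):
    r, g, b = p['rgb']
    if p['nz'] > 0.9 and p['intensity'] > 0.6:
        return 'plaster'        # flat, bright, upward-facing surface
    if r > 1.5 * b and p['intensity'] < 0.4:
        return 'brick'          # reddish, weakly reflective
    return 'unclassified'

cloud = [
    {'rgb': (200, 90, 80),   'intensity': 0.3, 'nz': 0.10},
    {'rgb': (230, 225, 210), 'intensity': 0.8, 'nz': 0.95},
    {'rgb': (120, 120, 120), 'intensity': 0.5, 'nz': 0.20},
]
labels = [classify(p) for p in cloud]
print(labels)   # ['brick', 'plaster', 'unclassified']
```

In an interactive framework, such a function would be re-evaluated over the visible subset of points as the user tunes the thresholds.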
Microarray characterization of gene expression changes in blood during acute ethanol exposure
2013-01-01
Background As part of the civil aviation safety program to define the adverse effects of ethanol on flying performance, we performed a DNA microarray analysis of human whole blood samples from a five-time point study of subjects administered ethanol orally, followed by breathalyzer analysis, to monitor blood alcohol concentration (BAC) to discover significant gene expression changes in response to the ethanol exposure. Methods Subjects were administered either orange juice or orange juice with ethanol. Blood samples were taken based on BAC and total RNA was isolated from PaxGene™ blood tubes. The amplified cDNA was used in microarray and quantitative real-time polymerase chain reaction (RT-qPCR) analyses to evaluate differential gene expression. Microarray data were analyzed in a pipeline fashion to summarize and normalize them, and the results were evaluated for relative expression across time points with multiple methods. Candidate genes showing distinctive expression patterns in response to ethanol were clustered by pattern and further analyzed for related function, pathway membership and common transcription factor binding within and across clusters. RT-qPCR was used with representative genes to confirm relative transcript levels across time against those detected in microarrays. Results Microarray analysis of samples representing 0%, 0.04%, 0.08%, return to 0.04%, and 0.02% wt/vol BAC showed that changes in gene expression could be detected across the time course. The expression changes were verified by RT-qPCR. The candidate genes of interest (GOI) identified from the microarray analysis and clustered by expression pattern across the five BAC points showed seven coordinately expressed groups. Analysis showed function-based networks, shared transcription factor binding sites and signaling pathways for members of the clusters.
These include hematological functions, innate immunity and inflammation functions, metabolic functions expected of ethanol metabolism, and pancreatic and hepatic function. Five of the seven clusters showed links to the p38 MAPK pathway. Conclusions The results of this study provide a first look at changing gene expression patterns in human blood during an acute rise in blood ethanol concentration and its depletion because of metabolism and excretion, and demonstrate that it is possible to detect changes in gene expression using total RNA isolated from whole blood. The analysis approach for this study serves as a workflow to investigate the biology linked to expression changes across a time course and from these changes, to identify target genes that could serve as biomarkers linked to pilot performance. PMID:23883607
Synthesis and optimization of four bar mechanism with six design parameters
NASA Astrophysics Data System (ADS)
Jaiswal, Ankur; Jawale, H. P.
2018-04-01
Function generation is the synthesis of a mechanism for a specific task; it becomes especially complex when more than five precision points of the coupler are required, and thus entails large structural error. A methodology for arriving at a more precise solution is to use optimization techniques. The work presented herein considers methods for optimizing the structural error of a single-degree-of-freedom closed kinematic chain generating functions such as log(x), e^x, tan(x) and sin(x) with five precision points. The equation of the Freudenstein-Chebyshev method is used to develop a five-point synthesis of the mechanism. An extended formulation is proposed, and results are obtained that verify existing results in the literature. Optimization of the structural error is carried out using a least-squares approach. A comparative analysis of the structural error optimized by the least-squares method and by the extended Freudenstein-Chebyshev method is presented.
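The linear structure that makes five-point synthesis tractable can be sketched as follows (the K values and angle convention below are illustrative assumptions, not the paper's numbers): the Freudenstein equation K1·cos(phi) − K2·cos(psi) + K3 = cos(phi − psi) is linear in the link-ratio constants K1..K3, so five precision points give an overdetermined 5×3 system solvable by least squares.

```python
# Five-precision-point synthesis via the Freudenstein equation.
import math

K_true = (2.0, 1.5, -0.8)            # hypothetical link-ratio constants

def psi_of(phi, K):
    # closed-form output angle satisfying the Freudenstein equation:
    # rewrite the RHS as R*cos(psi - delta) and invert
    K1, K2, K3 = K
    R = math.hypot(K2 + math.cos(phi), math.sin(phi))
    delta = math.atan2(math.sin(phi), K2 + math.cos(phi))
    return delta + math.acos((K1 * math.cos(phi) + K3) / R)

phis = [0.2 + 0.2 * i for i in range(5)]          # five precision points
psis = [psi_of(p, K_true) for p in phis]

A = [[math.cos(p), -math.cos(s), 1.0] for p, s in zip(phis, psis)]
b = [math.cos(p - s) for p, s in zip(phis, psis)]

# least squares via the normal equations (A^T A) K = A^T b, Gauss-Jordan solve
AtA = [[sum(r[i] * r[j] for r in A) for j in range(3)] for i in range(3)]
Atb = [sum(r[i] * bi for r, bi in zip(A, b)) for i in range(3)]
M = [AtA[i] + [Atb[i]] for i in range(3)]
for c in range(3):
    p = max(range(c, 3), key=lambda r: abs(M[r][c]))
    M[c], M[p] = M[p], M[c]
    for r in range(3):
        if r != c:
            f = M[r][c] / M[c][c]
            M[r] = [u - f * v for u, v in zip(M[r], M[c])]
K_est = [M[i][3] / M[i][i] for i in range(3)]
print([round(k, 3) for k in K_est])   # recovers [2.0, 1.5, -0.8]
```

With more than three precision points the system is generally inconsistent for a real task function, and the least-squares residual is exactly the structural error being minimized.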
Reider, L; Hawkes, W; Hebel, J R; D'Adamo, C; Magaziner, J; Miller, R; Orwig, D; Alley, D E
2013-01-01
To determine whether body mass index (BMI) at the time of hospitalization or weight change in the period immediately following hospitalization predict physical function in the year after hip fracture. Prospective observational study. Two hospitals in Baltimore, Maryland. Female hip fracture patients age 65 years or older (N=136 for BMI analysis, N=41 for analysis of weight change). Body mass index was calculated based on weight and height from the medical chart. Weight change was based on DXA scans at 3 and 10 days post fracture. Physical function was assessed at 2, 6 and 12 months following fracture using the lower extremity gain scale (LEGS), walking speed and grip strength. LEGS score and walking speed did not differ across BMI tertiles. However, grip strength differed significantly across BMI tertiles (p=0.029), with underweight women having lower grip strength than normal weight women at all time points. Women experiencing the most weight loss (>4.8%) had significantly lower LEGS scores at all time points, slower walking speed at 6 months, and weaker grip strength at 12 months post-fracture relative to women with more modest weight loss. In adjusted models, overall differences in function and functional change across all time points were not significant. However, at 12 months post fracture, women with the most weight loss had an average grip strength 7.0 kg lower than women with modest weight loss (p=0.030). Adjustment for confounders accounts for much of the relationship between BMI and function and between weight change and function in the year after fracture. However, weight loss is associated with weakness during hip fracture recovery. Weight loss during and immediately after hospitalization appears to identify women at risk of poor function and may represent an important target for future interventions.
Analysis of random signal combinations for spacecraft pointing stability
NASA Technical Reports Server (NTRS)
Howell, L.
1983-01-01
Methods for obtaining the probability density function of random signal combinations are discussed. These methods provide realistic criteria for the design of control systems subjected to external noise, with several important applications to aerospace problems.
Single toxin dose-response models revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demidenko, Eugene, E-mail: eugened@dartmouth.edu
The goal of this paper is to offer a rigorous analysis of the sigmoid shape single toxin dose-response relationship. The toxin efficacy function is introduced and four special points, including maximum toxin efficacy and inflection points, on the dose-response curve are defined. The special points define three phases of the toxin effect on mortality: (1) toxin concentrations smaller than the first inflection point or (2) larger than the second inflection point imply a low mortality rate, and (3) concentrations between the first and the second inflection points imply a high mortality rate. Probabilistic interpretation and mathematical analysis for each of the four models, Hill, logit, probit, and Weibull, are provided. Two general model extensions are introduced: (1) the multi-target hit model that accounts for the existence of several vital receptors affected by the toxin, and (2) a model with nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model with the binomial dependent variable as the mortality count in each experiment, contrary to the widespread nonlinear regression treating the mortality rate as a continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO4 toxin. - Highlights: • The paper offers a rigorous study of a sigmoid dose-response relationship. • The concentration with the highest mortality rate is rigorously defined. • A table with four special points for five mortality curves is presented. • Two new sigmoid dose-response models have been introduced. • The generalized linear model is advocated for estimation of the sigmoid dose-response relationship.
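One of the special points can be illustrated numerically (an assumed Hill form for illustration, not the paper's full analysis): taking the toxin efficacy as the slope dP/dd of a Hill mortality curve, the dose of maximum efficacy has the closed form d = K·((n−1)/(n+1))^(1/n), which a simple scan recovers.

```python
# Locate the maximum-efficacy dose of a Hill dose-response curve numerically
# and compare with the analytic value; for n = 2, K = 1 it is 1/sqrt(3).
import math

n_hill, K = 2.0, 1.0
def P(d):                                  # Hill mortality curve
    return d ** n_hill / (K ** n_hill + d ** n_hill)

h = 1e-5
def efficacy(d):                           # slope dP/dd by central difference
    return (P(d + h) - P(d - h)) / (2 * h)

grid = [i / 10000 for i in range(1, 30001)]      # doses in (0, 3]
d_star = max(grid, key=efficacy)
print(round(d_star, 3), round(1 / math.sqrt(3), 3))   # prints: 0.577 0.577
```

For the Hill model this maximum-efficacy dose coincides with the inflection point of the dose-response curve itself, which is why the special points bound the high-mortality-rate phase.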
Renault, Hugues
2013-01-01
The non-protein amino acid γ-aminobutyric acid (GABA) accumulates in plants in response to a wide variety of environmental cues. Recent data point toward an involvement of GABA in tricarboxylic acid (TCA) cycle activity and respiration, especially in stressed roots. To gain further insights into potential GABA functions in plants, phylogenetic and bioinformatic approaches were undertaken. Phylogenetic reconstruction of the GABA transaminase (GABA-T) protein family revealed the monophyletic nature of plant GABA-Ts. However, this analysis also pointed to the common origin of several plant aminotransferases families, which were found more similar to plant GABA-Ts than yeast and human GABA-Ts. A computational analysis of AtGABA-T co-expressed genes was performed in roots and in stress conditions. This second approach uncovered a strong connection between GABA metabolism and glyoxylate cycle during stress. Both in silico analyses open new perspectives and hypotheses for GABA metabolic functions in plants. PMID:23518583
A complex analysis approach to the motion of uniform vortices
NASA Astrophysics Data System (ADS)
Riccardi, Giorgio
2018-02-01
A new mathematical approach to kinematics and dynamics of planar uniform vortices in an incompressible inviscid fluid is presented. It is based on an integral relation between the Schwarz function of the vortex boundary and the induced velocity. This relation is firstly used for investigating the kinematics of a vortex having its Schwarz function with two simple poles in a transformed plane. The vortex boundary is the image of the unit circle through the conformal map obtained by conjugating its Schwarz function. The resulting analysis is based on geometric and algebraic properties of that map. Moreover, it is shown that the steady configurations of a uniform vortex, possibly in presence of point vortices, can be also investigated by means of the integral relation. The vortex equilibria are divided into two classes, depending on the behavior of the velocity on the boundary, measured in a reference system rotating with this curve. If it vanishes, the analysis is rather simple. However, vortices having nonvanishing relative velocity are also investigated, in presence of a polygonal symmetry. In order to study the vortex dynamics, the definition of Schwarz function is then extended to a Lagrangian framework. This Lagrangian Schwarz function solves a nonlinear integrodifferential Cauchy problem, that is transformed into a singular integral equation. Its analytical solution is here approached in terms of successive approximations. The self-induced dynamics, as well as the interactions with a point vortex, or between two uniform vortices are analyzed.
Regional climate change predictions from the Goddard Institute for Space Studies high resolution GCM
NASA Technical Reports Server (NTRS)
Crane, Robert G.; Hewitson, B. C.
1991-01-01
A new diagnostic tool is developed for examining relationships between the synoptic scale circulation and regional temperature distributions in GCMs. The 4 x 5 deg GISS GCM is shown to produce accurate simulations of the variance in the synoptic scale sea level pressure distribution over the U.S. An analysis of the observational data set from the National Meteorological Center (NMC) also shows a strong relationship between the synoptic circulation and grid point temperatures. This relationship is demonstrated by deriving transfer functions between a time-series of circulation parameters and temperatures at individual grid points. The circulation parameters are derived using rotated principal components analysis, and the temperature transfer functions are based on multivariate polynomial regression models. The application of these transfer functions to the GCM circulation indicates that there is considerable spatial bias present in the GCM temperature distributions. The transfer functions are also used to indicate the possible changes in U.S. regional temperatures that could result from differences in synoptic scale circulation between a 1xCO2 and a 2xCO2 climate, using a doubled CO2 version of the same GISS GCM.
Image formation of volume holographic microscopy using point spread functions
NASA Astrophysics Data System (ADS)
Luo, Yuan; Oh, Se Baek; Kou, Shan Shan; Lee, Justin; Sheppard, Colin J. R.; Barbastathis, George
2010-04-01
We present a theoretical formulation to quantify the imaging properties of volume holographic microscopy (VHM). Volume holograms are formed by exposure of a photosensitive recording material to the interference of two mutually coherent optical fields. Recently, it has been shown that a volume holographic pupil has spatial and spectral sectioning capability for fluorescent samples. Here, we analyze the point spread function (PSF) to assess the imaging behavior of the VHM with a point source and detector. The coherent PSF of the VHM is derived, and the results are compared with those from conventional microscopy, and confocal microscopy with point and slit apertures. According to our analysis, the PSF of the VHM can be controlled in the lateral direction by adjusting the parameters of the VH. Compared with confocal microscopes, the performance of the VHM is comparable or even potentially better, and the VHM is also able to achieve real-time and three-dimensional (3D) imaging due to its multiplexing ability.
Kayano, Mitsunori; Matsui, Hidetoshi; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru
2016-04-01
High-throughput time course expression profiles have been available in the last decade due to developments in measurement techniques and devices. Functional data analysis, which treats smoothed curves instead of originally observed discrete data, is effective for the time course expression profiles in terms of dimension reduction, robustness, and applicability to data measured at small and irregularly spaced time points. However, the statistical method of differential analysis for time course expression profiles has not been well established. We propose a functional logistic model based on elastic net regularization (F-Logistic) in order to identify the genes with dynamic alterations in case/control study. We employ a mixed model as a smoothing method to obtain functional data; then F-Logistic is applied to time course profiles measured at small and irregularly spaced time points. We evaluate the performance of F-Logistic in comparison with another functional data approach, i.e. functional ANOVA test (F-ANOVA), by applying the methods to real and synthetic time course data sets. The real data sets consist of the time course gene expression profiles for long-term effects of recombinant interferon β on disease progression in multiple sclerosis. F-Logistic distinguishes dynamic alterations, which cannot be found by competitive approaches such as F-ANOVA, in case/control study based on time course expression profiles. F-Logistic is effective for time-dependent biomarker detection, diagnosis, and therapy. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Silveira, Hevely Saray Lima; Simões-Zenari, Marcia; Kulcsar, Marco Aurélio; Cernea, Claudio Roberto; Nemr, Kátia
2017-10-27
The supracricoid partial laryngectomy allows the preservation of laryngeal functions with good local cancer control. To assess laryngeal configuration and voice analysis data following the performance of a combination of two vocal exercises: the prolonged /b/ vocal exercise combined with the vowel /e/ using chest and arm pushing, with different durations, among individuals who have undergone supracricoid laryngectomy. Eleven patients who had undergone supracricoid partial laryngectomy with cricohyoidoepiglottopexy (CHEP) were evaluated using voice recordings. Four judges separately performed an auditory-perceptual analysis of the voices, presented in random order. For the intrajudge reliability analysis, 70% of the voice samples were repeated. The intraclass correlation coefficient was used to analyze the reliability of the judges. For each judge's comparison between baseline (time point 0), after the first series of exercises (time point 1), after the second series (time point 2), after the third series (time point 3), after the fourth series (time point 4), and after the fifth and final series (time point 5), the Friedman test was used with a significance level of 5%. The data relative to the configuration of the larynx were subjected to a descriptive analysis. In the evaluation, the results of judge 1, who showed the greatest reliability, were considered. There was an improvement in the overall grade of vocal deviation, roughness, and breathiness from time point 4 (T4) onward. The prolonged /b/ vocal exercise, combined with the vowel /e/ using chest- and arm-pushing exercises, was associated with an improvement in the overall grade of vocal deviation, roughness, and breathiness starting at time point 4 among patients who had undergone supracricoid laryngectomy with CHEP reconstruction. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
ERIC Educational Resources Information Center
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
Strategies for efficient resolution analysis in full-waveform inversion
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Leeuwen, T.; Trampert, J.
2016-12-01
Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for a few selected wavenumbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
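The random probing idea can be sketched at toy scale: for random ±1 (Rademacher) probe vectors v, the expectation of the outer product (Hv)vᵀ equals H, so averaging Hessian-vector applications recovers rows of H, i.e. point-spread functions, without ever forming H. The small dense matrix below is an assumed stand-in for the adjoint-computed Hessian-vector product.

```python
# Estimate an operator H from matrix-vector products only, by averaging
# (H v) v^T over random +/-1 probe vectors: E[(Hv)v^T] = H.
import random

random.seed(0)
m = 6
H = [[1.0 if i == j else 0.3 ** abs(i - j) for j in range(m)] for i in range(m)]

def apply_H(v):                      # stand-in for an adjoint Hessian product
    return [sum(H[i][j] * v[j] for j in range(m)) for i in range(m)]

n_probe = 20000
est = [[0.0] * m for _ in range(m)]
for _ in range(n_probe):
    v = [random.choice((-1.0, 1.0)) for _ in range(m)]
    Hv = apply_H(v)
    for i in range(m):
        for j in range(m):
            est[i][j] += Hv[i] * v[j] / n_probe

err = max(abs(est[i][j] - H[i][j]) for i in range(m) for j in range(m))
print(err < 0.05)   # True: probed rows match the true point-spread functions
```

In full-waveform inversion each `apply_H` call costs a pair of forward/adjoint simulations, which is why only a handful of probes (comparable to a few conjugate-gradient iterations) can be afforded, and why cheaper proxies such as resolution lengths are extracted rather than the full matrix.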
A new approach to assess COPD by identifying lung function break-points
Eriksson, Göran; Jarenbäck, Linnea; Peterson, Stefan; Ankerst, Jaro; Bjermer, Leif; Tufvesson, Ellen
2015-01-01
Purpose COPD is a progressive disease, which can take different routes, leading to great heterogeneity. The aim of the post-hoc analysis reported here was to perform continuous analyses of advanced lung function measurements, using linear and nonlinear regressions. Patients and methods Fifty-one COPD patients with mild to very severe disease (Global Initiative for Chronic Obstructive Lung Disease [GOLD] Stages I–IV) and 41 healthy smokers were investigated post-bronchodilation by flow-volume spirometry, body plethysmography, diffusion capacity testing, and impulse oscillometry. The relationship between COPD severity, based on forced expiratory volume in 1 second (FEV1), and different lung function parameters was analyzed by a flexible nonparametric method, linear regression, and segmented linear regression with break-points. Results Most lung function parameters were nonlinear in relation to spirometric severity. Parameters related to volume (residual volume, functional residual capacity, total lung capacity, diffusion capacity [diffusion capacity of the lung for carbon monoxide], diffusion capacity of the lung for carbon monoxide/alveolar volume) and reactance (reactance area and reactance at 5 Hz) were segmented with break-points at 60%–70% of FEV1. FEV1/forced vital capacity (FVC) and resonance frequency had break-points around 80% of FEV1, while many resistance parameters had break-points below 40%. The slopes in percent predicted differed; resistance at 5 Hz minus resistance at 20 Hz had a linear slope change of −5.3 per unit FEV1, while residual volume had no slope change above, and a change of −3.3 per unit FEV1 below, its break-point of 61%. Conclusion Continuous analyses of different lung function parameters over the spirometric COPD severity range gave valuable information additional to categorical analyses.
Parameters related to volume, diffusion capacity, and reactance showed break-points around 65% of FEV1, indicating that air trapping starts to dominate in moderate COPD (FEV1 =50%–80%). This may have an impact on the patient’s management plan and selection of patients and/or outcomes in clinical research. PMID:26508849
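The segmented-regression idea can be sketched on synthetic data (an assumed break at 65% of FEV1, mimicking the volume parameters above; this is not the authors' exact procedure): grid-search the break-point, fit ordinary least squares at each candidate, and keep the candidate with minimal residual sum of squares.

```python
# Segmented linear regression with one break-point, estimated by grid search.
import random

random.seed(2)
bp_true = 65.0                       # hypothetical break at FEV1 = 65% predicted
xs = [30 + 0.6 * i for i in range(100)]        # FEV1 from 30 to ~89% predicted
# flat above the break-point, slope -3.3 per unit FEV1 below it
ys = [100 - 3.3 * max(0.0, bp_true - x) + random.gauss(0, 1.0) for x in xs]

def fit_sse(bp):
    # design: y ~ a + c * max(0, bp - x); returns SSE of the 2-parameter OLS fit
    z = [max(0.0, bp - x) for x in xs]
    n = len(xs)
    sz = sum(z); szz = sum(v * v for v in z)
    sy = sum(ys); szy = sum(v * y for v, y in zip(z, ys))
    det = n * szz - sz * sz
    a = (szz * sy - sz * szy) / det
    c = (n * szy - sz * sy) / det
    return sum((y - a - c * v) ** 2 for v, y in zip(z, ys))

grid = [40 + 0.5 * k for k in range(81)]       # candidate break-points 40..80
bp_hat = min(grid, key=fit_sse)
print(bp_hat)   # close to 65.0
```

The hinge term max(0, bp − x) keeps the fit continuous at the break-point, matching the "no slope change above, slope change below" pattern reported for residual volume.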
Constructing graph models for software system development and analysis
NASA Astrophysics Data System (ADS)
Pogrebnoy, Andrey V.
2017-01-01
We propose a concept for creating instrumentation to support the rationale for functional and structural decisions during software system (SS) development. We propose to develop the SS simultaneously on two models - a functional model (FM) and a structural model (SM). The FM is the source code of the SS. An adequate representation of the FM in the form of a graph model (GM) is generated automatically and is called the SM. The problem of creating and visualizing the GM is considered from the point of view of applying it as a uniform platform for the adequate representation of the SS source code. We propose three levels of GM detailing: GM1 - for visual analysis of the source code and for SS version control, GM2 - for resource optimization and analysis of connections between SS components, GM3 - for analysis of the SS functioning in dynamics. The paper includes examples of constructing all levels of the GM.
Meta-analysis of chicken--salmonella infection experiments.
Te Pas, Marinus F W; Hulsegge, Ina; Schokker, Dirkjan; Smits, Mari A; Fife, Mark; Zoorob, Rima; Endale, Marie-Laure; Rebel, Johanna M J
2012-04-24
Chicken meat and eggs can be a source of human zoonotic pathogens, especially Salmonella species. These food items thus pose a potential hazard for humans. Chicken lines differ in susceptibility to Salmonella and can harbor Salmonella pathogens without showing clinical signs of illness. Many investigations, including genomic studies, have examined the mechanisms by which chickens react to infection. Apart from the innate immune response, many physiological mechanisms and pathways are reported to be involved in the chicken host response to Salmonella infection. The objective of this study was to perform a meta-analysis of diverse experiments to identify general and host-specific mechanisms of the response to the Salmonella challenge. Diverse chicken lines differing in susceptibility to Salmonella infection were challenged with different Salmonella serovars at several time points. Various tissues were sampled at different time points post-infection, and the resulting host transcriptional differences were investigated using different microarray platforms. The meta-analysis was performed with the R package metaMA to create lists of differentially regulated genes. These gene lists showed many similarities for different chicken breeds and tissues, and also for different Salmonella serovars measured at different times post-infection. Functional biological analysis of these differentially expressed gene lists revealed several common mechanisms for the chicken host response to Salmonella infection. The meta-analysis-specific genes (i.e., genes found differentially expressed only in the meta-analysis) confirmed and expanded the biological functional mechanisms. The meta-analytic combination of heterogeneous expression profiling data provided useful insights into the common metabolic pathways and functions of different chicken lines infected with different Salmonella serovars.
Meta-analysis of Chicken – Salmonella infection experiments
2012-01-01
Background Chicken meat and eggs can be a source of human zoonotic pathogens, especially Salmonella species. These food items thus pose a potential hazard for humans. Chicken lines differ in susceptibility to Salmonella and can harbor Salmonella pathogens without showing clinical signs of illness. Many investigations, including genomic studies, have examined the mechanisms by which chickens react to infection. Apart from the innate immune response, many physiological mechanisms and pathways are reported to be involved in the chicken host response to Salmonella infection. The objective of this study was to perform a meta-analysis of diverse experiments to identify general and host-specific mechanisms of the response to the Salmonella challenge. Results Diverse chicken lines differing in susceptibility to Salmonella infection were challenged with different Salmonella serovars at several time points. Various tissues were sampled at different time points post-infection, and the resulting host transcriptional differences were investigated using different microarray platforms. The meta-analysis was performed with the R package metaMA to create lists of differentially regulated genes. These gene lists showed many similarities for different chicken breeds and tissues, and also for different Salmonella serovars measured at different times post-infection. Functional biological analysis of these differentially expressed gene lists revealed several common mechanisms for the chicken host response to Salmonella infection. The meta-analysis-specific genes (i.e., genes found differentially expressed only in the meta-analysis) confirmed and expanded the biological functional mechanisms. Conclusions The meta-analytic combination of heterogeneous expression profiling data provided useful insights into the common metabolic pathways and functions of different chicken lines infected with different Salmonella serovars. PMID:22531008
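The inverse-normal (Stouffer) combination is one standard way such meta-analyses merge per-study evidence into a single differential-expression call; metaMA uses moderated effect-size and p-value combination techniques of this family. A hedged Python sketch with made-up p-values (not values from these experiments):

```python
import math
from statistics import NormalDist

def stouffer(pvalues, weights=None):
    """Combine one-sided p-values with the weighted inverse-normal (Stouffer) method."""
    nd = NormalDist()
    if weights is None:
        weights = [1.0] * len(pvalues)
    # Convert each p-value to a z-score, combine, and convert back.
    z = sum(w * nd.inv_cdf(1.0 - p) for w, p in zip(weights, pvalues))
    z /= math.sqrt(sum(w * w for w in weights))
    return 1.0 - nd.cdf(z)  # combined one-sided p-value

combined = stouffer([0.04, 0.10, 0.03])
```

Consistent moderate evidence across studies yields a combined p-value smaller than any individual one, which is how meta-analysis-specific genes can emerge.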
ERIC Educational Resources Information Center
Shrestha, Akriti; Anderson, Angelika; Moore, Dennis W.
2013-01-01
This study examined the effectiveness of point-of-view video modeling in a forward-chaining procedure to teach a 4-year-old boy with autism to serve himself an afternoon snack. Task analysis was undertaken, and the task was divided into 3 phases with 1 video produced for each phase. A changing criterion design was used to evaluate the effects of…
On Special Functions in the Context of Clifford Analysis
NASA Astrophysics Data System (ADS)
Malonek, H. R.; Falcão, M. I.
2010-09-01
Considering the foundation of Quaternionic Analysis by R. Fueter and his collaborators at the beginning of the 1930s as the starting point of Clifford Analysis, we can look back on 80 years of work in this field. However, interest in multivariate analysis using Clifford algebras only started to grow significantly in the 70s. Since then a great number of papers on Clifford Analysis referring to different classes of Special Functions have appeared. This situation may have been triggered by a more systematic treatment of monogenic functions via their multiple series developments derived from Gegenbauer or associated Legendre polynomials (and not only by their integral representations). Approaches to Special Functions by means of algebraic methods, either through Lie algebras or through Lie groups and symmetric spaces, also gained importance at that time and influenced their treatment in Clifford Analysis. In our talk we will rely on the generalization of the classical approach to Special Functions through differential equations with respect to the hypercomplex derivative, a more recently developed tool in Clifford Analysis. In this context special attention will be paid to the role of Special Functions as intermediaries between continuous and discrete mathematics. This corresponds to a more recent trend in combinatorics, since it has been revealed that many algebraic structures have hidden combinatorial underpinnings.
Point processes in arbitrary dimension from fermionic gases, random matrix theory, and number theory
NASA Astrophysics Data System (ADS)
Torquato, Salvatore; Scardicchio, A.; Zachary, Chase E.
2008-11-01
It is well known that one can map certain properties of random matrices, fermionic gases, and zeros of the Riemann zeta function to a unique point process on the real line ℝ. Here we analytically provide exact generalizations of such a point process in d-dimensional Euclidean space ℝ^d for any d, which are special cases of determinantal processes. In particular, we obtain the n-particle correlation functions for any n, which completely specify the point processes in ℝ^d. We also demonstrate that spin-polarized fermionic systems in ℝ^d have these same n-particle correlation functions in each dimension. The point processes for any d are shown to be hyperuniform, i.e., infinite-wavelength density fluctuations vanish, and the structure factor (or power spectrum) S(k) has a non-analytic behavior at the origin given by S(k) ~ |k| as k → 0. The latter result implies that the pair correlation function g2(r) tends to unity for large pair distances with a decay rate that is controlled by the power law 1/r^(d+1), which is a well-known property of bosonic ground states and more recently has been shown to characterize maximally random jammed sphere packings. We graphically display one- and two-dimensional realizations of the point processes in order to vividly reveal their 'repulsive' nature. Indeed, we show that the point processes can be characterized by an effective 'hard core' diameter that grows like the square root of d. The nearest-neighbor distribution functions for these point processes are also evaluated and rigorously bounded. Among other results, this analysis reveals that the probability of finding a large spherical cavity of radius r in dimension d behaves like a Poisson point process but in dimension d+1, i.e., this probability is given by exp[−κ(d) r^(d+1)] for large r and finite d, where κ(d) is a positive d-dependent constant.
We also show that as d increases, the point process behaves effectively like a sphere packing with a coverage fraction of space that is no denser than 1/2^d. This coverage fraction has a special significance in the study of sphere packings in high-dimensional Euclidean spaces.
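For the d = 1 member of this family (the sine-kernel process at unit density), the pair correlation function has the well-known closed form g2(r) = 1 − (sin(πr)/(πr))², which exhibits exactly the 1/r^(d+1) = 1/r² approach to unity described above. A small Python sketch:

```python
import math

def g2_fermi_1d(r: float) -> float:
    """Pair correlation of the d=1 fermionic (sine-kernel) point process at unit density."""
    if r == 0.0:
        return 0.0  # perfect repulsion: zero probability of coincident points
    s = math.sin(math.pi * r) / (math.pi * r)
    return 1.0 - s * s
```

Evaluating the function shows the 'repulsive' hole near r = 0 and the decay of |g2(r) − 1|, bounded by 1/(πr)², at large separations.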
Economic Analysis of Redesign Alternatives for the RESFMS Information System
1992-09-01
input parameters to produce the variations of output (Pressman, 1992). Thus, if system maintenance dictates that the procedure needs to be modified, it...easily counted, and that "a large body of literature and data predicated on LOC already exists." (Pressman, 1992) Another term for LOC, used by Boehm... (Pressman, 1992). D. FUNCTION POINTS An alternative to size metrics such as LOC is the measurement of software "functionality" or "utility." Function
Zheng, Jun; Yu, Zhiyuan; Ma, Lu; Guo, Rui; Lin, Sen; You, Chao; Li, Hao
2018-03-16
Intracerebral hemorrhage (ICH) is a devastating subtype of stroke. Patients with ICH have poor functional outcomes. The association between blood glucose level and functional outcome in ICH remains unclear. This systematic review and meta-analysis aimed to investigate the association between blood glucose level and functional outcomes in patients with ICH. Literature was searched systematically in PubMed, EMBASE, Web of Science, and Cochrane Library. Published cohort studies evaluating the association between blood glucose and functional outcome in patients with ICH were included. This meta-analysis was performed using odds ratios (ORs) and 95% confidence intervals (CIs). A total of 16 studies were included in our meta-analysis. Our data show that hyperglycemia defined by cutoff values was significantly associated with unfavorable functional outcome (OR, 1.80; 95% CI, 1.36-2.39; P < 0.001). Our analysis also suggested a significant association between increased blood glucose levels and functional outcomes (OR, 1.05; 95% CI, 1.03-1.07; P < 0.001). High blood glucose level is significantly associated with poor functional outcome in ICH. Further studies with larger sample sizes, more time points, and longer follow-up times are necessary to confirm this association. Copyright © 2018 Elsevier Inc. All rights reserved.
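The pooling step in such a meta-analysis is typically inverse-variance weighting of log odds ratios. The Python sketch below shows the fixed-effect version with invented study values (not the 16 studies analyzed here):

```python
import math

def pool_odds_ratios(ors, cis):
    """Fixed-effect inverse-variance pooling of odds ratios given their 95% CIs."""
    num = den = 0.0
    for or_value, (lo, hi) in zip(ors, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log OR from the CI
        w = 1.0 / (se * se)
        num += w * math.log(or_value)
        den += w
    log_pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    ci = (math.exp(log_pooled - 1.96 * se_pooled),
          math.exp(log_pooled + 1.96 * se_pooled))
    return math.exp(log_pooled), ci

# Two hypothetical cohort studies reporting OR (95% CI).
pooled, ci = pool_odds_ratios([1.6, 2.1], [(1.1, 2.3), (1.4, 3.2)])
```

Because the pooled log OR is a weighted mean, the result always lies between the individual study estimates, with a narrower confidence interval.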
[Image processing applying in analysis of motion features of cultured cardiac myocyte in rat].
Teng, Qizhi; He, Xiaohai; Luo, Daisheng; Wang, Zhengrong; Zhou, Beiyi; Yuan, Zhirun; Tao, Dachang
2007-02-01
Study of the mechanisms of drug action by quantitative analysis of cultured cardiac myocytes is one of the cutting-edge research topics in myocyte dynamics and molecular biology. The fact that cardiac myocytes beat spontaneously without external stimulation makes this research meaningful. Studying the morphology and motion of cardiac myocytes using image analysis can reveal the fundamental mechanisms of drug action, increase the accuracy of drug screening, and help design optimal drug formulas for the best medical treatment. A system of hardware and software has been built with a complete set of functions, including living cardiac myocyte image acquisition, image processing, motion image analysis, and image recognition. In this paper, theories and approaches are introduced for analyzing motion images of living cardiac myocytes and implementing quantitative analysis of cardiac myocyte features. A motion estimation algorithm is used to detect motion vectors of particular points and the amplitude and frequency of a cardiac myocyte. The beating of cardiac myocytes is sometimes very small; in such cases it is difficult to detect motion vectors from particular points in a time sequence of images. For this reason, image correlation is employed to detect the beating frequencies. An active contour algorithm based on an energy function is proposed to approximate the boundary and detect changes in the edge of the myocyte.
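Motion-vector detection of the kind described can be illustrated with full-search block matching using the sum of absolute differences (SAD); this sketch is a generic illustration on synthetic frames, not the authors' implementation:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b) for a, b in zip(ra, rb))

def crop(img, top, left, h, w):
    return [row[left:left + w] for row in img[top:top + h]]

def estimate_motion(prev, curr, top, left, size, search):
    """Full-search block matching: motion vector of the block at (top, left) in prev."""
    ref = crop(prev, top, left, size, size)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = crop(curr, top + dy, left + dx, size, size)
            cost = sad(ref, cand)
            if best is None or cost < best[0]:
                best = (cost, (dy, dx))
    return best[1]

# Synthetic frames: a bright square shifts by (1, 2) pixels between frames.
prev = [[0] * 12 for _ in range(12)]
curr = [[0] * 12 for _ in range(12)]
for y in range(4, 7):
    for x in range(4, 7):
        prev[y][x] = 200
        curr[y + 1][x + 2] = 200
mv = estimate_motion(prev, curr, top=4, left=4, size=3, search=3)
```

Applied to successive frames of a beating myocyte, the sequence of such vectors gives the amplitude trace from which a beating frequency can be read off.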
Factors influencing sustainability of communally-managed water facilities in rural areas of Zimbabwe
NASA Astrophysics Data System (ADS)
Kativhu, T.; Mazvimavi, D.; Tevera, D.; Nhapi, I.
2017-08-01
Sustainability of point water facilities is a major development challenge in many rural settings of developing countries, including those in the Sub-Saharan Africa region. This study was done in Zimbabwe to investigate the factors influencing sustainability of rural water supply systems. A total of 399 water points were studied in Nyanga, Chivi and Gwanda districts. Data was collected using a questionnaire, observation checklist and key informant interview guide. Multi-Criteria analysis was used to assess the sustainability of water points, and inferential statistical analyses such as Chi-square tests and Analysis of Variance (ANOVA) were used to determine if there were significant differences on selected variables across districts and types of lifting devices used in the study area. The thematic approach was used to analyze qualitative data. Results show that most water points were not functional and only 17% across the districts were found to be sustainable. A fusion of social, technical, financial, environmental and institutional factors was found to be influencing sustainability. On technical factors, the ANOVA results show that the type of lifting device fitted at a water point significantly influences sustainability (F = 37.4, p < 0.01). Availability of spare parts at community level was found to determine the downtime period of different lifting devices in the studied wards. Absence of user committees was found to be central in influencing sustainability, as water points that did not have user committees were not sustainable and most of them were not functional during the time of the survey. Active participation by communities at the planning stage of water projects was also found to be critical for sustainability, although field results showed passive participation by communities at this critical project stage.
Financial factors of adequacy of financial contributions and establishment of operation and maintenance funds were also found to be of great importance in sustaining water supply systems. It is recommended that all factors should be considered when assessing sustainability since they are interrelated.
Effective structural descriptors for natural and engineered radioactive waste confinement barriers
NASA Astrophysics Data System (ADS)
Lemmens, Laurent; Rogiers, Bart; De Craen, Mieke; Laloy, Eric; Jacques, Diederik; Huysmans, Marijke; Swennen, Rudy; Urai, Janos L.; Desbois, Guillaume
2017-04-01
The microstructure of a radioactive waste confinement barrier strongly influences its flow and transport properties. Numerical flow and transport simulations for these porous media at the pore scale therefore require input data that describe the microstructure as accurately as possible. To date, no imaging method can resolve all heterogeneities within important radioactive waste confinement barrier materials, such as hardened cement paste and natural clays, at the micro scale (nm-cm). Therefore, it is necessary to merge information from different 2D and 3D imaging methods using porous media reconstruction techniques. To qualitatively compare the results of different reconstruction techniques, visual inspection might suffice. To quantitatively compare training-image based algorithms, Tan et al. (2014) proposed an algorithm using an analysis of distance. However, the ranking of the algorithm depends on the choice of the structural descriptor, in their case multiple-point or cluster-based histograms. We present here preliminary work in which we review different structural descriptors and test their effectiveness in capturing the main structural characteristics of radioactive waste confinement barrier materials, in order to determine the descriptors to use in the analysis of distance. The investigated descriptors are particle size distributions, surface area distributions, two-point probability functions, multiple-point histograms, linear functions, and two-point cluster functions. The descriptor testing consists of stochastically generating realizations from a reference image using the simulated annealing optimization procedure introduced by Karsanina et al. (2015). This procedure basically minimizes the differences between pre-specified descriptor values associated with the training image and the image being produced. The most efficient descriptor set can therefore be identified by comparing the image generation quality among the tested descriptor combinations.
The assessment of the quality of the simulations will be made by combining all considered descriptors. Once the set of the most efficient descriptors is determined, they can be used in the analysis of distance, to rank different reconstruction algorithms in a more objective way in future work. Karsanina MV, Gerke KM, Skvortsova EB, Mallants D (2015) Universal Spatial Correlation Functions for Describing and Reconstructing Soil Microstructure. PLoS ONE 10(5): e0126515. doi:10.1371/journal.pone.0126515 Tan, Xiaojin, Pejman Tahmasebi, and Jef Caers. "Comparing training-image based algorithms using an analysis of distance." Mathematical Geosciences 46.2 (2014): 149-169.
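Of the listed descriptors, the two-point probability function S2(r) is the simplest to compute; the Python sketch below evaluates it along one axis for a toy binary (solid/pore) image, purely as an illustration of the descriptor, not of the authors' pipeline:

```python
def two_point_probability(image, max_r):
    """S2(r): probability that two points a horizontal distance r apart both lie in phase 1."""
    cols = len(image[0])
    s2 = []
    for r in range(max_r + 1):
        hits = total = 0
        for row in image:
            for x in range(cols - r):
                total += 1
                if row[x] == 1 and row[x + r] == 1:
                    hits += 1
        s2.append(hits / total)
    return s2

# A simple layered medium: alternating solid (1) and pore (0) stripes of width 2.
image = [[1 if (x // 2) % 2 == 0 else 0 for x in range(16)] for _ in range(8)]
s2 = two_point_probability(image, max_r=4)
```

S2(0) equals the solid volume fraction, and the periodic stripes show up as oscillations of S2 with the stripe period, which is exactly the kind of structural signature simulated annealing reconstruction tries to match.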
Safe landing area determination for a Moon lander by reachability analysis
NASA Astrophysics Data System (ADS)
Arslantaş, Yunus Emre; Oehlschlägel, Thimo; Sagliano, Marco
2016-11-01
In recent decades, developments in space technology have paved the way for more challenging missions such as asteroid mining, space tourism and human expansion into the Solar System. These missions involve difficult tasks such as guidance schemes for re-entry, landing on celestial bodies and implementation of large-angle maneuvers for spacecraft. There is a need for a safety system to increase the robustness and success of these missions. Reachability analysis meets this requirement by obtaining the set of all achievable states for a dynamical system starting from an initial condition with the given admissible control inputs of the system. This paper proposes an algorithm for the approximation of nonconvex reachable sets (RS) using optimal control. To this end, a subset of the state space is discretized by equidistant points, and for each grid point a distance function is defined. This distance function acts as the objective function for a related optimal control problem (OCP). Each infinite-dimensional OCP is transcribed into a finite-dimensional Nonlinear Programming Problem (NLP) using Pseudospectral Methods (PSM). Finally, the NLPs are solved using available tools, resulting in approximated reachable sets with information about the states of the dynamical system at these grid points. The algorithm is applied to a generic Moon landing mission. The proposed method computes approximated reachable sets and the attainable safe landing region with information about propellant consumption and time.
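The idea of approximating a reachable set can be illustrated without the OCP/PSM machinery by brute-force enumeration of discretized control sequences for a double integrator; this is our own simplified stand-in, not the paper's algorithm:

```python
from itertools import product

def reachable_states(steps, dt, controls=(-1.0, 0.0, 1.0)):
    """Forward-propagate a discrete double integrator under all control sequences."""
    states = []
    for seq in product(controls, repeat=steps):
        x = v = 0.0
        for u in seq:
            x += v * dt   # position update
            v += u * dt   # velocity update, |u| <= 1
        states.append((x, v))
    return states

states = reachable_states(steps=4, dt=0.5)
max_x = max(x for x, _ in states)
```

Each final state is one sample of the reachable set; the distance from a grid point to the nearest sample plays the role of the paper's distance function. The combinatorial growth (3^steps sequences) is precisely why the paper replaces enumeration with per-grid-point optimal control problems.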
Regulator dependence of fixed points in quantum Einstein gravity with R 2 truncation
NASA Astrophysics Data System (ADS)
Nagy, S.; Fazekas, B.; Peli, Z.; Sailer, K.; Steib, I.
2018-03-01
We performed a functional renormalization group analysis for quantum Einstein gravity including a quadratic term in the curvature. The ultraviolet non-Gaussian fixed point and its critical exponent for the correlation length are identified for different forms of regulators in the case of dimension 3. We searched for the optimized regulator for which the physical quantities show the least dependence on the regulator parameters. It is shown that the Litim regulator satisfies this condition. The infrared fixed point has also been investigated; the exponent is found to be insensitive to the third coupling introduced by the R^2 term.
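For reference, the Litim (optimized) regulator mentioned above has the standard form

```latex
R_k(p^2) = (k^2 - p^2)\,\theta(k^2 - p^2),
```

where θ is the Heaviside step function: modes with p² below the running scale k² are given a mass-like cutoff, while higher modes are left untouched, which is what makes many flow-equation integrals analytically tractable.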
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2005-04-01
We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×10^19 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
Output-Sensitive Construction of Reeb Graphs.
Doraiswamy, H; Natarajan, V
2012-01-01
The Reeb graph of a scalar function represents the evolution of the topology of its level sets. This paper describes a near-optimal output-sensitive algorithm for computing the Reeb graph of scalar functions defined over manifolds or non-manifolds in any dimension. Key to the simplicity and efficiency of the algorithm is an alternate definition of the Reeb graph that considers equivalence classes of level sets instead of individual level sets. The algorithm works in two steps. The first step locates all critical points of the function in the domain. Critical points correspond to nodes in the Reeb graph. Arcs connecting the nodes are computed in the second step by a simple search procedure that works on a small subset of the domain that corresponds to a pair of critical points. The paper also describes a scheme for controlled simplification of the Reeb graph and two different graph layout schemes that help in the effective presentation of Reeb graphs for visual analysis of scalar fields. Finally, the Reeb graph is employed in four different applications: surface segmentation, spatially aware transfer function design, visualization of interval volumes, and interactive exploration of time-varying data.
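The first step of the algorithm, locating critical points, can be sketched in a simplified one-dimensional setting, where the interior minima and maxima of a piecewise-linear function become the nodes of the (path-shaped) Reeb graph; this toy Python version is ours, not the paper's code:

```python
def critical_points(values):
    """Locate interior minima and maxima of a 1-D piecewise-linear scalar function.

    In the Reeb graph of a function on an interval, these become the graph nodes;
    regular (monotone) stretches collapse into the arcs between them."""
    minima, maxima = [], []
    for i in range(1, len(values) - 1):
        if values[i] < values[i - 1] and values[i] < values[i + 1]:
            minima.append(i)
        elif values[i] > values[i - 1] and values[i] > values[i + 1]:
            maxima.append(i)
    return minima, maxima

# A double-hump profile: two maxima separated by one interior minimum.
f = [0, 2, 4, 3, 1, 3, 5, 2, 0]
minima, maxima = critical_points(f)
```

In higher dimensions the same node-finding step requires classifying the lower link of each vertex, and the second step of the paper's algorithm traces level-set equivalence classes between the nodes.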
Covic, Tanya; Pallant, Julie F; Conaghan, Philip G; Tennant, Alan
2007-01-01
Background The aim of this study was to test the internal validity of the total Center for Epidemiologic Studies-Depression (CES-D) scale using Rasch analysis in a rheumatoid arthritis (RA) population. Methods CES-D was administered to 157 patients with RA over three time points within a 12 month period. Rasch analysis was applied using RUMM2020 software to assess the overall fit of the model, the response scale used, individual item fit, differential item functioning (DIF) and person separation. Results Pooled data across three time points was shown to fit the Rasch model with removal of seven items from the original 20-item CES-D scale. It was necessary to rescore the response format from four to three categories in order to improve the scale's fit. Two items demonstrated some DIF for age and gender but were retained within the 13-item CES-D scale. A new cut point for depression score of 9 was found to correspond to the original cut point score of 16 in the full CES-D scale. Conclusion This Rasch analysis of the CES-D in a longstanding RA cohort resulted in the construction of a modified 13-item scale with good internal validity. Further validation of the modified scale is recommended particularly in relation to the new cut point for depression. PMID:17629902
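The Rasch model underlying this analysis assigns each person-item encounter a response probability P = exp(θ − b) / (1 + exp(θ − b)), where θ is the person's trait level and b the item difficulty. A small Python sketch of the dichotomous case (the actual CES-D analysis used polytomous rating-scale categories in RUMM2020):

```python
import math

def rasch_probability(theta, difficulty):
    """Rasch model: P(endorse item) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def expected_score(theta, difficulties):
    """Expected total score for a person at trait level theta over dichotomous items."""
    return sum(rasch_probability(theta, b) for b in difficulties)
```

When θ equals an item's difficulty the endorsement probability is exactly 0.5, and the expected total score is the monotone curve used to translate a raw cut point (such as the 9 reported above for the 13-item scale) into a trait-level threshold.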
Rodgers, Helen; Shaw, Lisa; Bosomworth, Helen; Aird, Lydia; Alvarado, Natasha; Andole, Sreeman; Cohen, David L; Dawson, Jesse; Eyre, Janet; Finch, Tracy; Ford, Gary A; Hislop, Jennifer; Hogg, Steven; Howel, Denise; Hughes, Niall; Krebs, Hermano Igo; Price, Christopher; Rochester, Lynn; Stamp, Elaine; Ternent, Laura; Turner, Duncan; Vale, Luke; Warburton, Elizabeth; van Wijck, Frederike; Wilkes, Scott
2017-07-20
Loss of arm function is a common and distressing consequence of stroke. We describe the protocol for a pragmatic, multicentre randomised controlled trial to determine whether robot-assisted training improves upper limb function following stroke. Study design: a pragmatic, three-arm, multicentre randomised controlled trial, economic analysis and process evaluation. Setting: NHS stroke services. Participants: adults with acute or chronic first-ever stroke (1 week to 5 years post stroke) causing moderate to severe upper limb functional limitation. Randomisation groups: 1. Robot-assisted training using the InMotion robotic gym system for 45 min, three times/week for 12 weeks; 2. Enhanced upper limb therapy for 45 min, three times/week for 12 weeks; 3. Usual NHS care in accordance with local clinical practice. Randomisation: individual participant randomisation stratified by centre, time since stroke, and severity of upper limb impairment. Primary outcome: upper limb function measured by the Action Research Arm Test (ARAT) at 3 months post randomisation. Secondary outcomes: upper limb impairment (Fugl-Meyer Test), activities of daily living (Barthel ADL Index), quality of life (Stroke Impact Scale, EQ-5D-5L), resource use, cost per quality-adjusted life year and adverse events, at 3 and 6 months. Blinding: outcomes are undertaken by blinded assessors. Economic analysis: micro-costing and economic evaluation of interventions compared to usual NHS care. A within-trial analysis, with an economic model, will be used to extrapolate longer-term costs and outcomes. Process evaluation: semi-structured interviews with participants and professionals to seek their views and experiences of the rehabilitation that they have received or provided, and factors affecting the implementation of the trial. Sample size: allowing for 10% attrition, 720 participants provide 80% power to detect a 15% difference in successful outcome between each of the treatment pairs.
Successful outcome definition: baseline ARAT 0-7 must improve by 3 or more points; baseline ARAT 8-13 improve by 4 or more points; baseline ARAT 14-19 improve by 5 or more points; baseline ARAT 20-39 improve by 6 or more points. The results from this trial will determine whether robot-assisted training improves upper limb function post stroke. ISRCTN, identifier: ISRCTN69371850 . Registered 4 October 2013.
Teaching Robust Methods for Exploratory Data Analysis.
1980-10-01
of adding a new point x to a sample x1, ..., xn. The influence function of the estimate θ at the value x is defined to be ... For example, if θ is the mean (Σxi)/n, we can calculate I(x, x̄) = x − x̄. Plotting this, we see that the mean has an unbounded influence function, and is therefore not robust
nSTAT: Open-Source Neural Spike Train Analysis Toolbox for Matlab
Cajigas, I.; Malik, W.Q.; Brown, E.N.
2012-01-01
Over the last decade there has been a tremendous advance in the analytical tools available to neuroscientists to understand and model neural function. In particular, the point-process generalized linear model (PP-GLM) framework has been applied successfully to problems ranging from neuro-endocrine physiology to neural decoding. However, the lack of freely distributed software implementations of published PP-GLM algorithms, together with the problem-specific modifications required for their use, limit wide application of these techniques. In an effort to make existing PP-GLM methods more accessible to the neuroscience community, we have developed nSTAT: an open-source neural spike train analysis toolbox for Matlab®. By adopting an Object-Oriented Programming (OOP) approach, nSTAT allows users to easily manipulate data by performing operations on objects that have an intuitive connection to the experiment (spike trains, covariates, etc.), rather than by dealing with data in vector/matrix form. The algorithms implemented within nSTAT address a number of common problems including computation of peri-stimulus time histograms, quantification of the temporal response properties of neurons, and characterization of neural plasticity within and across trials. nSTAT provides a starting point for exploratory data analysis, allows for simple and systematic building and testing of point process models, and for decoding of stimulus variables based on point process models of neural function. By providing an open-source toolbox, we hope to establish a platform that can be easily used, modified, and extended by the scientific community to address limitations of current techniques and to extend available techniques to more complex problems. PMID:22981419
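Two of the basic operations described above, peri-stimulus time histograms and point-process (Poisson) likelihoods, can be sketched in a few lines; this Python version is an independent illustration, not nSTAT's Matlab API:

```python
import math

def psth(spike_trains, t_max, bin_width):
    """Peri-stimulus time histogram: firing rate (spikes/s) per bin, averaged over trials."""
    n_bins = int(t_max / bin_width)
    counts = [0] * n_bins
    for train in spike_trains:
        for t in train:
            if 0.0 <= t < t_max:
                counts[int(t / bin_width)] += 1
    scale = len(spike_trains) * bin_width
    return [c / scale for c in counts]

def poisson_loglik(counts, rate, bin_width):
    """Log-likelihood of binned spike counts under a constant-rate Poisson model."""
    mu = rate * bin_width
    return sum(c * math.log(mu) - mu - math.lgamma(c + 1) for c in counts)

# Three hypothetical trials of spike times (seconds).
trains = [[0.05, 0.30, 0.55], [0.10, 0.35], [0.32, 0.60, 0.80]]
rates = psth(trains, t_max=1.0, bin_width=0.25)
```

Comparing `poisson_loglik` across candidate rates is the simplest instance of the model fitting that the PP-GLM framework generalizes to covariate-dependent rates.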
Non-conventional optomechanical choppers: analysis and design of novel prototypes
NASA Astrophysics Data System (ADS)
Duma, Virgil-Florin; Demian, Dorin; Csukas, Eduard Sebastian; Pop, Nicolina; Cira, Octavian
2017-10-01
Optical choppers are widely used in laser systems for light modulation and/or attenuation. In their most common and well-known configuration, they are built as a rotating wheel with windows, which transforms a continuous-wave laser beam into a series of impulses with a certain frequency and profile. We briefly present the analysis and design we have completed for classical chopper wheels (i.e., with windows with linear margins) for both top-hat and Gaussian laser beams. Further on, novel chopper wheel configurations, with outward or inward semi-circular (or other non-linearly shaped) margins of the windows, are pointed out; we have completed for them both analytic functions and simulations, for both top-hat and Gaussian beams, in order to deduce their transmission functions (i.e., the time profile of the laser impulses generated by the device). The presentation focuses on the novel choppers with shafts (patent pending); their transmission functions are pointed out for top-hat laser beams. Finally, an example of such choppers is considered, with regard to the necessary Finite Element Analysis (FEA) that has to be performed for their rotating shaft. Both the mechanical stress and the deformations in the shaft have to be taken into account, especially at high rotational speeds of the mobile element.
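For a window margin that is a straight edge sweeping across a circular top-hat beam, the transmission function follows from circle-segment geometry; this Python sketch (our illustration, assuming a unit-radius beam and a constant-speed edge) produces the characteristic S-shaped ramp of each impulse:

```python
import math

def uncovered_fraction(x):
    """Fraction of a unit-radius top-hat beam on the transmitting side of a straight
    window edge at position x (x = -1: fully blocked, x = +1: fully open).

    Uses the area of the circular segment with coordinate <= x, normalized by pi."""
    x = max(-1.0, min(1.0, x))
    return (math.pi / 2 + math.asin(x) + x * math.sqrt(1.0 - x * x)) / math.pi

# Transmission profile as the edge sweeps across the beam at constant speed.
profile = [uncovered_fraction(-1.0 + 0.1 * i) for i in range(21)]
```

Curved (semi-circular or otherwise non-linear) margins replace the straight-line segment area with a different overlap integral, which is what reshapes the impulse profile in the novel configurations described above.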
Classification of attractors for systems of identical coupled Kuramoto oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelbrecht, Jan R.; Mirollo, Renato
2014-03-15
We present a complete classification of attractors for networks of coupled identical Kuramoto oscillators. In such networks, each oscillator is driven by the same first-order trigonometric function, with coefficients given by symmetric functions of the entire oscillator ensemble. For N≠3 oscillators, there are four possible types of attractors: completely synchronized fixed points or limit cycles, and fixed points or limit cycles where all but one of the oscillators are synchronized. The case N = 3 is exceptional; systems of three identical Kuramoto oscillators can also possess attracting fixed points or limit cycles with all three oscillators out of sync, as well as chaotic attractors. Our results rely heavily on the invariance of the flow for such systems under the action of the three-dimensional group of Möbius transformations, which preserve the unit disc, and on the analysis of the possible limiting configurations for this group action.
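The completely synchronized attractor can be observed numerically; this Python sketch (ours, with arbitrary initial phases) integrates identical all-to-all Kuramoto oscillators and tracks the usual order parameter:

```python
import cmath
import math

def simulate_kuramoto(phases, coupling, dt, steps, omega=0.0):
    """Euler integration of identical, all-to-all coupled Kuramoto oscillators."""
    th = list(phases)
    n = len(th)
    for _ in range(steps):
        new = []
        for i in range(n):
            drive = sum(math.sin(th[j] - th[i]) for j in range(n)) * coupling / n
            new.append(th[i] + dt * (omega + drive))
        th = new
    return th

def order_parameter(phases):
    """|r| = 1 means complete synchrony; small |r| means a balanced (splay-like) state."""
    return abs(sum(cmath.exp(1j * t) for t in phases)) / len(phases)

final = simulate_kuramoto([0.1, 0.5, 1.0, 1.5, 2.0], coupling=1.0, dt=0.05, steps=2000)
```

For generic initial phases and positive coupling, the order parameter climbs toward 1, consistent with the synchronized attractors in the classification above.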
Age-related changes in the function and structure of the peripheral sensory pathway in mice.
Canta, Annalisa; Chiorazzi, Alessia; Carozzi, Valentina Alda; Meregalli, Cristina; Oggioni, Norberto; Bossi, Mario; Rodriguez-Menendez, Virginia; Avezza, Federica; Crippa, Luca; Lombardi, Raffaella; de Vito, Giuseppe; Piazza, Vincenzo; Cavaletti, Guido; Marmiroli, Paola
2016-09-01
This study is aimed at describing the changes occurring in the entire peripheral nervous system sensory pathway over a 2-year observation period in a cohort of C57BL/6 mice. The neurophysiological studies revealed significant differences at the selected time points corresponding to childhood, young adulthood, adulthood, and aging (i.e., 1, 7, 15, and 25 months of age), with a parabolic course as a function of time. The pathological assessment demonstrated signs of age-related changes from the age of 7 months onward, with a remarkable increase in both peripheral nerves and dorsal root ganglia at the subsequent time points. These changes were mainly in the myelin sheaths, as also confirmed by Rotating-Polarization Coherent Anti-Stokes Raman Scattering microscopy analysis. Evident changes were also present in the morphometric analysis performed on the peripheral nerves, dorsal root ganglia neurons, and skin biopsies. This extensive, multimodal characterization of peripheral nervous system changes in aging provides the background for future mechanistic studies, allowing the selection of the most appropriate time points and readouts according to the investigation aims. Copyright © 2016 Elsevier Inc. All rights reserved.
Spiral bacterial foraging optimization method: Algorithm, evaluation and convergence analysis
NASA Astrophysics Data System (ADS)
Kasaiezadeh, Alireza; Khajepour, Amir; Waslander, Steven L.
2014-04-01
A biologically-inspired algorithm called Spiral Bacterial Foraging Optimization (SBFO) is investigated in this article. SBFO, previously proposed by the same authors, is a multi-agent, gradient-based algorithm that minimizes both the main objective function (local cost) and the distance between each agent and a temporary central point (global cost). A random jump normal to the line connecting each agent to the central point is included, which produces a vortex around the temporary central point. This random jump also helps the method cope with premature convergence, a common problem in swarm-based optimization. The most important advantages of this algorithm are as follows: First, the algorithm combines a stochastic type of search with deterministic convergence. Second, because gradient-based methods are employed, faster convergence is demonstrated over GA, DE, BFO, etc. Third, the algorithm can be implemented in a parallel fashion in order to decentralize large-scale computation. Fourth, the algorithm has a limited number of tunable parameters, and finally SBFO offers a strong certainty of convergence, which is rare among existing global optimization algorithms. A detailed convergence analysis of SBFO for continuously differentiable objective functions is also presented.
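The core update rule described above can be sketched in a few lines. This is a toy illustration under our own assumptions (a 2-D sphere test function and hand-picked step, attraction and jump coefficients), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Local cost: a simple convex test function (our choice, not the paper's)."""
    return float(np.sum(x ** 2))

def grad_sphere(x):
    return 2.0 * x

def sbfo_step(agents, step=0.1, lam=0.5, jump=0.05):
    """One illustrative SBFO-like update: descend the local-cost gradient,
    add attraction to the swarm's temporary central point (global cost),
    and add a random jump normal to the agent-center line (the vortex)."""
    center = agents.mean(axis=0)                     # temporary central point
    new = []
    for x in agents:
        g = grad_sphere(x) + lam * (x - center)      # local + global gradients
        d = x - center
        n = np.array([-d[1], d[0]])                  # direction normal to d (2-D)
        n = n / (np.linalg.norm(n) + 1e-12)
        new.append(x - step * g + jump * rng.standard_normal() * n)
    return np.array(new)

agents0 = rng.uniform(-5, 5, size=(10, 2))
agents = agents0.copy()
for _ in range(200):
    agents = sbfo_step(agents)

cost0 = np.mean([sphere(a) for a in agents0])
cost1 = np.mean([sphere(a) for a in agents])
print(cost0, cost1)   # the swarm's average local cost drops sharply
```

The normal-direction jump is what distinguishes this from plain gradient descent with a centroid penalty: it keeps the agents circulating around the central point while they converge.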
Statistical indicators of collective behavior and functional clusters in gene networks of yeast
NASA Astrophysics Data System (ADS)
Živković, J.; Tadić, B.; Wick, N.; Thurner, S.
2006-03-01
We analyze gene expression time-series data of yeast (S. cerevisiae) measured along two full cell-cycles. We quantify these data by using q-exponentials, gene expression ranking and a temporal mean-variance analysis. We construct gene interaction networks based on correlation coefficients and study the formation of the corresponding giant components and minimum spanning trees. By coloring genes according to their cell function we find functional clusters in the correlation networks and functional branches in the associated trees. Our results suggest that a percolation point of functional clusters can be identified on these gene expression correlation networks.
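The network construction described above (thresholded correlation network, giant component, minimum spanning tree) can be sketched as follows; the synthetic expression matrix, the threshold, and the distance d = 1 - |r| are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic "expression" matrix: 30 genes x 20 time points (illustrative only;
# the paper uses measured yeast cell-cycle data)
n = 30
expr = rng.standard_normal((n, 20))
corr = np.corrcoef(expr)               # gene-gene correlation coefficients

# --- correlation network: connect genes with |r| above a threshold ---
thr = 0.5
adj = (np.abs(corr) > thr) & ~np.eye(n, dtype=bool)

# --- giant component via union-find ---
parent = list(range(n))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i
for i in range(n):
    for j in range(i + 1, n):
        if adj[i, j]:
            parent[find(i)] = find(j)
sizes = np.bincount([find(i) for i in range(n)])
giant = int(sizes.max())               # size of the largest cluster

# --- minimum spanning tree (Prim) on distances d = 1 - |r| ---
dist = 1.0 - np.abs(corr)
in_tree = np.zeros(n, dtype=bool)
in_tree[0] = True
best = dist[0].copy()
mst_edges = 0
for _ in range(n - 1):
    best[in_tree] = np.inf
    j = int(np.argmin(best))           # nearest node not yet in the tree
    in_tree[j] = True
    mst_edges += 1
    best = np.minimum(best, dist[j])
print(giant, mst_edges)                # an MST of n nodes has n - 1 edges
```

Lowering the threshold grows the giant component until it percolates, which is the transition the abstract refers to.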
Duan, Fenghai; Xu, Ye
2017-01-01
To analyze a microarray experiment to identify the genes with expressions varying after the diagnosis of breast cancer. A total of 44 928 probe sets in an Affymetrix microarray data set, publicly available on Gene Expression Omnibus, from 249 patients with breast cancer were analyzed by nonparametric multivariate adaptive splines. The identified genes with turning points were then grouped by K-means clustering, and their network relationships were subsequently analyzed by Ingenuity Pathway Analysis. In total, 1640 probe sets (genes) were reliably identified as having turning points along with the age at diagnosis in their expression profiling, of which 927 were expressed at lower levels after their turning points and 713 at higher levels. K-means clustered them into 3 groups with turning points centering at ages 54, 62.5, and 72, respectively. The pathway analysis showed that the identified genes were actively involved in various cancer-related functions or networks. In this article, we applied the nonparametric multivariate adaptive splines method to publicly available gene expression data and successfully identified genes with expressions varying before and after breast cancer diagnosis.
NASA Astrophysics Data System (ADS)
Nassiri, Isar; Lombardo, Rosario; Lauria, Mario; Morine, Melissa J.; Moyseos, Petros; Varma, Vijayalakshmi; Nolen, Greg T.; Knox, Bridgett; Sloper, Daniel; Kaput, Jim; Priami, Corrado
2016-07-01
The investigation of the complex processes involved in cellular differentiation must be based on unbiased, high-throughput data processing methods to identify relevant biological pathways. A number of bioinformatics tools are available that can generate lists of pathways ranked by statistical significance (i.e. by p-value), whereas ideally one would functionally score the pathways relative to each other or to other interacting parts of the system or process. We describe a new computational method (Network Activity Score Finder - NASFinder) to identify tissue-specific, omics-determined sub-networks and the connections with their upstream regulator receptors to obtain a systems view of the differentiation of human adipocytes. Adipogenesis of human SBGS pre-adipocyte cells in vitro was monitored with a transcriptomic data set comprising six time points (0, 6, 48, 96, 192, 384 hours). To elucidate the mechanisms of adipogenesis, NASFinder was used to perform time-point analysis by comparing each time point against the control (0 h) and time-lapse analysis by comparing each time point with the previous one. NASFinder identified the coordinated activity of seemingly unrelated processes between each comparison, providing the first systems view of adipogenesis in culture. NASFinder has been implemented in a web-based, freely available resource associated with novel, easy-to-read visualization of omics data sets and network modules.
Rodriguez, Alberto; Vasquez, Louella J; Römer, Rudolf A
2009-03-13
The probability density function (PDF) of critical wave function amplitudes is studied in the three-dimensional Anderson model. We present a formal relation between the PDF and the multifractal spectrum f(alpha) in which the role of finite-size corrections is properly analyzed. We show the non-Gaussian nature of the PDF and the existence of a symmetry relation within it. From the PDF, we extract information about f(alpha) at criticality, such as the presence of negative fractal dimensions and the possible existence of termination points. A PDF-based multifractal analysis is shown to be a valid alternative to the standard approach based on the scaling of inverse participation ratios.
Ernst, Dominique; Köhler, Jürgen
2013-01-21
We provide experimental results on the accuracy of diffusion coefficients obtained by a mean squared displacement (MSD) analysis of single-particle trajectories. We have recorded very long trajectories comprising more than 1.5 × 10^5 data points and decomposed these long trajectories into shorter segments, providing us with ensembles of trajectories of variable lengths. This enabled a statistical analysis of the resulting MSD curves as a function of the lengths of the segments. We find that the relative error of the diffusion coefficient can be minimized by taking an optimum number of points into account for fitting the MSD curves, and that this optimum does not depend on the segment length. Yet the magnitude of the relative error for the diffusion coefficient does, and achieving an accuracy on the order of 10% requires the recording of trajectories with about 1000 data points. Finally, we compare our results with theoretical predictions and find very good qualitative and quantitative agreement between experiment and theory.
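A minimal sketch of the MSD analysis described above, on a simulated 2-D Brownian trajectory; D_true, dt, and the number of fitted lags are our own illustrative choices, not the experimental values:

```python
import numpy as np

rng = np.random.default_rng(2)

# simulate a 2-D Brownian trajectory with ~1000 points (parameters invented)
D_true, dt, n = 0.5, 0.1, 1000
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(n, 2))
traj = np.cumsum(steps, axis=0)

def msd(traj, max_lag):
    """Time-averaged mean squared displacement for lags 1..max_lag."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        d = traj[lag:] - traj[:-lag]
        out[lag - 1] = np.mean(np.sum(d ** 2, axis=1))
    return out

# fit only the first few MSD points: MSD(t) = 4 D t in two dimensions;
# adding many more lags brings in correlated, noisy points and hurts the fit,
# which is the optimum the abstract discusses
n_fit = 4
lags = dt * np.arange(1, n_fit + 1)
slope = np.polyfit(lags, msd(traj, n_fit), 1)[0]
D_est = slope / 4.0
print(D_est)   # close to D_true for a ~1000-point trajectory
```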
Quasi-Sun-Pointing of Spacecraft Using Radiation Pressure
NASA Technical Reports Server (NTRS)
Spilker, Thomas
2003-01-01
A report proposes a method of utilizing solar-radiation pressure to keep the axis of rotation of a small spin-stabilized spacecraft pointed approximately (typically, within an angle of 10 deg to 20 deg) toward the Sun. Axisymmetry is not required. Simple tilted planar vanes would be attached to the outer surface of the body, so that the resulting spacecraft would vaguely resemble a rotary fan, windmill, or propeller. The vanes would be painted black for absorption of solar radiation. A theoretical analysis based on principles of geometric optics and mechanics has shown that torques produced by solar-radiation pressure would cause the axis of rotation to precess toward Sun-pointing. The required vane size would be a function of the angular momentum of the spacecraft and the maximum acceptable angular deviation from Sun-pointing. The analysis also shows that the torques produced by the vanes would slowly despin the spacecraft, an effect that could be counteracted by adding specularly reflecting "spin-up" vanes.
Klaas, H S; Clémence, A; Marion-Veyron, R; Antonietti, J-P; Alameda, L; Golay, P; Conus, P
2017-03-01
Awareness of illness (insight) has been found to have contradictory effects on different functional outcomes after the early course of psychosis. Whereas it is related to psychotic symptom reduction and medication adherence, it is also associated with increased depressive symptoms. Along these lines, the specific effects of insight on the evolution of functioning over time have not been identified, and social indicators, such as socio-occupational functioning, have barely been considered. Drawing on social identity theory, we investigated the impact of insight on the development of psychosocial outcomes and the interactions of these variables over time. The participants, 240 patients in the early phase of psychosis from the Treatment and Early Intervention in Psychosis Program (TIPP) of the University Hospital of Lausanne, Switzerland, were assessed at eight time points over 3 years. Cross-lagged panel analyses and multilevel analyses were conducted on socio-occupational and general functioning [Social and Occupational Functioning Assessment Scale (SOFAS) and Global Assessment of Functioning (GAF)] with insight, time and depressive symptoms as independent variables. Results from the multilevel analyses point to an overall positive impact of insight on psychosocial functioning, which increases over time. Yet the cross-lagged panel analysis did not reveal a systematic positive and causal effect of insight on SOFAS and GAF scores. Depressive symptoms seem to be relevant only at the beginning of the treatment process. Our results point to a complex process in which the positive impact of insight on psychosocial functioning increases over time, even when depressive symptoms are taken into account. Future studies and treatment approaches should consider the procedural aspect of insight.
Dias, Katrin A; Coombes, Jeff S; Green, Daniel J; Gomersall, Sjaan R; Keating, Shelley E; Tjonna, Arnt Erik; Hollekim-Strand, Siri Marte; Hosseini, Mansoureh Sadat; Ro, Torstein Baade; Haram, Margrete; Huuse, Else Marie; Davies, Peter S W; Cain, Peter A; Leong, Gary M; Ingul, Charlotte B
2016-01-01
Introduction The prevalence of paediatric obesity is increasing, and with it, lifestyle-related diseases in children and adolescents. High-intensity interval training (HIIT) has recently been explored as an alternative to traditional moderate-intensity continuous training (MICT) in adults with chronic disease and has been shown to induce a rapid reversal of subclinical disease markers in obese children and adolescents. The primary aim of this study is to compare the effects of HIIT with MICT on myocardial function in obese children and adolescents. Methods and analysis Multicentre randomised controlled trial of 100 obese children and adolescents in the cities of Trondheim (Norway) and Brisbane (Australia). The trial will examine the efficacy of HIIT to improve cardiometabolic outcomes in obese children and adolescents. Participants will be randomised to (1) HIIT and nutrition advice, (2) MICT and nutrition advice or (3) nutrition advice. Participants will partake in supervised exercise training and/or nutrition sessions for 3 months. Measurements for study end points will occur at baseline, 3 months (postintervention) and 12 months (follow-up). The primary end point is myocardial function (peak systolic tissue velocity). Secondary end points include vascular function (flow-mediated dilation assessment), quantity of visceral and subcutaneous adipose tissue, myocardial structure and function, body composition, cardiorespiratory fitness, autonomic function, blood biochemistry, physical activity and nutrition. Lean, healthy children and adolescents will complete measurements for all study end points at one time point for comparative cross-sectional analyses. Ethics and dissemination This randomised controlled trial will generate substantial information regarding the effects of exercise intensity on paediatric obesity, specifically the cardiometabolic health of this at-risk population. 
It is expected that communication of results will allow for the development of more effective evidence-based exercise prescription guidelines in this population while investigating the benefits of HIIT on subclinical markers of disease. Trial registration number NCT01991106. PMID:27044585
Yuan, Yi-Ming; Xin, Zhong-Cheng; Jiang, Hui; Guo, Yan-Jie; Liu, Wu-Jiang; Tian, Long; Zhu, Ji-Chuan
2004-06-01
To assess the psychometric properties of the Chinese Index of Premature Ejaculation (CIPE). Sexual function was evaluated with the CIPE in 167 patients with premature ejaculation (PE) and 114 normal controls without PE. All subjects were married and had regular sexual activity. The CIPE has 10 questions, focusing on libido, erectile function, ejaculatory latency, sexual satisfaction, difficulty in delaying ejaculation, self-confidence and depression. Each question was answered on a 5-point Likert-type scale. The individual question scores and the total scale score were compared between the two groups. There were no significant differences in age, duration of marriage or educational level (P > 0.05) between patients with PE and normal controls. The mean ejaculatory latencies of patients with PE and normal controls were 1.6 +/- 1.2 and 10.2 +/- 9.5 minutes, respectively. A significant difference in the total CIPE score was observed between patients with PE (26.7 +/- 4.6) and normal controls (41.9 +/- 4.0) (P < 0.01). Using binary logistic regression analysis, PE was significantly related to five questions of the original measure. These constitute the so-called CIPE-5 and include: ejaculatory latency, sexual satisfaction of patient and sexual partner, difficulty in delaying ejaculation, anxiety and depression. Receiver Operating Characteristic (ROC) curve analysis of the CIPE-5 questionnaire indicated that its sensitivity and specificity were 97.60% and 94.74%, respectively. Using the total CIPE-5 score, patients with PE could be divided into three groups: mild (> 15 points, 19.8%), moderate (10-14 points, 62.8%) and severe (< 9 points, 16.7%). The CIPE-5 is a useful instrument for evaluating the sexual function of patients with PE and can be used as a clinical endpoint in trials studying the efficacy of pharmacological intervention.
Correlation function for generalized Pólya urns: Finite-size scaling analysis
NASA Astrophysics Data System (ADS)
Mori, Shintaro; Hisakado, Masato
2015-11-01
We describe a universality class for the transitions of a generalized Pólya urn by studying the asymptotic behavior of the normalized correlation function C(t) using finite-size scaling analysis. X(1), X(2), ... are the successive additions of a red (blue) ball [X(t) = 1 (0)] at stage t, and C(t) ≡ Cov[X(1), X(t+1)]/Var[X(1)]. Furthermore, z(t) = (1/t) Σ_{s=1}^{t} X(s) represents the successive proportions of red balls in an urn to which, at the (t+1)-th stage, a red ball is added [X(t+1) = 1] with probability q[z(t)] = (tanh{J[2z(t) - 1] + h} + 1)/2, J ≥ 0, and a blue ball is added [X(t+1) = 0] with probability 1 - q[z(t)]. A boundary [Jc(h), h] exists in the (J, h) plane between a region with one stable fixed point and another region with two stable fixed points for q(z). C(t) ~ c + c'·t^(l-1) with c = 0 (>0) for J
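The urn dynamics defined above can be simulated directly to estimate C(t) by Monte Carlo; the initial proportion z = 1/2 for the empty urn and the parameter values are our own conventions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def q(z, J, h):
    """Probability of adding a red ball given the current red proportion z."""
    return (np.tanh(J * (2 * z - 1) + h) + 1) / 2

def simulate(J, h, T, runs):
    """Monte Carlo estimate of C(t) = Cov[X(1), X(t+1)] / Var[X(1)]."""
    X = np.empty((runs, T), dtype=float)
    for r in range(runs):
        reds, total = 0, 0
        for t in range(T):
            # first draw uses z = 1/2 for the empty urn (our convention)
            z = reds / total if total else 0.5
            x = 1.0 if rng.random() < q(z, J, h) else 0.0
            X[r, t] = x
            reds += x
            total += 1
    x1 = X[:, 0]
    var1 = np.var(x1, ddof=1)
    return np.array([np.cov(x1, X[:, t])[0, 1] / var1 for t in range(1, T)])

C = simulate(J=1.0, h=0.0, T=30, runs=4000)
print(C[:3])   # positive correlations for J > 0 (reinforcement)
```

Sweeping J across the boundary Jc(h) and repeating this estimate at increasing T is the finite-size scaling experiment the abstract analyzes.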
Two-Point Microrheology of Phase-Separated Domains in Lipid Bilayers
Hormel, Tristan T.; Reyer, Matthew A.; Parthasarathy, Raghuveer
2015-01-01
Though the importance of membrane fluidity for cellular function has been well established for decades, methods for measuring lipid bilayer viscosity remain challenging to devise and implement. Recently, approaches based on characterizing the Brownian dynamics of individual tracers such as colloidal particles or lipid domains have provided insights into bilayer viscosity. For fluids in general, however, methods based on single-particle trajectories provide a limited view of hydrodynamic response. The technique of two-point microrheology, in which correlations between the Brownian dynamics of pairs of tracers report on the properties of the intervening medium, characterizes viscosity at length-scales that are larger than that of individual tracers and has less sensitivity to tracer-induced distortions, but has never been applied to lipid membranes. We present, to our knowledge, the first two-point microrheological study of lipid bilayers, examining the correlated motion of domains in phase-separated lipid vesicles and comparing one- and two-point results. We measure two-point correlation functions in excellent agreement with the forms predicted by two-dimensional hydrodynamic models, analysis of which reveals a viscosity intermediate between those of the two lipid phases, indicative of global fluid properties rather than the viscosity of the local neighborhood of the tracer. PMID:26287625
Kinetics of diffusional droplet growth in a liquid/liquid two-phase system
NASA Technical Reports Server (NTRS)
Baird, James K.; Cain, Judith B.
1993-01-01
This report contains experimental results for the interdiffusion coefficient of the system succinonitrile plus water at a number of compositions and temperatures in the single-phase region of the phase diagram. The concentration and temperature dependence of the measured diffusion coefficient has been analyzed in terms of Landau-Ginzburg theory, which assumes that the Gibbs free energy is an analytic function of its variables and can be expanded in a Taylor series about any point in the phase diagram. At most points in the single-phase region this is adequate. Near the consolute point (critical point of solution), however, the free energy is non-analytic, and the Landau-Ginzburg theory fails. The solution to this problem dictates that the Landau-Ginzburg form of the free energy be replaced by Widom scaling functions with irrational values for the scaling exponents. As our measurements of the diffusion coefficient near the critical point reflect this non-analytic character, we are preparing for publication in a refereed journal a separate analysis of some of the data contained herein, as well as some additional measurements we have just completed. When published, reprints of this article will be furnished to NASA.
Chen, Wei; Liu, Bo; Lv, Hongzhi; Su, Yanling; Chen, Xiao; Zhu, Yanbin; Du, Chenguang; Zhang, Xiaolin; Zhang, Yingze
2017-09-01
Early post-operative exercise and weight-bearing activities are found to improve the functional recovery of patients with displaced intra-articular calcaneal fractures (DIACFs). We hypothesized that early functional exercise after surgery might have a secondary reduction effect on the subtalar joint, in particular the smaller fracture fragments that were not fixed firmly. A prospective study was conducted to verify this hypothesis. From December 2012 to September 2013, patients with unilateral DIACFs were enrolled and received a treatment consisting of percutaneous leverage and minimally invasive fixation. After surgery, patients in the study group started exercising on days two to three, using partial weight bearing starting week three, and full weight bearing starting week 12. Patients in the control group followed a conventional post-operative protocol of partial weight bearing after week six and full weight bearing after the bone healed. Computed tomography (CT) scanning was performed at post-operative day one, week four, week eight, and week 12 to reconstruct coronal, sagittal, and axial images, on which the maximal residual displacements of the fractures were measured. Function was evaluated using the American Orthopaedic Foot and Ankle Society (AOFAS) scoring scale at the 12th post-operative month. Twenty-eight patients in the study group and 32 in the control group were followed up for more than 12 months; their data were collected and used for the final analysis. Repeated-measures analysis of variance (ANOVA) of the maximal residual displacements of the fracture measured on CT images revealed significant differences between the study and the control groups. There were interaction effects between group and time point. Except for the first time point, the differences between the groups at all studied time points were significant. In the study group, the differences between all studied time points were significant. 
Strong correlations were observed between the AOFAS score at post-operative month 12 and the maximal residual displacement of the fractures on the CT images at postoperative week 12. Early functional exercise and weight bearing activity can smooth and shape the subtalar joint and reduce the residual displacement of the articular surface, improving functional recovery of the affected foot. Therefore, early rehabilitation functional exercise can be recommended in clinical practice.
Point pattern analysis applied to flood and landslide damage events in Switzerland (1972-2009)
NASA Astrophysics Data System (ADS)
Barbería, Laura; Schulte, Lothar; Carvalho, Filipe; Peña, Juan Carlos
2017-04-01
Damage caused by meteorological and hydrological extreme events depends on many factors, not only on hazard, but also on exposure and vulnerability. In order to reach a better understanding of the relation of these complex factors, their spatial pattern and underlying processes, the spatial dependency between values of damage recorded at sites of different distances can be investigated by point pattern analysis. For the Swiss flood and landslide damage database (1972-2009) first steps of point pattern analysis have been carried out. The most severe events have been selected (severe, very severe and catastrophic, according to GEES classification, a total number of 784 damage points) and Ripley's K-test and L-test have been performed, amongst others. For this purpose, R's library spatstat has been used. The results confirm that the damage points present a statistically significant clustered pattern, which could be connected to prevalence of damages near watercourses and also to rainfall distribution of each event, together with other factors. On the other hand, bivariate analysis shows there is no segregated pattern depending on process type: flood/debris flow vs landslide. This close relation points to a coupling between slope and fluvial processes, connectivity between small-size and middle-size catchments and the influence of spatial distribution of precipitation, temperature (snow melt and snow line) and other predisposing factors such as soil moisture, land-cover and environmental conditions. Therefore, further studies will investigate the relationship between the spatial pattern and one or more covariates, such as elevation, distance from watercourse or land use. The final goal will be to perform a regression model to the data, so that the adjusted model predicts the intensity of the point process as a function of the above mentioned covariates.
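The Ripley's K- and L-tests mentioned above (performed in the paper with R's spatstat) can be illustrated with a naive estimator; the unit-square patterns and the omission of edge correction are simplifying assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

def ripley_k(points, r, area):
    """Naive Ripley's K estimate (no edge correction, for illustration):
    K(r) = (area / n^2) * number of ordered pairs within distance r."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-pairs
    return area * np.sum(d < r) / (n * n)

def ripley_l(points, r, area):
    """L(r) = sqrt(K(r)/pi); L(r) - r > 0 suggests clustering at scale r."""
    return np.sqrt(ripley_k(points, r, area) / np.pi)

# complete spatial randomness vs. an artificially clustered pattern
csr = rng.uniform(size=(300, 2))
centers = rng.uniform(0.1, 0.9, size=(10, 2))
clustered = (centers[rng.integers(0, 10, 300)]
             + rng.normal(scale=0.02, size=(300, 2)))

r = 0.1
l_csr = ripley_l(csr, r, 1.0) - r        # near 0 under CSR
l_clu = ripley_l(clustered, r, 1.0) - r  # clearly positive: clustering
print(l_csr, l_clu)
```

In practice spatstat's Kest applies edge corrections and simulation envelopes; this sketch only conveys why a clustered damage pattern pushes L(r) - r above zero.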
Analysis and control of the METC fluid-bed gasifier. Quarterly report, October 1994--January 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farell, A.E.; Reddy, S.
1995-03-01
This document summarizes work performed for the period 10/1/94 to 2/1/95. The initial phase of the work focuses on developing a simple transfer function model of the Fluidized Bed Gasifier (FBG). This transfer function model will be developed based purely on the gasifier responses to step changes in gasifier inputs (including reactor air, convey air, cone nitrogen, FBG pressure, and coal feedrate). It will represent a linear, dynamic model that is valid near the operating point at which the data were taken. In addition, a similar transfer function model will be developed using MGAS in order to assess MGAS for use as a model of the FBG for control systems analysis.
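A first-order transfer function fitted to a step response, the kind of model the report describes building from step tests, can be sketched as follows; the process gain, time constant and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)

# synthetic unit-step response of a first-order process, the kind of data a
# step test on one gasifier input would produce (K_true, tau_true are made up)
t = np.linspace(0, 50, 200)
K_true, tau_true = 2.0, 8.0
y = K_true * (1 - np.exp(-t / tau_true)) + 0.02 * rng.standard_normal(t.size)

# first-order model G(s) = K / (tau*s + 1): read K and tau off the response
K_est = y[-50:].mean()                        # steady-state gain
tau_est = t[np.argmax(y >= 0.632 * K_est)]    # time to 63.2% of final value
print(K_est, tau_est)
```

Repeating this fit for each input-output pair yields the matrix of linear transfer functions valid near the tested operating point.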
Noise Removal on Ocean Scalars by Means of Singularity-Based Fusion
NASA Astrophysics Data System (ADS)
Umbert, M.; Turiel, A.; Hoareau, N.; Ballabrera, J.; Martinez, J.; guimbard, S.; Font, J.
2013-12-01
Thanks to new remote sensing platforms such as SMOS and Aquarius, we now have access to synoptic maps of Sea Surface Salinity (SSS) at global scale. Both missions require a non-negligible amount of development in order to meet pre-launch requirements on the quality of the retrieved variables. Development efforts have so far been mainly concentrated on improving the accuracy of the acquired signals from the radiometric point of view, which is a point-wise characteristic; that is, the quality of each point in the snapshot or swath is considered separately. However, some spatial redundancy (i.e., spatial correlation) is implicit in geophysical signals, and particularly in SSS. This redundancy has been known since the beginning of the remote sensing age: eddies and fronts are visually evident in images of different variables, including Sea Surface Temperature (SST), Sea Surface Height (SSH), Ocean Color (OC), Synthetic Aperture Radar (SAR) and Brightness Temperature (BT) at different bands. An assessment of the quality of SSS products accounting for this kind of spatial redundancy would be very interesting. So far, the structure of those correlations has been evidenced using correlation functions, but correlation functions vary from one variable to another; additionally, they are characteristic not of individual points of the image but of a given, large enough area. The introduction of singularity analysis for remote sensing maps of the ocean has shown that the correspondence among different scalars can be rigorously stated in terms of the correspondence of the values of their associated singularity exponents. The singularity exponent of a scalar at a given point is a unitless measure of the degree of regularity or irregularity of that function at that point. Hence, singularity exponents can be directly compared regardless of the physical meaning of the variables from which they were derived. 
Using singularity analysis we can assess the quality of any scalar, as singularity exponents align in fronts following the streamlines of the flow, while noise breaks up the coherence of singularity fronts. The analysis of the output of numerical models shows that, up to numerical accuracy, the singularity exponents of different scalars take the same values at every point. Taking this correspondence of the singularity exponents into account, it can be proved that two scalars having the same singularity exponents have a relation of functional dependence (a matrix identity involving their gradients). That functional relation can be approximated by a local linear regression under certain hypotheses, which simplifies and speeds up the calculations and leads to a simple algorithm for reducing noise on a given ocean scalar using another, higher-quality variable as a template. This simple algorithm has been applied to SMOS data with a considerable quality gain. As templates, high-level SST maps from different sources have been used, while SMOS L2 and L3 SSS maps, and even brightness temperature maps, play the role of the noisy data to be corrected. In all instances the noise level is reduced by at least a factor of two. This quality gain opens the use of SMOS data for new applications, including the instant identification of ocean fronts, rain lenses, hurricane tracks, etc.
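The local-linear-regression fusion step described above can be sketched in one dimension; the clean "template" field, the exactly linear dependence between the fields, and the window size are our own simplifications of the actual SSS/SST algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)

# toy fields: a clean "SST-like" template and a dependent noisy "SSS-like"
# scalar (here the dependence is exactly linear, an idealization)
nx = 200
template = np.sin(np.linspace(0, 6 * np.pi, nx))
truth = 2.0 * template + 1.0
noisy = truth + 0.5 * rng.standard_normal(nx)

def fuse(noisy, template, half=10):
    """Regress the noisy scalar on the template in a sliding window and
    replace each noisy value by the local regression prediction."""
    out = np.empty_like(noisy)
    for i in range(len(noisy)):
        lo, hi = max(0, i - half), min(len(noisy), i + half + 1)
        a, b = np.polyfit(template[lo:hi], noisy[lo:hi], 1)
        out[i] = a * template[i] + b
    return out

fused = fuse(noisy, template)
rms_before = np.sqrt(np.mean((noisy - truth) ** 2))
rms_after = np.sqrt(np.mean((fused - truth) ** 2))
print(rms_before, rms_after)   # the residual noise drops substantially
```

The real algorithm operates on 2-D maps and relies on the matching singularity exponents to justify the local functional dependence; this 1-D sketch only shows why regressing on a high-quality template suppresses noise.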
Likelihood ratio meta-analysis: New motivation and approach for an old method.
Dormuth, Colin R; Filion, Kristian B; Platt, Robert W
2016-03-01
A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded in excluding the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher-potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed-effect and random-effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. Copyright © 2016 Elsevier Inc. All rights reserved.
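The LogLR summation described above can be sketched under a normal approximation to each study's likelihood; the effect estimates, standard errors, and the 1/8 support cutoff for the "intrinsic" interval are illustrative assumptions, not values from the cited analyses:

```python
import numpy as np

# per-study effect estimates (e.g. log hazard ratios) and standard errors
# (illustrative numbers, not from CAPRIE or the statin meta-analysis)
theta_hat = np.array([-0.12, -0.05, -0.20])
se = np.array([0.06, 0.09, 0.10])

grid = np.linspace(-0.5, 0.3, 4001)

def loglik(theta, est, s):
    """Normal-approximation log-likelihood of one study on the grid."""
    return -0.5 * ((theta - est) / s) ** 2

# sum the per-study log-likelihood functions, as the method prescribes
total = sum(loglik(grid, e, s) for e, s in zip(theta_hat, se))
loglr = total - total.max()            # log-likelihood ratio vs. the MLE

mle = grid[np.argmax(total)]           # pooled point estimate
# "intrinsic" interval: theta whose likelihood is within 1/8 of the maximum
# (the 1/8 support cutoff is our choice here, not mandated by the method)
inside = grid[loglr >= -np.log(8)]
print(mle, inside.min(), inside.max())
```

With normal likelihoods the pooled MLE coincides with the inverse-variance weighted mean, matching the abstract's observation that the point estimate agrees with the traditional analysis.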
Dascălu, Cristina Gena; Antohe, Magda Ecaterina
2009-01-01
Based on eigenvalue and eigenvector analysis, principal component analysis has the purpose of identifying the subspace of the main components from a set of parameters, which are enough to characterize the whole set of parameters. Interpreting the data under analysis as a cloud of points, we find through geometrical transformations the directions along which the cloud's dispersion is maximal: the lines that pass through the cloud's center of weight and have a maximal density of points around them (found by defining an appropriate criterion function and minimizing it). This method can be successfully used to simplify the statistical analysis of questionnaires, because it helps us select from a set of items only the most relevant ones, those that cover the variation of the whole data set. For instance, in the presented sample we started from a questionnaire with 28 items and, applying principal component analysis, identified 7 principal components (main items), a fact that simplifies the subsequent statistical analysis significantly.
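The eigen-decomposition route to principal components described above can be sketched as follows; the synthetic 28-item questionnaire and the 90% variance cutoff are our own illustrative choices, not the paper's data or criterion:

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic questionnaire: 200 respondents x 28 items, driven by a few
# latent factors (illustrative; the paper analyzes real questionnaire data)
latent = rng.standard_normal((200, 5))
loadings = rng.standard_normal((5, 28))
X = latent @ loadings + 0.3 * rng.standard_normal((200, 28))

# principal components from the eigen-decomposition of the covariance matrix
Xc = X - X.mean(axis=0)                     # center the cloud of points
cov = np.cov(Xc, rowvar=False)
evals, evecs = np.linalg.eigh(cov)          # eigh returns ascending order
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# keep the fewest components explaining, say, 90% of the total variance
ratio = np.cumsum(evals) / np.sum(evals)
k = int(np.searchsorted(ratio, 0.90)) + 1
scores = Xc @ evecs[:, :k]                  # reduced representation
print(k, scores.shape)
```

The eigenvectors are exactly the maximal-dispersion directions through the cloud's center of weight that the abstract describes, and k plays the role of the 7 components retained from the 28 items.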
Ni, Jianhua; Qian, Tianlu; Xi, Changbai; Rui, Yikang; Wang, Jiechen
2016-08-18
The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the outside and inside areas of the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After computing the correlation analysis between different categorized hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher with private and small hospitals than with public and large hospitals. The comprehensive analysis results could help examine the reasonability of existing urban healthcare facility distribution and optimize the location of new healthcare facilities.
NASA Astrophysics Data System (ADS)
Hengl, Tomislav
2015-04-01
Efficiency of spatial sampling largely determines the success of model building. This is especially important for geostatistical mapping, where an initial sampling plan should provide good representation or coverage of both geographical space (defined by the study area mask map) and feature space (defined by the multi-dimensional covariates). Otherwise the model will need to extrapolate and, hence, the overall uncertainty of the predictions will be high. In many cases, geostatisticians use point data sets which are produced using unknown or inconsistent sampling algorithms. Many point data sets in environmental sciences suffer from spatial clustering and systematic omission of feature space. But how to quantify these 'representation' problems and how to incorporate this knowledge into model building? The author has developed a generic function called 'spsample.prob' (Global Soil Information Facilities package for R) which simultaneously determines (effective) inclusion probabilities as an average between kernel density estimation (geographical spreading of points; analysed using the spatstat package in R) and MaxEnt analysis (feature space spreading of points; analysed using the MaxEnt software used primarily for species distribution modelling). The output 'iprob' map indicates whether the sampling plan has systematically missed some important locations and/or features, and can also be used as an input for geostatistical modelling, e.g. as a weight map for geostatistical model fitting. The spsample.prob function can also be used in combination with accessibility analysis (costs of a field survey are usually a function of distance from the road network, slope and land cover) to allow for simultaneous maximization of average inclusion probabilities and minimization of total survey costs.
The author postulates that, by estimating effective inclusion probabilities using combined geographical and feature space analysis, and by comparing survey costs to representation efficiency, an optimal initial sampling plan can be produced which satisfies both criteria: (a) good representation (i.e. within a tolerance threshold), and (b) minimized survey costs. This sampling analysis framework could become especially interesting for generating sampling plans in new areas e.g. for which no previous spatial prediction model exists. The presentation includes data processing demos with standard soil sampling data sets Ebergotzen (Germany) and Edgeroi (Australia), also available via the GSIF package.
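The averaging idea behind 'iprob' can be sketched in a few lines, assuming two already-normalized density surfaces stand in for the geographic (spatstat) and feature-space (MaxEnt) outputs; the grids and threshold below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical density surfaces over a 50x50 study-area grid: one from
# geographic kernel density, one from feature-space (MaxEnt-like) analysis.
geo = rng.random((50, 50))
feat = rng.random((50, 50))
geo /= geo.max()
feat /= feat.max()

# Effective inclusion probability as the average of the two surfaces,
# mirroring the averaging described in the abstract.
iprob = (geo + feat) / 2.0

# Cells with low iprob flag locations/features the sampling plan missed.
undersampled = iprob < 0.2
print(iprob.shape, int(undersampled.sum()))
```

In the real workflow the resulting map can then weight geostatistical model fitting or be traded off against a survey-cost surface.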
Meta-analysis: exercise therapy for nonspecific low back pain.
Hayden, Jill A; van Tulder, Maurits W; Malmivaara, Antti V; Koes, Bart W
2005-05-03
Exercise therapy is widely used as an intervention in low back pain. To evaluate the effectiveness of exercise therapy in adult nonspecific acute, subacute, and chronic low back pain versus no treatment and other conservative treatments. MEDLINE, EMBASE, PsychInfo, CINAHL, and Cochrane Library databases to October 2004; citation searches and bibliographic reviews of previous systematic reviews. Randomized, controlled trials evaluating exercise therapy for adult nonspecific low back pain and measuring pain, function, return to work or absenteeism, and global improvement outcomes. Two reviewers independently selected studies and extracted data on study characteristics, quality, and outcomes at short-, intermediate-, and long-term follow-up. 61 randomized, controlled trials (6390 participants) met inclusion criteria: acute (11 trials), subacute (6 trials), and chronic (43 trials) low back pain (1 trial was unclear). Evidence suggests that exercise therapy is effective in chronic back pain relative to comparisons at all follow-up periods. Pooled mean improvement (of 100 points) was 7.3 points (95% CI, 3.7 to 10.9 points) for pain and 2.5 points (CI, 1.0 to 3.9 points) for function at earliest follow-up. In studies investigating patients (people seeking care for back pain), mean improvement was 13.3 points (CI, 5.5 to 21.1 points) for pain and 6.9 points (CI, 2.2 to 11.7 points) for function, compared with studies where some participants had been recruited from a general population (for example, with advertisements). Some evidence suggests effectiveness of a graded-activity exercise program in subacute low back pain in occupational settings, although the evidence for other types of exercise therapy in other populations is inconsistent. In acute low back pain, exercise therapy and other programs were equally effective (pain, 0.03 point [CI, -1.3 to 1.4 points]). 
Limitations of the literature include low-quality studies, heterogeneous outcome measures, inconsistent and poor reporting, and the possibility of publication bias. Exercise therapy seems to be slightly effective at decreasing pain and improving function in adults with chronic low back pain, particularly in health care populations. In subacute low back pain populations, some evidence suggests that a graded-activity program improves absenteeism outcomes, although evidence for other types of exercise is unclear. In acute low back pain populations, exercise therapy is as effective as either no treatment or other conservative treatments.
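The pooled mean improvements reported above rest on inverse-variance weighting. A minimal fixed-effect sketch with made-up study values (the review's actual data and model are more elaborate) shows the principle:

```python
import numpy as np

# Hypothetical per-study mean improvements (100-point pain scale) and
# standard errors; illustrative only, not the review's data.
means = np.array([6.0, 9.5, 4.8, 12.0])
ses = np.array([2.0, 3.1, 1.5, 4.0])

# Fixed-effect inverse-variance pooling: precise studies get more weight.
w = 1.0 / ses**2
pooled = np.sum(w * means) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(round(pooled, 2), [round(x, 2) for x in ci])
```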
Speed of recovery after arthroscopic rotator cuff repair.
Kurowicki, Jennifer; Berglund, Derek D; Momoh, Enesi; Disla, Shanell; Horn, Brandon; Giveans, M Russell; Levy, Jonathan C
2017-07-01
The purpose of this study was to delineate the time taken to achieve maximum improvement (plateau of recovery) and the degree of recovery observed at various time points (speed of recovery) for pain and function after arthroscopic rotator cuff repair. An institutional shoulder surgery registry query identified 627 patients who underwent arthroscopic rotator cuff repair between 2006 and 2015. Measured range of motion, patient satisfaction, and patient-reported outcome measures were analyzed for preoperative, 3-month, 6-month, 1-year, and 2-year intervals. Subgroup analysis was performed on the basis of tear size by retraction grade and number of anchors used. As an entire group, the plateau of maximum recovery for pain, function, and motion occurred at 1 year. Satisfaction with surgery was >96% at all time points. At 3 months, 74% of improvement in pain and 45% to 58% of functional improvement were realized. However, only 22% of elevation improvement was achieved (P < .001). At 6 months, 89% of improvement in pain, 81% to 88% of functional improvement, and 78% of elevation improvement were achieved (P < .001). Larger tears had a slower speed of recovery for Single Assessment Numeric Evaluation scores, forward elevation, and external rotation. Smaller tears had higher motion and functional scores across all time points. Tear size did not influence pain levels. The plateau of maximum recovery after rotator cuff repair occurred at 1 year with high satisfaction rates at all time points. At 3 months, approximately 75% of pain relief and 50% of functional recovery can be expected. Larger tears have a slower speed of recovery. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Jones, Conor M; DeWalt, Darren A; Huang, I-Chan
Poor asthma control in children is related to impaired patient-reported outcomes (PROs; eg, fatigue, depressive symptoms, anxiety), but less well studied is the effect of PROs on children's school performance and sleep outcomes. In this study we investigated whether the consistency status of PROs over time affected school functioning and daytime sleepiness in children with asthma. Of the 238 children with asthma enrolled in the Patient-Reported Outcomes Measurement Information System (PROMIS) Pediatric Asthma Study, 169 children who provided survey data for all 4 time points were used in the analysis. The child's PROs, school functioning, and daytime sleepiness were measured 4 times within a 15-month period. PRO domains included asthma impact, pain interference, fatigue, depressive symptoms, anxiety, and mobility. Each child was classified as having poor/fair versus good PROs per meaningful cut points. The consistency status of each domain was classified as consistently poor/fair if poor/fair status was present for at least 3 time points; otherwise, the status was classified as consistently good. Seemingly unrelated regression was performed to test if consistently poor/fair PROs predicted impaired school functioning and daytime sleepiness at the fourth time point. Consistently poor/fair in all PRO domains was significantly associated with impaired school functioning and excessive daytime sleepiness (Ps < .01) after controlling for the influence of the child's age, sex, and race/ethnicity. Children with asthma with consistently poor/fair PROs are at risk of poor school functioning and daytime sleepiness. Developing child-friendly PRO assessment systems to track PROs can inform potential problems in the school setting. Copyright © 2017 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Acupuncture for treating fibromyalgia
Deare, John C; Zheng, Zhen; Xue, Charlie CL; Liu, Jian Ping; Shang, Jingsheng; Scott, Sean W; Littlejohn, Geoff
2014-01-01
Background One in five fibromyalgia sufferers use acupuncture treatment within two years of diagnosis. Objectives To examine the benefits and safety of acupuncture treatment for fibromyalgia. Search methods We searched CENTRAL, PubMed, EMBASE, CINAHL, National Research Register, HSR Project and Current Contents, as well as the Chinese databases VIP and Wanfang to January 2012 with no language restrictions. Selection criteria Randomised and quasi-randomised studies evaluating any type of invasive acupuncture for fibromyalgia diagnosed according to the American College of Rheumatology (ACR) criteria, and reporting any main outcome: pain, physical function, fatigue, sleep, total well-being, stiffness and adverse events. Data collection and analysis Two author pairs selected trials, extracted data and assessed risk of bias. Treatment effects were reported as standardised mean differences (SMD) and 95% confidence intervals (CI) for continuous outcomes using different measurement tools (pain, physical function, fatigue, sleep, total well-being and stiffness) and risk ratio (RR) and 95% CI for dichotomous outcomes (adverse events). We pooled data using the random-effects model. Main results Nine trials (395 participants) were included. All studies except one were at low risk of selection bias; five were at risk of selective reporting bias (favouring either treatment group); two were subject to attrition bias (favouring acupuncture); three were subject to performance bias (favouring acupuncture) and one to detection bias (favouring acupuncture). Three studies utilised electro-acupuncture (EA) with the remainder using manual acupuncture (MA) without electrical stimulation. All studies used 'formula acupuncture' except for one, which used trigger points. Low quality evidence from one study (13 participants) showed EA improved symptoms with no adverse events at one month following treatment.
Mean pain in the non-treatment control group was 70 points on a 100 point scale; EA reduced pain by a mean of 22 points (95% confidence interval (CI) 4 to 41), or 22% absolute improvement. Control group global well-being was 66.5 points on a 100 point scale; EA improved well-being by a mean of 15 points (95% CI 5 to 26 points). Control group stiffness was 4.8 points on a 0 to 10 point scale; EA reduced stiffness by a mean of 0.9 points (95% CI 0.1 to 2 points; absolute reduction 9%, 95% CI 4% to 16%). Fatigue was 4.5 points (10 point scale) without treatment; EA reduced fatigue by a mean of 1 point (95% CI 0.22 to 2 points), absolute reduction 11% (2% to 20%). There was no difference in sleep quality (MD 0.4 points, 95% CI −1 to 0.21 points, 10 point scale), and physical function was not reported. Moderate quality evidence from six studies (286 participants) indicated that acupuncture (EA or MA) was no better than sham acupuncture, except for less stiffness at one month. Subgroup analysis of two studies (104 participants) indicated benefits of EA. Mean pain was 70 points on a 0 to 100 point scale with sham treatment; EA reduced pain by 13% (5% to 22%) (SMD −0.63, 95% CI −1.02 to −0.23). Global well-being was 5.2 points on a 10 point scale with sham treatment; EA improved well-being: SMD 0.65, 95% CI 0.26 to 1.05; absolute improvement 11% (4% to 17%). EA improved sleep, from 3 points on a 0 to 10 point scale in the sham group: SMD 0.40 (95% CI 0.01 to 0.79); absolute improvement 8% (0.2% to 16%). Low-quality evidence from one study suggested that the MA group had poorer physical function: mean function in the sham group was 28 points (100 point scale); treatment worsened function by a mean of 6 points (95% CI −10.9 to −0.7). Low-quality evidence from three trials (289 participants) suggested no difference in adverse events between real (9%) and sham acupuncture (35%); RR 0.44 (95% CI 0.12 to 1.63).
Moderate quality evidence from one study (58 participants) found that compared with standard therapy alone (antidepressants and exercise), adjunct acupuncture therapy reduced pain at one month after treatment: mean pain was 8 points on a 0 to 10 point scale in the standard therapy group; treatment reduced pain by 3 points (95% CI −3.9 to −2.1), an absolute reduction of 30% (21% to 39%). Two people treated with acupuncture reported adverse events; there were none in the control group (RR 3.57; 95% CI 0.18 to 71.21). Global well-being, sleep, fatigue and stiffness were not reported. Physical function data were not usable. Low quality evidence from one study (38 participants) showed a short-term benefit of acupuncture over antidepressants in pain relief: mean pain was 29 points (0 to 100 point scale) in the antidepressant group; acupuncture reduced pain by 17 points (95% CI −24.1 to −10.5). Other outcomes or adverse events were not reported. Moderate-quality evidence from one study (41 participants) indicated that deep needling with or without deqi did not differ in pain, fatigue, function or adverse events. Other outcomes were not reported. Four studies reported no differences between acupuncture and control or other treatments described at six to seven months follow-up. No serious adverse events were reported, but there were insufficient adverse events to be certain of the risks. Authors' conclusions There is low to moderate-level evidence that compared with no treatment and standard therapy, acupuncture improves pain and stiffness in people with fibromyalgia. There is moderate-level evidence that the effect of acupuncture does not differ from sham acupuncture in reducing pain or fatigue, or improving sleep or global well-being. EA is probably better than MA for pain and stiffness reduction and improvement of global well-being, sleep and fatigue. The effect lasts up to one month, but is not maintained at six months follow-up.
MA probably does not improve pain or physical functioning. Acupuncture appears safe. People with fibromyalgia may consider using EA alone or with exercise and medication. The small sample sizes, the scarcity of studies for each comparison, and the lack of an ideal sham acupuncture control weaken the level of evidence and its clinical implications. Larger studies are warranted. PMID:23728665
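The standardised mean differences (SMDs) quoted throughout this review divide the between-group difference by a pooled standard deviation so that trials using different scales can be combined. A minimal sketch with hypothetical group summaries (not the review's data), using the Hedges small-sample correction:

```python
import numpy as np

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardised mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups.
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # correction factor
    return j * d

# Hypothetical pain scores (lower is better): acupuncture vs sham.
g = hedges_g(48.0, 20.0, 52, 61.0, 22.0, 52)
print(round(g, 2))
```

A negative g here favors the first group, matching the sign convention of the pain SMDs above.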
Limsrivilai, Julajak; Charatcharoenwitthaya, Phunchai; Pausawasdi, Nonthalee; Leelakusolvong, Somchai
2016-02-01
Tricyclic antidepressants could be effective in the treatment of symptoms related to hypersensitive esophagus through their pain-modulating effect. We therefore assessed the benefit of imipramine in patients with esophageal hypersensitivity and functional heartburn. Patients with normal endoscopy findings and typical reflux symptoms despite standard-dose proton-pump inhibitor therapy underwent 24-h pH-impedance monitoring. Patients with established esophageal hypersensitivity or functional heartburn were randomly assigned to receive 8 weeks of either once-daily imipramine 25 mg (n=43) or placebo (n=40). The primary end point was satisfactory relief of reflux symptoms, defined as a >50% reduction in the gastroesophageal reflux disease score. The secondary end point was improvement in quality-of-life (QoL) as assessed by the 36-Item Short Form Health Survey score. Patients receiving imipramine did not achieve a higher rate of satisfactory relief of reflux symptoms than did patients receiving placebo (intention-to-treat (ITT) analysis: 37.2 vs. 37.5%, respectively; odds ratio (OR), 0.99; 95% confidence interval (CI), 0.41-2.41; per-protocol (PP) analysis: 45.5 vs. 41.2%, respectively; OR, 1.19; 95% CI, 0.45-3.13). Subgroup analysis to assess the efficacy of imipramine for either esophageal hypersensitivity or functional heartburn yielded similar results. Treatment with imipramine provided significant improvement of QoL by PP analysis (72±17 and 61±19, respectively; P=0.048), but ITT analysis did not reveal any differences between imipramine and placebo (68±19 and 61±19, respectively; P=0.26). Adverse events were similar in both groups; however, constipation was more common with imipramine than placebo (51.2 vs. 22.5%, respectively; P=0.01). Although low-dose imipramine shows potential QoL benefits, it does not relieve symptoms more effectively than does placebo in patients with either esophageal hypersensitivity or functional heartburn.
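The odds ratios reported for the primary end point follow the standard 2x2-table calculation with a Wald confidence interval. A sketch, with responder counts reconstructed approximately from the reported rates (37.2% of 43 vs. 37.5% of 40; illustrative, not exact):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = responders/non-responders on treatment, c/d on placebo."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

res = odds_ratio_ci(16, 27, 15, 25)
print([round(x, 2) for x in res])
```

These approximate counts reproduce an OR near 0.99 with a CI close to the reported 0.41 to 2.41.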
Dong, Ren G; Welcome, Daniel E; McDowell, Thomas W; Wu, John Z
2013-11-25
The relationship between the vibration transmissibility and driving-point response functions (DPRFs) of the human body is important for understanding vibration exposures of the system and for developing valid models. This study identified their theoretical relationship and demonstrated that the sum of the DPRFs can be expressed as a linear combination of the transmissibility functions of the individual mass elements distributed throughout the system. The relationship is verified using several human vibration models. This study also clarified the requirements for reliably quantifying transmissibility values used as references for calibrating the system models. As an example application, this study used the developed theory to perform a preliminary analysis of the method for calibrating models using both vibration transmissibility and DPRFs. The results of the analysis show that the combined method can theoretically result in a unique and valid solution of the model parameters, at least for linear systems. However, the validation of the method itself does not guarantee the validation of the calibrated model, because the validation of the calibration also depends on the model structure and the reliability and appropriate representation of the reference functions. The basic theory developed in this study is also applicable to the vibration analyses of other structures.
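The single-degree-of-freedom case makes the stated relationship concrete: for one base-excited mass element, the driving-point apparent mass equals the element's mass times its transmissibility. A minimal numerical sketch with illustrative parameter values (not taken from the paper):

```python
import numpy as np

# Mass on a spring k and damper c, excited through its base.
m, k, c = 60.0, 5.0e4, 600.0                 # kg, N/m, N*s/m
w = 2 * np.pi * np.linspace(0.5, 20, 400)    # angular frequency, rad/s

# Acceleration transmissibility of the mass element.
T = (k + 1j * c * w) / (k - m * w**2 + 1j * c * w)
# Driving-point apparent mass: mass times transmissibility.
M_app = m * T

# At low frequency the system moves rigidly: |T| -> 1, |M_app| -> m.
print(round(abs(T[0]), 3), round(abs(M_app[0]) / m, 3))
```

With several mass elements, the apparent mass becomes the mass-weighted sum of the individual transmissibilities, which is the linear-combination structure the paper derives for general linear systems.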
Franchi, Lorenzo; Pavoni, Chiara; Faltin, Kurt; Bigliazzi, Renato; Gazzani, Francesca; Cozza, Paola
2016-09-01
The purpose of this work was to evaluate the long-term morphological mandibular changes induced by functional treatment of Class II malocclusion with mandibular retrusion. Forty patients (20 females, 20 males) with Class II malocclusion consecutively treated with either a Bionator or an Activator followed by fixed appliances were compared with a control group of 40 subjects (19 females, 21 males) with untreated Class II malocclusion. Lateral cephalograms were available at the start of treatment (T1, mean age 9.9 years), at the end of treatment with functional appliances (T2, mean age 12.2 years), and for long-term follow-up (T3, mean age 18.3 years). Mandibular shape changes were analyzed on lateral cephalograms of the subjects in both groups via thin-plate spline (TPS) analysis. Shape differences were statistically analyzed by conducting permutation tests on Goodall F statistics. In the long term, both the treated and control groups exhibited significant longitudinal mandibular shape changes characterized by upward and forward dislocation of point Co associated with a vertical extension in the gonial region and backward dislocation of point B. Functional appliances induced a significant posterior morphogenetic rotation of the mandible over the short term. The treated and control groups demonstrated similar mandibular shape over the long term.
Text Mining Improves Prediction of Protein Functional Sites
Cohn, Judith D.; Ravikumar, Komandur E.
2012-01-01
We present an approach that integrates protein structure analysis and text mining for protein functional site prediction, called LEAP-FS (Literature Enhanced Automated Prediction of Functional Sites). The structure analysis was carried out using Dynamics Perturbation Analysis (DPA), which predicts functional sites at control points where interactions greatly perturb protein vibrations. The text mining extracts mentions of residues in the literature, and predicts that residues mentioned are functionally important. We assessed the significance of each of these methods by analyzing their performance in finding known functional sites (specifically, small-molecule binding sites and catalytic sites) in about 100,000 publicly available protein structures. The DPA predictions recapitulated many of the functional site annotations and preferentially recovered binding sites annotated as biologically relevant vs. those annotated as potentially spurious. The text-based predictions were also substantially supported by the functional site annotations: compared to other residues, residues mentioned in text were roughly six times more likely to be found in a functional site. The overlap of predictions with annotations improved when the text-based and structure-based methods agreed. Our analysis also yielded new high-quality predictions of many functional site residues that were not catalogued in the curated data sources we inspected. We conclude that both DPA and text mining independently provide valuable high-throughput protein functional site predictions, and that integrating the two methods using LEAP-FS further improves the quality of these predictions. PMID:22393388
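The text-mining step, extracting residue mentions from free text, can be sketched with a regular expression (the real pipeline maps mentions to specific structures and annotations; the pattern and example sentence below are illustrative):

```python
import re

# Match residue mentions such as "His57" or "Ser-195": a three-letter
# amino-acid code, an optional hyphen, then the sequence position.
RESIDUE = re.compile(
    r"\b(Ala|Arg|Asn|Asp|Cys|Gln|Glu|Gly|His|Ile|Leu|Lys|Met|Phe|Pro|"
    r"Ser|Thr|Trp|Tyr|Val)-?(\d+)\b"
)

text = ("Mutation of His57 and Ser-195 abolished catalysis, "
        "whereas Gly193 was tolerant to substitution.")
mentions = [(aa, int(num)) for aa, num in RESIDUE.findall(text)]
print(mentions)
```

Each extracted (residue, position) pair would then be scored as a candidate functional-site residue and compared against the structure-based DPA predictions.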
Spatial Variations of DOM Compositions in the River with Multi-functional Weir
NASA Astrophysics Data System (ADS)
Yoon, S. M.; Choi, J. H.
2017-12-01
With the global trend to construct artificial impoundments over the last decades, a Large River Restoration Project was conducted in South Korea from 2010 to 2011. The project included enlargement of river channel capacity and construction of multi-functional weirs, which can alter the hydrological flow of the river and cause spatial variations of water quality indicators, especially DOM (Dissolved Organic Matter) compositions. In order to analyze the spatial variations of organic matter, water samples were collected longitudinally (5 points upstream from the weir), horizontally (left, center, right at each point) and vertically (1 m intervals at each point). The specific UV-visible absorbance (SUVA) and fluorescence excitation-emission matrices (EEMs) have been used as rapid and non-destructive analytical methods for DOM compositions. In addition, parallel factor analysis (PARAFAC) has been adopted for extracting a set of representative fluorescence components from EEMs. It was assumed that autochthonous DOM would be dominant near the weir due to the stagnation of hydrological flow. However, the results showed that the values of the fluorescence index (FI) were 1.29-1.47, less than 2, indicating that DOM of allochthonous origin dominated in the water near the weir. PARAFAC analysis also showed a peak at 450 nm emission and < 250 nm excitation, which represents the humic substances group with terrestrial origins. There was no significant difference in the values of the biological index (BIX); however, values of the humification index (HIX) increased spatially toward the weir. From the results of the water sample analysis, the river with a multi-functional weir is influenced by allochthonous rather than autochthonous DOM and appears to accumulate humic substances near the weir.
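Indices such as FI are simple intensity ratios read off the EEM. A sketch using one common definition of FI, the ratio of emission intensity at 470 nm to 520 nm for excitation at 370 nm (exact wavelengths vary between studies, and the EEM below is a synthetic stand-in):

```python
import numpy as np

ex = np.arange(240, 451, 10)          # excitation axis, nm
em = np.arange(300, 601, 2)           # emission axis, nm
rng = np.random.default_rng(4)
eem = rng.random((ex.size, em.size))  # synthetic stand-in EEM

# Nearest grid indices for the wavelengths of interest.
i_ex = int(np.argmin(np.abs(ex - 370)))
fi = (eem[i_ex, int(np.argmin(np.abs(em - 470)))]
      / eem[i_ex, int(np.argmin(np.abs(em - 520)))])
print(round(float(fi), 2))
```

Low FI values (as the 1.29-1.47 reported here) are conventionally read as terrestrially derived, allochthonous DOM.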
Use of Hearing Aids and Functional Capacity in Middle-Aged and Elderly Individuals
Carioli, Juliana; Teixeira, Adriane Ribeiro
2014-01-01
Introduction Hearing loss is among the sensory changes strongly associated with loss of functional capacity. Objective To determine whether the use of hearing aids contributes to the improvement of instrumental activities of daily living (IADL) for middle-aged and elderly hearing-impaired individuals. Methods This is a descriptive, longitudinal, and interventional study. We evaluated 17 subjects, 13 (76.5%) female, aged between 58 and 96 years old (mean 77.1 ± 10.4 years). All were new users of hearing aids. Evaluation included social history, pure tone audiometry, and the scale of IADL developed by Lawton and Brody. The subjects were presented with daily life situations and were expected to respond whether they could do them without assistance (3 points), partially assisted (2 points), or were unable to perform them (1 point). The IADL scale was applied before hearing aid fitting and after three and six months of use. Results Data analysis revealed that before the use of hearing aids the average score obtained by the subjects was 22.94 ± 4.04 points. Three months after beginning use the average score was 23.29 ± 4.12 points, and after six months the average score was 23.71 ± 3.69 points. Statistical analysis revealed a significant difference between scores obtained before the use of hearing aids and six months post-fitting (p = 0.015). Conclusion The use of hearing aids among the subjects evaluated promoted positive changes in performing IADL, especially using the telephone. PMID:25992101
Network propagation in the cytoscape cyberinfrastructure.
Carlin, Daniel E; Demchak, Barry; Pratt, Dexter; Sage, Eric; Ideker, Trey
2017-10-01
Network propagation is an important and widely used algorithm in systems biology, with applications in protein function prediction, disease gene prioritization, and patient stratification. However, up to this point it has required significant expertise to run. Here we extend the popular network analysis program Cytoscape to perform network propagation as an integrated function. This integration greatly broadens access to network propagation by putting it in the hands of biologists and linking it to the many other types of network analysis and visualization available through Cytoscape. We demonstrate the power and utility of the algorithm by identifying mutations conferring resistance to vemurafenib.
COSMOS-e'-soft Higgsotic attractors
NASA Astrophysics Data System (ADS)
Choudhury, Sayantan
2017-07-01
In this work, we have developed an elegant algorithm to study the cosmological consequences of a huge class of quantum field theories (i.e. superstring theory, supergravity, extra dimensional theory, modified gravity, etc.), which are equivalently described by soft attractors in the effective field theory framework. In this description we have restricted our analysis to two scalar fields, the dilaton and Higgsotic fields, minimally coupled with Einstein gravity; this can be generalized to any arbitrary number of scalar field contents with generalized non-canonical and non-minimal interactions. We have explicitly used R^2 gravity, from which we have studied the attractor and non-attractor phases by exactly computing two point, three point and four point correlation functions from scalar fluctuations using the In-In (Schwinger-Keldysh) and the δN formalisms. We have also presented theoretical bounds on the amplitude, tilt and running of the primordial power spectrum, and on the various shapes (equilateral, squeezed, folded kite or counter-collinear) of the amplitude as obtained from three and four point scalar functions, which are consistent with observed data. The results from two point tensor fluctuations and the field excursion formula are also explicitly presented for the attractor and non-attractor phases. Further, reheating constraints, the scale dependent behavior of the couplings and the dynamical solutions for the dilaton and Higgsotic fields are presented. New sets of consistency relations between two, three and four point observables are also presented, which show significant deviations from canonical slow-roll models. Additionally, three possible theoretical proposals are presented to overcome the tachyonic instability at the time of late time acceleration. Finally, we provide the bulk interpretation of the three and four point scalar correlation functions for completeness.
PynPoint code for exoplanet imaging
NASA Astrophysics Data System (ADS)
Amara, A.; Quanz, S. P.; Akeret, J.
2015-04-01
We announce the public release of PynPoint, a Python package that we have developed for analysing exoplanet data taken with the angular differential imaging observing technique. In particular, PynPoint is designed to model the point spread function of the central star and to subtract its flux contribution to reveal nearby faint companion planets. The current version of the package does this correction by using a principal component analysis method to build a basis set for modelling the point spread function of the observations. We demonstrate the performance of the package by reanalysing publicly available data on the exoplanet β Pictoris b, which consists of close to 24,000 individual image frames. We show that PynPoint is able to analyse this typical data in roughly 1.5 min on a Mac Pro, when the number of images is reduced by co-adding in sets of 5. The main computational work, the calculation of the Singular-Value-Decomposition, parallelises well as a result of a reliance on the SciPy and NumPy packages. For this calculation the peak memory load is 6 GB, which can be run comfortably on most workstations. A simpler calculation, by co-adding over 50, takes 3 s with a peak memory usage of 600 MB. This can be performed easily on a laptop. In developing the package we have modularised the code so that we will be able to extend functionality in future releases, through the inclusion of more modules, without it affecting the users application programming interface. We distribute the PynPoint package under GPLv3 licence through the central PyPI server, and the documentation is available online (http://pynpoint.ethz.ch).
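The core of the PCA-based PSF correction can be sketched with NumPy's SVD (a simplified stand-in for PynPoint's pipeline; real angular differential imaging also derotates and combines the residual frames, which is omitted here, and the frame stack below is synthetic):

```python
import numpy as np

def psf_subtract(frames, n_pc=5):
    """PCA-style PSF subtraction sketch.
    frames: (n_frames, n_pixels) flattened image stack."""
    mean = frames.mean(axis=0)
    X = frames - mean
    # Build a principal-component basis from the stack itself.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_pc]
    # Model each frame's stellar PSF in that basis and subtract it.
    model = (X @ basis.T) @ basis
    return X - model

rng = np.random.default_rng(3)
frames = rng.normal(size=(100, 64 * 64))   # synthetic stand-in stack
residuals = psf_subtract(frames, n_pc=5)
print(residuals.shape)
```

Because the quasi-static stellar PSF dominates the leading components while a planet's signal rotates across frames, subtracting the low-rank model suppresses the star far more than the companion.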
A Teachable Moment Uncovered by Video Analysis
NASA Astrophysics Data System (ADS)
Gates, Joshua
2011-05-01
Early in their study of one-dimensional kinematics, my students build an algebraic model that describes the effects of a rolling ball's (perpendicular) collision with a wall. The goal is for the model to predict the ball's velocity when it returns to a fixed point approximately 50-100 cm from the wall as a function of its velocity as it passes this point initially. They are told to assume that the ball's velocity does not change while it rolls to or from the wall—that the velocity change all happens very quickly and only at the wall. In order to evaluate this assumption following the data collection, I have the students analyze one such collision using video analysis. The results uncover an excellent teachable moment about assumptions and their impact on models and error analysis.
Group-theoretical analysis of two-dimensional hexagonal materials
NASA Astrophysics Data System (ADS)
Minami, Susumu; Sugita, Itaru; Tomita, Ryosuke; Oshima, Hiroyuki; Saito, Mineo
2017-10-01
Two-dimensional hexagonal materials such as graphene and silicene have highly symmetric crystal structures and Dirac cones at the K point, which induce novel electronic properties. In this report, we calculate their electronic structures by using density functional theory and analyze their band structures on the basis of the group theory. Dirac cones frequently appear when the symmetry at the K point is high; thus, two-dimensional irreducible representations are included. We discuss the relationship between symmetry and the appearance of the Dirac cone.
Proceedings of the 1982 Army Numerical Analysis and Computers Conference.
1982-08-01
field array WACC (l,J). Configuration types. The cartesian coordinates of the points on the entire boundary of the physical region, i.e., the closed outer...the field array WACC . This calculation is discussed in Ref.[8],where it is noted that the values obtained are not truly optimum in all cases...placed in the field 60 4g array WACC . The addition to the control functions from attraction to specified lines and/or points in the physical region is
Roh, Hyun Woong; Hong, Chang Hyung; Lee, SooJin; Lee, Yunhwan; Lee, Kang Soo; Chang, Ki Jung; Oh, Byoung Hoon; Choi, Seong Hye; Kim, Seong Yoon; Back, Joung Hwan; Chung, Young Ki; Lim, Ki Young; Noh, Jai Sung; Son, Sang Joon
2015-11-01
To determine the association between frontal lobe function and risk of hip fracture in patients with Alzheimer disease (AD), a retrospective cohort study was conducted using a multicenter hospital-based dementia registry and national health insurance claim data. Participants with available neuropsychological test results, national health insurance claim data, and other covariates were included. A total of 1660 patients with AD were included based on Stroop Test results, and 1563 patients with AD were included based on Controlled Oral Word Association Test (COWAT) results. Hip fracture was identified by validated criteria using national health insurance claim data. Frontal lobe function was measured by the Stroop Test and COWAT at baseline. After adjusting for potential covariates, including cognitive function in other domains (language, verbal and nonverbal memory, and attention), Cox proportional hazard regression analysis revealed that the risk of hip fracture decreased with a hazard ratio (HR) of 0.98 per one-point increase in the Stroop Test (adjusted HR = 0.98, 95% confidence interval [CI]: 0.97-1.00) and 0.93 per one-point increase in COWAT (adjusted HR = 0.93, 95% CI: 0.88-0.99). The risk of hip fracture in AD patients was thus associated with baseline frontal lobe function; this result presents evidence of an association between frontal lobe function and risk of hip fracture in patients with AD.
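Under a Cox model, the per-point hazard ratios reported in this abstract compound multiplicatively, so their implication for larger score differences is easy to check. The 10-point and 5-point differences below are illustrative choices, not values from the study.

```python
# Per-point hazard ratios reported in the abstract
hr_stroop = 0.98   # per one-point increase in Stroop Test score
hr_cowat = 0.93    # per one-point increase in COWAT score

# Cox-model effects compound multiplicatively, so a 10-point higher
# Stroop score (or 5-point higher COWAT score) implies:
hr_stroop_10 = hr_stroop ** 10
hr_cowat_5 = hr_cowat ** 5

print(round(hr_stroop_10, 3))  # 0.817
print(round(hr_cowat_5, 3))    # 0.696
```

That is, a 10-point Stroop difference corresponds to roughly an 18% lower hazard of hip fracture under the fitted model.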
A color prediction model for imagery analysis
NASA Technical Reports Server (NTRS)
Skaley, J. E.; Fisher, J. R.; Hardy, E. E.
1977-01-01
A simple model has been devised to selectively construct several points within a scene using multispectral imagery. The model correlates black-and-white density values to color components of diazo film so as to maximize the color contrast of two or three points per composite. The CIE (Commission Internationale de l'Eclairage) color coordinate system is used as a quantitative reference to locate these points in color space. Superimposed on this quantitative reference is a perceptional framework which functionally contrasts color values in a psychophysical sense. This methodology permits a more quantitative approach to the manual interpretation of multispectral imagery while resulting in improved accuracy and lower costs.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many cases and has a simple statistical form; its characteristic is a constant hazard rate, and it is a special case of the Weibull distribution (shape parameter equal to one). In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the corresponding analytic methods. The cases are limited to models with independent causes of failure, and a non-informative prior distribution is used in our analysis. The model description covers the likelihood function, followed by the posterior function and the point, interval, hazard function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
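A minimal sketch of the single-risk building block of such an analysis: with the noninformative prior π(λ) ∝ 1/λ, the posterior of the exponential rate given n failures in total exposure time T is Gamma(shape = n, rate = T). The simulated data and the plug-in reliability at t = 1 below are illustrative, not from the paper.

```python
import numpy as np
from scipy import stats

# Simulated exponential failure times (true rate 0.5 per unit time)
rng = np.random.default_rng(1)
times = rng.exponential(scale=2.0, size=40)
n, T = len(times), times.sum()

# With the (improper) noninformative prior pi(lambda) ~ 1/lambda,
# the posterior of the rate is Gamma(shape=n, rate=T).
post = stats.gamma(a=n, scale=1.0 / T)
lam_hat = n / T                     # posterior mean (equals the MLE here)
ci = post.ppf([0.025, 0.975])       # 95% credible interval for the rate

# Plug-in estimates: constant hazard, and reliability at t = 1
hazard = lam_hat
reliability = np.exp(-lam_hat * 1.0)
```

In the competing-risks setting, each independent cause gets its own rate posterior of this form, and the net and crude failure probabilities are built from them.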
The Researches on Damage Detection Method for Truss Structures
NASA Astrophysics Data System (ADS)
Wang, Meng Hong; Cao, Xiao Nan
2018-06-01
This paper presents an effective method to detect damage in truss structures. Numerical simulation and experimental analysis were carried out on a damaged truss structure under instantaneous excitation. The ideal excitation point and appropriate hammering method were determined to extract time domain signals under two working conditions. The frequency response function and principal component analysis were used for data processing, and the angle between the frequency response function vectors was selected as a damage index to ascertain the location of a damaged bar in the truss structure. In the numerical simulation, the time domain signal of all nodes was extracted to determine the location of the damaged bar. In the experimental analysis, the time domain signal of a portion of the nodes was extracted on the basis of an optimal sensor placement method based on the node strain energy coefficient. The results of the numerical simulation and experimental analysis showed that the damage detection method based on the frequency response function and principal component analysis could locate the damaged bar accurately.
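The damage index described above, the angle between frequency response function (FRF) vectors, can be computed directly from the normalized inner product. The function name and the short example vectors below are illustrative, not taken from the paper.

```python
import numpy as np

def frf_angle(h_ref, h_test):
    """Damage index: angle (radians) between two frequency response
    function vectors; larger angles indicate greater change."""
    c = np.abs(np.vdot(h_ref, h_test)) / (np.linalg.norm(h_ref) * np.linalg.norm(h_test))
    return np.arccos(np.clip(c, 0.0, 1.0))

h_healthy = np.array([1.0, 0.8, 0.5, 0.2])
h_damaged = np.array([1.0, 0.6, 0.7, 0.4])
print(frf_angle(h_healthy, h_healthy))  # ~0 for identical FRFs
print(frf_angle(h_healthy, h_damaged))  # > 0 when the FRF has changed
```

Comparing this angle across measurement locations is what allows the damaged bar to be localized: the index grows most near the damage.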
Extracting the Essential Cartographic Functionality of Programs on the Web
NASA Astrophysics Data System (ADS)
Ledermann, Florian
2018-05-01
Following Aristotle, F. P. Brooks (1987) emphasizes the distinction between "essential difficulties" and "accidental difficulties" as a key challenge in software engineering. From the point of view of cartography, it would be desirable to identify the cartographic essence of a program and subject it to additional scrutiny, while its accidental properties, again from the point of view of cartography, are usually of lesser relevance to cartographic analysis. In this paper, two methods that facilitate extracting the cartographic essence of programs are presented: close reading of their source code, and the automated analysis of their runtime behavior. The advantages and shortcomings of both methods are discussed, followed by an outlook to future developments and potential applications.
Novel Analytic Methods Needed for Real-Time Continuous Core Body Temperature Data
Hertzberg, Vicki; Mac, Valerie; Elon, Lisa; Mutic, Nathan; Mutic, Abby; Peterman, Katherine; Tovar-Aguilar, J. Antonio; Economos, Jeannie; Flocks, Joan; McCauley, Linda
2017-01-01
Affordable measurement of core body temperature, Tc, in a continuous, real-time fashion is now possible. With this advance comes a new data analysis paradigm for occupational epidemiology. We characterize issues arising after obtaining Tc data over 188 workdays for 83 participating farmworkers, a population vulnerable to effects of rising temperatures due to climate change. We describe a novel approach to these data using smoothing and functional data analysis. This approach highlights different data aspects compared to describing Tc at a single time point or summaries of the time course into an indicator function (e.g., did Tc ever exceed 38°C, the threshold limit value for occupational heat exposure). Participants working in ferneries had significantly higher Tc at some point during the workday compared to those working in nurseries, despite a shorter workday for fernery participants. Our results typify the challenges and opportunities in analyzing big data streams from real-time physiologic monitoring. PMID:27756853
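The contrast the authors draw, a single threshold indicator versus treating the whole day's Tc trace as a function, can be illustrated on simulated data. The moving average below is a simple stand-in for the spline smoothing used in functional data analysis; the simulated workday and the 37.5 °C summary are our own illustrative choices.

```python
import numpy as np

# Simulated core body temperature sampled each minute over an 8 h workday
rng = np.random.default_rng(2)
minutes = np.arange(8 * 60)
tc = 37.0 + 0.8 * np.sin(minutes / 200.0) + 0.15 * rng.standard_normal(minutes.size)

# Indicator summary: did Tc ever exceed the 38 deg C threshold limit value?
ever_exceeded = bool((tc > 38.0).any())

# Functional view: smooth the whole curve (moving average as a simple
# stand-in for the spline smoothing of functional data analysis), then
# summarize features of the curve, e.g. time spent above 37.5 deg C
k = 15
smooth = np.convolve(tc, np.ones(k) / k, mode="same")
minutes_above = int((smooth > 37.5).sum())
```

The functional view keeps the shape and duration of heat exposure, which the single yes/no indicator discards.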
NASA Technical Reports Server (NTRS)
Muniz, R.; Martinez, E.; Szafran, J.; Dalton, A.
2011-01-01
The Function Point Analysis (FPA) Depot is a web application originally designed by one of the NE-C3 branch's engineers, Jamie Szafran, and created specifically for the Software Development team of the Launch Control Systems (LCS) project. The application evaluates the work of each developer in order to obtain a realistic estimate of the hours to be assigned to a specific development task. The Architect Team had made design change requests for the Depot to change the schema of the application's information; that information, changed in the database, needed to be changed in the graphical user interface (GUI), written in Ruby on Rails (RoR), and in the web service/server side, written in Java, to match the database changes. These changes were made by two interns from NE-C: Ricardo Muniz from NE-C3, who made all the schema changes for the GUI in RoR, and Edwin Martinez, from NE-C2, who made all the changes on the Java side.
Novel Analytic Methods Needed for Real-Time Continuous Core Body Temperature Data.
Hertzberg, Vicki; Mac, Valerie; Elon, Lisa; Mutic, Nathan; Mutic, Abby; Peterman, Katherine; Tovar-Aguilar, J Antonio; Economos, Eugenia; Flocks, Joan; McCauley, Linda
2016-10-18
Affordable measurement of core body temperature (T c ) in a continuous, real-time fashion is now possible. With this advance comes a new data analysis paradigm for occupational epidemiology. We characterize issues arising after obtaining T c data over 188 workdays for 83 participating farmworkers, a population vulnerable to effects of rising temperatures due to climate change. We describe a novel approach to these data using smoothing and functional data analysis. This approach highlights different data aspects compared with describing T c at a single time point or summaries of the time course into an indicator function (e.g., did T c ever exceed 38 °C, the threshold limit value for occupational heat exposure). Participants working in ferneries had significantly higher T c at some point during the workday compared with those working in nurseries, despite a shorter workday for fernery participants. Our results typify the challenges and opportunities in analyzing big data streams from real-time physiologic monitoring. © The Author(s) 2016.
Functional dissection of the Hox protein Abdominal-B in Drosophila cell culture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Zongzhao; CellNetworks - Cluster of Excellence, Centre for Organismal Studies; Graduate School of Chinese Academy of Sciences, Beijing 100039
2011-11-04
Highlights: • The ct340 CRM was identified as the posterior spiracle enhancer of the gene cut. • ct340 is under the direct transcriptional control of the Hox protein Abd-B. • An efficient cloning system was developed to assay protein-DNA interaction. • New features of Abd-B dependent target gene regulation were detected. -- Abstract: Hox transcription factors regulate morphogenesis along the anterior-posterior (A/P) body axis through interaction with small cis-regulatory modules (CRMs) of their target genes; however, so far very few Hox CRMs are known and have been analyzed in detail. In this study we have identified a new Hox CRM, ct340, which guides the expression of the cell type specification gene cut (ct) in the posterior spiracle under the direct control of the Hox protein Abdominal-B (Abd-B). Using the ct340 enhancer activity as readout, an efficient cloning system to generate VP16 activation domain fusion proteins was developed to unambiguously test protein-DNA interaction in Drosophila cell culture. By functionally dissecting the Abd-B protein, new features of Abd-B dependent target gene regulation were detected. Due to its easy adaptability, this system can be used generally to map functional domains within sequence-specific transcription factors in Drosophila cell culture, and thus provide preliminary knowledge of a protein's functional domain structure for further in vivo analysis.
Lin, Shih-Hsien; Chen, Wei Tseng; Chen, Kao Chin; Lee, Sheng-Yu; Lee, I Hui; Chen, Po See; Yeh, Tzung Lieh; Lu, Ru-Band; Yang, Yen Kuang
2013-01-01
The efficacy of methadone maintenance therapy for heroin dependence is compromised by the low retention rate. Hypothalamus-pituitary-adrenal (HPA) axis function, which is associated with stress response, and novelty seeking (NS), a personality trait associated with low dopaminergic activity, may play roles in retention. We conducted a prospective study in which HPA axis function and NS were assessed at baseline by the dexamethasone suppression test and the Tridimensional Personality Questionnaire, respectively. The retention rate was assessed at the half- and 1-year points of methadone maintenance therapy. A low suppression rate on the dexamethasone suppression test (D%) was associated with a high level of NS. A low D% was associated with half-year dropout, whereas a high level of NS was associated with 1-year dropout. Survival analysis confirmed that D% and NS were significant time-dependent covariates for retention. The findings showed that HPA axis function and NS were associated with retention at different time points.
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
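The generalized likelihood ratio idea can be sketched for the simplest template family: a single rate change in binned Poisson counts. This toy version (our construction) omits the paper's basis-function templates, multiple-testing scheme, and dynamic programming, but shows the core statistic: twice the gain in maximized log-likelihood from splitting the data at the best candidate change point.

```python
import numpy as np

def seg_loglik(c):
    """Poisson log-likelihood of a segment at its MLE rate (constants dropped)."""
    lam = c.mean()
    return c.sum() * np.log(lam) - c.size * lam if lam > 0 else 0.0

def glr_statistic(counts):
    """GLR statistic for one rate change vs. a constant rate,
    maximized over all candidate change points."""
    counts = np.asarray(counts, dtype=float)
    ll0 = seg_loglik(counts)                      # homogeneous fit
    best = -np.inf
    for k in range(1, counts.size):
        ll1 = seg_loglik(counts[:k]) + seg_loglik(counts[k:])
        best = max(best, 2.0 * (ll1 - ll0))
    return best

rng = np.random.default_rng(3)
flat = rng.poisson(5.0, size=60)                          # constant rate
step = np.concatenate([rng.poisson(5.0, 30), rng.poisson(12.0, 30)])  # rate change
```

On the `step` data the statistic is large, while on homogeneous data it stays near its null distribution; model selection then compares such statistics across templates.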
Erdal, Barbaros Selnur; Yildiz, Vedat; King, Mark A.; Patterson, Andrew T.; Knopp, Michael V.; Clymer, Bradley D.
2012-01-01
Background: Chest CT scans are commonly used to clinically assess disease severity in patients presenting with pulmonary sarcoidosis. Despite their ability to reliably detect subtle changes in lung disease, the utility of chest CT scans for guiding therapy is limited by the fact that image interpretation by radiologists is qualitative and highly variable. We sought to create a computerized CT image analysis tool that would provide quantitative and clinically relevant information. Methods: We established that a two-point correlation analysis approach reduced the background signal attendant to normal lung structures, such as blood vessels, airways, and lymphatics while highlighting diseased tissue. This approach was applied to multiple lung fields to generate an overall lung texture score (LTS) representing the quantity of diseased lung parenchyma. Using deidentified lung CT scan and pulmonary function test (PFT) data from The Ohio State University Medical Center’s Information Warehouse, we analyzed 71 consecutive CT scans from patients with sarcoidosis for whom simultaneous matching PFTs were available to determine whether the LTS correlated with standard PFT results. Results: We found a high correlation between LTS and FVC, total lung capacity, and diffusing capacity of the lung for carbon monoxide (P < .0001 for all comparisons). Moreover, LTS was equivalent to PFTs for the detection of active lung disease. The image analysis protocol was conducted quickly (< 1 min per study) on a standard laptop computer connected to a publicly available National Institutes of Health ImageJ toolkit. Conclusions: The two-point image analysis tool is highly practical and appears to reliably assess lung disease severity. We predict that this tool will be useful for clinical and research applications. PMID:22628487
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.
Ci, Wenyan; Huang, Yingping
2016-10-17
Visual odometry estimates the ego-motion of an agent (e.g., vehicle and robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg-Marquard method. One of key points for visual odometry is that the feature points selected for the computation should contain inliers as much as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. A circle matching is followed to remove the outliers caused by the mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
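The RANSAC refinement step described above can be illustrated generically. The line-fitting toy below is our own construction, not the paper's 6-DoF objective, but it shows the same pattern: sample a minimal set, score inliers against a tolerance, keep the largest consensus set, and refit on it.

```python
import numpy as np

def ransac_line(x, y, n_iter=200, tol=0.1, rng=None):
    """RANSAC: repeatedly fit a line to a random minimal sample, keep the
    model with the largest inlier set, then refit on those inliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(x.size, dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(x.size, size=2, replace=False)  # minimal sample
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < tol           # score the model
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)  # refit on consensus
    return a, b, best_inliers

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 100)
y = 2.0 * x + 1.0 + 0.02 * rng.standard_normal(100)
y[::10] += 3.0                     # gross outliers (e.g. moving points)
a, b, mask = ransac_line(x, y)
```

In the visual odometry pipeline the same logic runs on feature correspondences: the consensus set feeds the Levenberg-Marquardt minimization of the ego-motion objective.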
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera
Ci, Wenyan; Huang, Yingping
2016-01-01
Visual odometry estimates the ego-motion of an agent (e.g., vehicle and robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera’s 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg–Marquard method. One of key points for visual odometry is that the feature points selected for the computation should contain inliers as much as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade–Lucas–Tomasi (KLT) algorithm. A circle matching is followed to remove the outliers caused by the mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method. PMID:27763508
On the complexity of a combined homotopy interior method for convex programming
NASA Astrophysics Data System (ADS)
Yu, Bo; Xu, Qing; Feng, Guochen
2007-03-01
In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211], a combined homotopy was constructed for solving non-convex programming and convex programming under weaker conditions, without assuming the logarithmic barrier function to be strictly convex or the solution set to be bounded. It was proven that a smooth interior path from an interior point of the feasible set to a K-K-T point of the problem exists. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot solve. However, so far there is no result on their complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, by taking a piecewise technique, under commonly used conditions, polynomiality of a combined homotopy interior point method is given for convex nonlinear programming.
Pfleger, Christopher; Rathi, Prakash Chandra; Klein, Doris L; Radestock, Sebastian; Gohlke, Holger
2013-04-22
For deriving maximal advantage from information on biomacromolecular flexibility and rigidity, results from rigidity analyses must be linked to biologically relevant characteristics of a structure. Here, we describe the Python-based software package Constraint Network Analysis (CNA) developed for this task. CNA functions as a front- and backend to the graph-based rigidity analysis software FIRST. CNA goes beyond the mere identification of flexible and rigid regions in a biomacromolecule in that it (I) provides a refined modeling of thermal unfolding simulations that also considers the temperature-dependence of hydrophobic tethers, (II) allows performing rigidity analyses on ensembles of network topologies, either generated from structural ensembles or by using the concept of fuzzy noncovalent constraints, and (III) computes a set of global and local indices for quantifying biomacromolecular stability. This leads to more robust results from rigidity analyses and extends the application domain of rigidity analyses in that phase transition points ("melting points") and unfolding nuclei ("structural weak spots") are determined automatically. Furthermore, CNA robustly handles small-molecule ligands in general. Such advancements are important for applying rigidity analysis to data-driven protein engineering and for estimating the influence of ligand molecules on biomacromolecular stability. CNA maintains the efficiency of FIRST such that the analysis of a single protein structure takes a few seconds for systems of several hundred residues on a single core. These features make CNA an interesting tool for linking biomacromolecular structure, flexibility, (thermo-)stability, and function. CNA is available from http://cpclab.uni-duesseldorf.de/software for nonprofit organizations.
The Existence of the Solution to One Kind of Algebraic Riccati Equation
NASA Astrophysics Data System (ADS)
Liu, Jianming
2018-03-01
The matrix equation A^T X + XA + XRX + Q = O is called the algebraic Riccati equation, which is very important in automatic control and other engineering applications. Many researchers have studied the solutions to various algebraic Riccati equations, and most of them mainly applied matrix methods, while few used functional analysis theories. This paper studies the existence of the solution to the following kind of algebraic Riccati equation from the functional viewpoint: A^T X + XA + XRX - λX + Q = O. Here X, A, R, Q ∈ ℝ^{n×n}, Q is a symmetric matrix, R is a positive or negative semi-definite matrix, and λ is an arbitrary constant. This paper uses functional approaches such as the fixed point theorem and the contraction mapping principle to provide two sufficient conditions for the solvability of this kind of Riccati equation and to arrive at some relevant conclusions.
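The contraction-mapping idea behind such existence results can be mimicked numerically: freeze the quadratic term X R X at the previous iterate and solve the remaining linear Sylvester equation at each step. The stable A, small R, and λ below are chosen so the map is plausibly a contraction; this is an illustrative sketch of the fixed-point approach, not the paper's proof or conditions.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def riccati_fixed_point(A, R, Q, lam, n_iter=50):
    """Fixed-point iteration for A^T X + X A + X R X - lam*X + Q = 0:
    each step solves the linear Sylvester part with the quadratic
    term frozen at the previous iterate."""
    n = A.shape[0]
    I = np.eye(n)
    X = np.zeros((n, n))
    for _ in range(n_iter):
        # (A^T - lam/2 I) X + X (A - lam/2 I) = -(Q + X R X)
        X = solve_sylvester(A.T - 0.5 * lam * I, A - 0.5 * lam * I,
                            -(Q + X @ R @ X))
    return X

A = np.array([[-3.0, 0.0], [1.0, -4.0]])   # stable A
R = 0.1 * np.eye(2)                        # small positive semi-definite R
Q = np.eye(2)
lam = 0.5
X = riccati_fixed_point(A, R, Q, lam)
residual = A.T @ X + X @ A + X @ R @ X - lam * X + Q
```

When the map is a contraction, the iterates converge and the residual of the full Riccati equation goes to zero.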
Theories on anxiety in Freud and Melanie Klein. Their metapsychological status.
De Bianchedi, E T; Scalozub De Boschan, L; De Cortiñas, L P; De Piccolo, E G
1988-01-01
This paper presents a comparative study of the theories on anxiety formulated by Freud and Melanie Klein, with particular emphasis on the questions of its origin, its meaning for the individual and its function in both theoretical systems. The purpose of this comparative analysis is to offer an instrument which helps frame theoretical discussions in psychoanalysis in an epistemological context. The authors hold that for Freud anxiety is considered one among the various manifestations of mental life which his general theories try to explain, whereas for Melanie Klein anxiety and its destinies occupy a central place in her theories on mental functioning. The differences between the two theories, which the authors describe, especially as to the origin, function and meaning of anxiety, respond partially to the different metapsychological points of view with which both authors focus on mental life--points of view which they have themselves investigated in a previous paper.
Neural field theory of perceptual echo and implications for estimating brain connectivity
NASA Astrophysics Data System (ADS)
Robinson, P. A.; Pagès, J. C.; Gabay, N. C.; Babaie, T.; Mukta, K. N.
2018-04-01
Neural field theory is used to predict and analyze the phenomenon of perceptual echo in which random input stimuli at one location are correlated with electroencephalographic responses at other locations. It is shown that this echo correlation (EC) yields an estimate of the transfer function from the stimulated point to other locations. Modal analysis then explains the observed spatiotemporal structure of visually driven EC and the dominance of the alpha frequency; two eigenmodes of similar amplitude dominate the response, leading to temporal beating and a line of low correlation that runs from the crown of the head toward the ears. These effects result from mode splitting and symmetry breaking caused by interhemispheric coupling and cortical folding. It is shown how eigenmodes obtained from functional magnetic resonance imaging experiments can be combined with temporal dynamics from EC or other evoked responses to estimate the spatiotemporal transfer function between any two points and hence their effective connectivity.
Considering the spatial-scale factor when modelling sustainable land management.
NASA Astrophysics Data System (ADS)
Bouma, Johan
2015-04-01
J. Bouma, Em. Prof. Soil Science, Wageningen University, Netherlands. Modelling soil-plant processes is a necessity when exploring future effects of climate change and innovative soil management on agricultural productivity. Soil data are needed to run models, and traditional soil maps and their associated databases (based on various soil taxonomies) have been widely applied to provide such data, obtained at "representative" points in the field. Pedotransfer functions (PTFs) are used to feed simulation models, statistically relating soil survey data (obtained at a given point in the landscape) to physical parameters for simulation, thus providing a link with soil functionality. Soil science has a basic problem: its object of study is invisible. Only point data are obtained by augering or in pits; only occasionally do roadcuts provide a better view. Extrapolating point data to areas is essential for all applications and presents a basic problem for soil science, because mapping units on soil maps, named for a given soil type, may also contain other soil types, and quantitative information about the composition of soil map units is usually not available. For detailed work at farm level (1:5,000-1:10,000), an alternative procedure is proposed: on-site soil observations are made in a grid pattern with spacings based on a geostatistical analysis. Multi-year simulations are made for each point of the functional properties that are relevant for the case being studied, such as the moisture supply capacity, nitrate leaching, etc., under standardized boundary conditions to allow comparisons. Functional spatial units are derived next by aggregating functional point data. These units, which have successfully functioned as the basis for precision agriculture, do not necessarily correspond with taxonomic units, but when they do, the taxonomic names should be noted.
At the coarser landscape and watershed scales (1:25,000-1:50,000), digital soil mapping can provide soil data for small grids that can be used for modelling, again through pedotransfer functions. There is a risk, however, that digital mapping results in an isolated series of projects that do not increase the knowledge base on soil functionality, e.g. linking taxonomic names (such as soil series) to functionality, allowing predictions of soil behavior at new sites where certain soil series occur. We therefore suggest that, aside from collecting 13 soil characteristics for each grid, as occurs in digital soil mapping, the taxonomic name of the representative soil in the grid is also recorded. At spatial scales of 1:50,000 and smaller, use of taxonomic names becomes ever more attractive, because at such small scales the relations between soil types and landscape features become more pronounced. But in all cases, the selection of procedures should not be science-based but based on the type of questions being asked, including their level of generalization; these questions are quite different at the different spatial-scale levels, and so should be the procedures.
Boonstra, Anne M; Stewart, Roy E; Köke, Albère J A; Oosterwijk, René F A; Swaan, Jeannette L; Schreurs, Karlein M G; Schiphorst Preuper, Henrica R
2016-01-01
Objectives: The 0-10 Numeric Rating Scale (NRS) is often used in pain management. The aims of our study were to determine the cut-off points for mild, moderate, and severe pain in terms of pain-related interference with functioning in patients with chronic musculoskeletal pain, to measure the variability of the optimal cut-off points, and to determine the influence of patients' catastrophizing and their sex on these cut-off points. Methods: 2854 patients were included. Pain was assessed by the NRS, functioning by the Pain Disability Index (PDI) and catastrophizing by the Pain Catastrophizing Scale (PCS). Cut-off point (CP) schemes were tested using ANOVAs with and without the PCS scores or sex as covariates, and with the interaction between CP scheme and PCS score and sex, respectively. The variability of the optimal cut-off point schemes was quantified using a bootstrapping procedure. Results and conclusion: The study showed that NRS scores ≤ 5 correspond to mild, scores of 6-7 to moderate and scores ≥ 8 to severe pain in terms of pain-related interference with functioning. Bootstrapping analysis identified this optimal NRS cut-off point scheme in 90% of the bootstrapping samples. The interpretation of the NRS is independent of sex, but seems to depend on catastrophizing. In patients with a high catastrophizing tendency, the optimal cut-off point scheme equals that for the total study sample, but in patients with a low catastrophizing tendency, NRS scores ≤ 3 correspond to mild, scores of 4-6 to moderate and scores ≥ 7 to severe pain in terms of interference with functioning. In these optimal cut-off schemes, NRS scores of 4 and 5 correspond to moderate interference with functioning for patients with low catastrophizing tendency and to mild interference for patients with high catastrophizing tendency.
Theoretically one would therefore expect that among the patients with NRS scores 4 and 5 there would be a higher average PDI score for those with low catastrophizing than for those with high catastrophizing. However, we found the opposite. The fact that we did not find the same optimal CP scheme in the subgroups with lower and higher catastrophizing tendency may be due to chance variability.
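The scheme-selection-plus-bootstrap design described above can be sketched on synthetic data: score each candidate cut-off scheme by how well its mild/moderate/severe grouping separates the interference (PDI) scores (here a one-way ANOVA F statistic stands in for the paper's ANOVA models), and bootstrap to see how often the same scheme wins. The data, candidate schemes, and selection criterion below are all illustrative.

```python
import numpy as np
from scipy.stats import f_oneway

def best_scheme(nrs, pdi, schemes):
    """Pick the cut-off scheme (c1, c2) whose mild (<=c1) / moderate
    (c1+1..c2) / severe (>c2) grouping best separates PDI, by ANOVA F."""
    best, best_f = None, -np.inf
    for c1, c2 in schemes:
        groups = [pdi[nrs <= c1], pdi[(nrs > c1) & (nrs <= c2)], pdi[nrs > c2]]
        if any(g.size < 2 for g in groups):
            continue
        f = f_oneway(*groups).statistic
        if f > best_f:
            best, best_f = (c1, c2), f
    return best

rng = np.random.default_rng(5)
nrs = rng.integers(0, 11, size=600)
pdi = 5.0 * np.clip(nrs - 5, 0, None) + 10.0 + 4.0 * rng.standard_normal(600)
schemes = [(3, 6), (4, 6), (5, 7)]   # e.g. (5, 7): mild <=5, moderate 6-7, severe >=8
chosen = best_scheme(nrs, pdi, schemes)

# Bootstrap the stability of the chosen scheme (cf. the 90% figure above)
hits = 0
for _ in range(200):
    idx = rng.integers(0, nrs.size, nrs.size)
    hits += best_scheme(nrs[idx], pdi[idx], schemes) == chosen
stability = hits / 200
```

A high `stability` plays the role of the paper's finding that one scheme was optimal in 90% of bootstrap samples.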
Boonstra, Anne M.; Stewart, Roy E.; Köke, Albère J. A.; Oosterwijk, René F. A.; Swaan, Jeannette L.; Schreurs, Karlein M. G.; Schiphorst Preuper, Henrica R.
2016-01-01
Objectives: The 0–10 Numeric Rating Scale (NRS) is often used in pain management. The aims of our study were to determine the cut-off points for mild, moderate, and severe pain in terms of pain-related interference with functioning in patients with chronic musculoskeletal pain, to measure the variability of the optimal cut-off points, and to determine the influence of patients' catastrophizing and their sex on these cut-off points. Methods: In total, 2854 patients were included. Pain was assessed by the NRS, functioning by the Pain Disability Index (PDI) and catastrophizing by the Pain Catastrophizing Scale (PCS). Cut-off point (CP) schemes were tested using ANOVAs with and without the PCS scores or sex as covariates, and with the interaction between CP scheme and PCS score and sex, respectively. The variability of the optimal cut-off point schemes was quantified using a bootstrapping procedure. Results and conclusion: The study showed that NRS scores ≤ 5 correspond to mild, scores of 6–7 to moderate and scores ≥ 8 to severe pain in terms of pain-related interference with functioning. Bootstrapping analysis identified this optimal NRS cut-off point scheme in 90% of the bootstrapping samples. The interpretation of the NRS is independent of sex, but seems to depend on catastrophizing. In patients with a high catastrophizing tendency, the optimal cut-off point scheme equals that for the total study sample, but in patients with a low catastrophizing tendency, NRS scores ≤ 3 correspond to mild, scores of 4–6 to moderate and scores ≥ 7 to severe pain in terms of interference with functioning. In these optimal cut-off schemes, NRS scores of 4 and 5 correspond to moderate interference with functioning for patients with a low catastrophizing tendency and to mild interference for patients with a high catastrophizing tendency.
Theoretically one would therefore expect that among the patients with NRS scores 4 and 5 there would be a higher average PDI score for those with low catastrophizing than for those with high catastrophizing. However, we found the opposite. The fact that we did not find the same optimal CP scheme in the subgroups with lower and higher catastrophizing tendency may be due to chance variability. PMID:27746750
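The bootstrap identification of an optimal cut-off point scheme can be sketched as follows. This is a minimal Python illustration on synthetic data, not the authors' analysis: each candidate scheme is scored by a one-way ANOVA F statistic on the disability scores, and the winning scheme is tallied over bootstrap resamples. All variable names, parameter values, and the synthetic data are assumptions.

```python
import numpy as np

def f_statistic(groups):
    """One-way ANOVA F statistic for a list of 1-D arrays."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

def best_scheme(nrs, pdi, schemes):
    """Pick the (mild_max, moderate_max) scheme maximizing between-group F."""
    best, best_f = None, -np.inf
    for mild_max, mod_max in schemes:
        groups = [pdi[nrs <= mild_max],
                  pdi[(nrs > mild_max) & (nrs <= mod_max)],
                  pdi[nrs > mod_max]]
        if any(len(g) < 2 for g in groups):
            continue
        f = f_statistic(groups)
        if f > best_f:
            best, best_f = (mild_max, mod_max), f
    return best

rng = np.random.default_rng(0)
# Synthetic data: disability (PDI-like) steps up after NRS 5 and again after 7.
nrs = rng.integers(0, 11, size=2000)
pdi = np.where(nrs <= 5, 20.0, np.where(nrs <= 7, 35.0, 50.0)) \
      + rng.normal(0, 5, size=2000)
schemes = [(m, M) for m in range(2, 7) for M in range(m + 1, 9)]

picks = []
for _ in range(200):                       # bootstrap resamples
    idx = rng.integers(0, len(nrs), len(nrs))
    picks.append(best_scheme(nrs[idx], pdi[idx], schemes))
share = picks.count((5, 7)) / len(picks)   # stability of the chosen scheme
```

The fraction `share` plays the role of the 90% stability figure quoted in the abstract: how often the same scheme is selected across resamples.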
NASA Astrophysics Data System (ADS)
Mock, Alyssa; Korlacki, Rafał; Briley, Chad; Darakchieva, Vanya; Monemar, Bo; Kumagai, Yoshinao; Goto, Ken; Higashiwaki, Masataka; Schubert, Mathias
2017-12-01
We employ an eigenpolarization model including the description of direction-dependent excitonic effects for rendering critical-point structures within the dielectric function tensor of monoclinic β-Ga2O3, yielding a comprehensive analysis of generalized ellipsometry data obtained over 0.75–9 eV. The eigenpolarization model permits a complete description of the dielectric response. We obtain, for single-electron and excitonic band-to-band transitions, anisotropic critical-point model parameters including their polarization vectors within the monoclinic lattice. We compare our experimental analysis with results from density functional theory calculations performed using the Gaussian-attenuating Perdew-Burke-Ernzerhof hybrid density functional. We present and discuss the order of the fundamental direct band-to-band transitions and their polarization selection rules, the electron and hole effective mass parameters for the three lowest band-to-band transitions, and their excitonic contributions. We find that the effective masses for holes are highly anisotropic and correlate with the selection rules for the fundamental band-to-band transitions. The observed transitions are polarized close to the direction of the lowest hole effective mass for the valence band participating in the transition.
N-point correlation functions in the CfA and SSRS redshift distribution of galaxies
NASA Technical Reports Server (NTRS)
Gaztanaga, Enrique
1992-01-01
Using counts in cells, we estimate the volume-averaged N-point galaxy correlation functions for N = 2, 3, and 4, in redshift samples of the CfA and SSRS catalogs. Volume-limited samples of different sizes are used to study the uncertainties at different scales, the shot noise, and boundary effects. The hierarchical constants S3 and S4 agree well in all samples in CfA and SSRS, with average S3 = 1.94 ± 0.07 and S4 = 4.56 ± 0.53. We compare these results with estimates obtained from angular catalogs and recent analyses of IRAS samples. The amplitudes S_J seem larger in real space than in redshift space, although the values from the angular analysis correspond to smaller scales, where we might expect larger nonperturbative effects. It is also found that S3 and S4 are smaller for IRAS than for optical galaxies. This, together with the fact that IRAS galaxies have a smaller amplitude for the above correlation functions, indicates that the density fluctuations of IRAS galaxies cannot be simply proportional to the density fluctuations of optical galaxies, i.e., biasing between them has to be nonlinear.
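The counts-in-cells estimators behind such hierarchical amplitudes can be sketched with the standard shot-noise-corrected central moments. This is a generic textbook sketch, not the paper's pipeline; as a check, negative-binomial counts are exactly hierarchical with xi2 = 1/r and S3 = 2, so the estimator should recover those values. All parameter choices are assumptions.

```python
import numpy as np

def hierarchical_amplitudes(counts):
    """Shot-noise-corrected volume-averaged correlations from cell counts:
    xi2 = (mu2 - N)/N^2,  xi3 = (mu3 - 3 N^2 xi2 - N)/N^3,  S3 = xi3/xi2^2."""
    nbar = counts.mean()
    mu2 = np.mean((counts - nbar) ** 2)
    mu3 = np.mean((counts - nbar) ** 3)
    xi2 = (mu2 - nbar) / nbar ** 2
    xi3 = (mu3 - 3 * nbar ** 2 * xi2 - nbar) / nbar ** 3
    return xi2, xi3 / xi2 ** 2

rng = np.random.default_rng(2)

# Pure Poisson counts: clustering should vanish after shot-noise correction.
xi2_poisson, _ = hierarchical_amplitudes(rng.poisson(100, 100000).astype(float))

# Negative-binomial counts (mean 100, r = 5): xi2 = 1/r = 0.2 and S3 = 2.
nb = rng.negative_binomial(5, 5 / 105, 100000).astype(float)
xi2_nb, s3_nb = hierarchical_amplitudes(nb)
```

The Poisson case illustrates why the shot-noise subtraction matters: without the `- nbar` term, uncorrelated counts would masquerade as clustering.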
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. The estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. The model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, selected because of the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
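The "ratio of Weibull distributions times a prior outcome value" stepping idea can be illustrated schematically. This sketch uses the Weibull survival function for the ratio; that choice, and every parameter value, is an assumption for illustration, not the paper's fitted SPRE model.

```python
import math

def weibull_sf(t, shape, scale):
    """Two-parameter Weibull survival function S(t) = exp(-(t/scale)^shape)."""
    return math.exp(-((t / scale) ** shape))

def spre_forecast(y0, t0, horizon, shape, scale):
    """Step an outcome forward in time: y_{t+1} = y_t * S(t+1)/S(t).
    The ratio telescopes, so y_{t0+h} = y0 * S(t0+h)/S(t0)."""
    y, path = y0, []
    for t in range(t0, t0 + horizon):
        y *= weibull_sf(t + 1, shape, scale) / weibull_sf(t, shape, scale)
        path.append(y)
    return path

# Hypothetical: body weight (kg) from a change point at week 3, 20-week horizon.
path = spre_forecast(y0=95.0, t0=3, horizon=20, shape=1.2, scale=120.0)
```

Because the ratio telescopes, the long-term estimate depends only on the shape and scale parameters estimated at the change point and on the prior outcome value, which is the core of the SPRE idea.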
NASA Astrophysics Data System (ADS)
Coletta, Vincent P.; Evans, Jonathan
2008-10-01
We analyze the motion of a gravity powered model race car on a downhill track of variable slope. Using a simple algebraic function to approximate the height of the track as a function of the distance along the track, and taking account of the rotational energy of the wheels, rolling friction, and air resistance, we obtain analytic expressions for the velocity and time of the car as functions of the distance traveled along the track. Photogates are used to measure the time at selected points along the track, and the measured values are in excellent agreement with the values predicted from theory. The design and analysis of model race cars provides a good application of principles of mechanics and suggests interesting projects for classes in introductory and intermediate mechanics.
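The energy balance described above can be sketched numerically. The paper derives analytic expressions; this sketch merely integrates d(v^2)/dx along the track, with wheel rotation folded into an effective-mass factor (1 + beta). The height profile and all parameter values are toy assumptions.

```python
import math

def terminal_speed(h, x_end, n=20000, beta=0.4, mu=0.01, k=0.0005, g=9.81):
    """Integrate d(v^2)/dx = 2/(1+beta) * (-g h'(x) - mu g - k v^2):
    gravity minus rolling friction and quadratic drag per unit mass,
    with rotational energy of the wheels via the factor (1 + beta)."""
    dx = x_end / n
    v2 = 0.0
    for i in range(n):
        x = i * dx
        dhdx = (h(x + 1e-6) - h(x)) / 1e-6        # numerical slope
        v2 = max(v2 + (2.0 / (1.0 + beta)) * (-g * dhdx - mu * g - k * v2) * dx, 0.0)
    return math.sqrt(v2)

h = lambda x: 1.0 / (1.0 + x)     # toy algebraic height profile (metres)
v_end = terminal_speed(h, x_end=2.0)
```

The losses guarantee that the predicted speed stays below the ideal frictionless, non-rotating bound sqrt(2 g Δh), which is a useful sanity check against photogate measurements.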
Webb, Laura E.; Bak Jensen, Margit; Engel, Bas; van Reenen, Cornelis G.; Gerrits, Walter J. J.; de Boer, Imke J. M.; Bokkers, Eddie A. M.
2014-01-01
The present study aimed to quantify calves' (Bos taurus) preference for long versus chopped hay and straw, and hay versus straw, using cross point analysis of double demand functions, in a context where energy intake was not a limiting factor. Nine calves, fed milk replacer and concentrate, were trained to work for roughage rewards from two simultaneously available panels. The cost (number of muzzle presses) required on the panels varied in each session (left panel/right panel): 7/35, 14/28, 21/21, 28/14, 35/7. Demand functions were estimated from the proportion of rewards achieved on one panel relative to the total number of rewards achieved in one session. Cross points (cp) were calculated as the cost at which an equal number of rewards was achieved from both panels. The deviation of the cp from the midpoint (here 21) indicates the strength of the preference. Calves showed a preference for long versus chopped hay (cp = 14.5; P = 0.004), and for hay versus straw (cp = 38.9; P = 0.004), both of which improve rumen function. Long hay may stimulate chewing more than chopped hay, and the preference for hay versus straw could be related to hedonic characteristics. No preference was found for chopped versus long straw (cp = 20.8; P = 0.910). These results could be used to improve the welfare of calves in production systems; for example, in systems where calves are fed hay along with high energy concentrate, providing long hay instead of chopped could promote roughage intake, rumen development, and rumination. PMID:24558426
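Cross-point estimation from reward proportions can be sketched as follows. The reward shares below are hypothetical, and a simple linear fit stands in for the paper's double demand-function estimation; only the mechanics of locating the cost at which shares are equal are illustrated.

```python
import numpy as np

left_cost = np.array([7, 14, 21, 28, 35])          # presses required, left panel
p_left = np.array([0.90, 0.75, 0.50, 0.30, 0.10])  # hypothetical reward shares

# Fit the reward share as a linear function of left-panel cost and solve
# p = 0.5: the cost at which equal rewards come from both panels.
slope, intercept = np.polyfit(left_cost, p_left, 1)
cross_point = (0.5 - intercept) / slope
```

A cross point above the session midpoint (21 presses) would indicate a preference for the left-panel roughage; below it, a preference for the right-panel roughage, mirroring the cp = 14.5 and cp = 38.9 results in the abstract.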
Evaluation of Liver Function After Proton Beam Therapy for Hepatocellular Carcinoma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizumoto, Masashi; Okumura, Toshiyuki; Hashimoto, Takayuki
Purpose: Our previous results for treatment of hepatocellular carcinoma with proton beam therapy (PBT) revealed excellent local control. In this study, we focused on the impact of PBT on normal liver function. Methods and Materials: The subjects were 259 patients treated with PBT at the University of Tsukuba between January 2001 and December 2007. We evaluated the Child-Pugh score pretreatment, on the final day of PBT, and 6, 12, and 24 months after treatment with PBT. Patients who had disease progression or who died with tumor progression at each evaluation point were excluded from the analysis to rule out an effect of tumor progression. An increase in the Child-Pugh score of 1 or more was defined as an adverse event. Results: Of the 259 patients, 241 had no disease progression on the final day of PBT, and 91 had no progression within 12 months after PBT. In univariate analysis, the percentage volumes of normal liver receiving at least 0, 10, 20, and 30 GyE in PBT (V0, V10, V20, and V30) were significantly associated with an increase of Child-Pugh score at 12 months after PBT. Of the 91 patients evaluated at 12 months, 66 had no increase of Child-Pugh score, 15 had a 1-point increase, and 10 had an increase of ≥2 points. By the Youden index, the optimal cut-offs for V0, V10, V20, and V30 were 30%, 20%, 26%, and 18%, respectively. Conclusion: Our findings indicate that liver function after PBT is significantly related to the percentage volume of normal liver that is not irradiated. This suggests that further study of the relationship between liver function and PBT is required.
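Youden-index cut-off selection, as used above for the dose-volume metrics, works as follows. The data here are synthetic and chosen only to illustrate the mechanics; nothing below reproduces the study's values.

```python
import numpy as np

def youden_cutoff(values, event):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    best_cut, best_j = None, -1.0
    for c in np.unique(values):
        pred = values >= c
        sens = np.mean(pred[event == 1])
        spec = np.mean(~pred[event == 0])
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = c, j
    return best_cut, best_j

rng = np.random.default_rng(3)
# Hypothetical dose-volume metric (e.g. % of normal liver receiving >= 20 GyE);
# patients with a Child-Pugh increase tend to have larger irradiated volumes.
v20 = np.concatenate([rng.normal(18, 5, 200), rng.normal(32, 5, 60)])
event = np.concatenate([np.zeros(200, dtype=int), np.ones(60, dtype=int)])
cut, j = youden_cutoff(v20, event)
```

Unlike an accuracy-maximizing threshold, the Youden criterion weights sensitivity and specificity equally, which is why it is popular for defining dose-volume constraints from imbalanced outcome data.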
Nakamura, Misa; Tazaki, Fumie; Nomura, Kazuki; Takano, Taeko; Hashimoto, Masashi; Hashizume, Hiroshi; Kamei, Ichiro
2017-01-01
In our worldwide aging society, elderly people should maintain cognitive and physical function to help avoid health problems. Dementia is a major brain disease among elderly people, characterized by cognitive impairment. The locomotive syndrome (LS) refers to a condition in which people require healthcare services because of problems associated with locomotion. The purpose of this study was to determine the association between cognitive impairment and LS. Study participants were 142 healthy elderly female volunteers living in a rural area in Japan. Cognitive function was assessed using the Mini-Mental State Examination (MMSE). A score of ≤26 points on the MMSE was used to indicate categorically defined poor cognitive performance (cognitive impairment). LS was defined by a score ≥16 points, and non-LS as <16 points, on the 25-question Geriatric Locomotive Function Scale (GLFS-25). Twenty-one participants (14.8%) had an MMSE score ≤26, and 19.0% were found to have LS. Compared with the MMSE >26 group, the ≤26 group was significantly older, had a higher percentage of body fat, and a higher GLFS-25 score. Those with LS were significantly older, had a higher body mass index, a higher percentage of body fat, and a lower MMSE score. Participants in the LS group had higher odds of cognitive impairment than those without LS [odds ratio (OR) = 3.08] by logistic regression analysis adjusted for age. Furthermore, participants with GLFS-25 scores ≥6 had higher odds of cognitive impairment than those with a GLFS-25 score <6 by logistic regression analysis adjusted for age (OR = 4.44), and for age and percent body fat (OR = 4.12). These findings suggest that a strong relationship exists between the early stage of decreased motor function and cognitive impairment.
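The ORs above come from age-adjusted logistic regression; as a simpler illustration of the underlying quantity, a crude 2×2-table odds ratio with a Wald confidence interval can be sketched. The counts below are hypothetical (merely on the scale of the 142-participant study), and no age adjustment is performed.

```python
import numpy as np

def crude_odds_ratio(exposed, outcome):
    """Crude OR with a 95% Wald CI from the 2x2 table (no adjustment)."""
    a = np.sum(exposed & outcome)       # LS and cognitive impairment
    b = np.sum(exposed & ~outcome)      # LS, no impairment
    c = np.sum(~exposed & outcome)      # no LS, impairment
    d = np.sum(~exposed & ~outcome)     # no LS, no impairment
    oratio = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log-OR
    ci = np.exp(np.log(oratio) + np.array([-1.96, 1.96]) * se)
    return oratio, ci

# Hypothetical counts: 27 with LS (10 impaired), 115 without LS (11 impaired).
ls = np.array([True] * 27 + [False] * 115)
impaired = np.array([True] * 10 + [False] * 17 + [True] * 11 + [False] * 104)
oratio, ci = crude_odds_ratio(ls, impaired)
```

A CI whose lower bound exceeds 1 corresponds to the significant association reported; adjusting for age, as in the study, requires a full logistic model rather than this table.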
Gat-Viks, Irit; Geiger, Tamar; Barbi, Mali; Raini, Gali; Elroy-Stein, Orna
2015-08-01
Vanishing white matter (VWM) is a recessive neurodegenerative disease caused by mutations in translation initiation factor eIF2B and leading to progressive brain myelin deterioration, secondary axonal damage, and death in early adolescence. Eif2b5(R132H/R132H) mice exhibit delayed developmental myelination, mild early neurodegeneration and a robust remyelination defect in response to cuprizone-induced demyelination. In the current study we used Eif2b5(R132H/R132H) mice for mass-spectrometry analyses, to follow the changes in brain protein abundance in normal- versus cuprizone-diet fed mice during the remyelination recovery phase. Analysis of proteome profiles suggested that dysregulation of mitochondrial functions, altered proteasomal activity and impaired balance between protein synthesis and degradation play a role in VWM pathology. Consistent with these findings, we detected elevated levels of reactive oxygen species in mutant-derived primary fibroblasts and reduced 20S proteasome activity in mutant brain homogenates. These observations highlight the importance of tight translational control to precise coordination of processes involved in myelin formation and regeneration and point at cellular functions that may contribute to VWM pathology. Eif2b5(R132H/R132H) mouse model for vanishing white matter (VWM) disease was used for mass spectrometry of brain proteins at two time points under normal conditions and along recovery from cuprizone-induced demyelination. Comparisons of proteome profiles revealed the importance of mitochondrial function and tight coordination between protein synthesis and degradation to myelination formation and regeneration, pointing at cellular functions that contribute to VWM pathology. © 2015 International Society for Neurochemistry.
Limit cycles and conformal invariance
NASA Astrophysics Data System (ADS)
Fortin, Jean-François; Grinstein, Benjamín; Stergiou, Andreas
2013-01-01
There is a widely held belief that conformal field theories (CFTs) require zero beta functions. Nevertheless, the work of Jack and Osborn implies that the beta functions are not actually the quantities that decide conformality, but until recently no such behavior had been exhibited. Our recent work has led to the discovery of CFTs with nonzero beta functions, more precisely CFTs that live on recurrent trajectories, e.g., limit cycles, of the beta-function vector field. To demonstrate this we study the S function of Jack and Osborn. We use Weyl consistency conditions to show that it vanishes at fixed points and agrees with the generator Q of limit cycles on them. Moreover, we compute S to third order in perturbation theory, and explicitly verify that it agrees with our previous determinations of Q. A byproduct of our analysis is that, in perturbation theory, unitarity and scale invariance imply conformal invariance in four-dimensional quantum field theories. Finally, we study some properties of these new, "cyclic" CFTs, and point out that the a-theorem still governs the asymptotic behavior of renormalization-group flows.
Kumar, Rajeev; Pitcher, Tony J.; Varkey, Divya A.
2017-01-01
We present a comprehensive analysis of the estimation of fisheries Maximum Sustainable Yield (MSY) reference points using an ecosystem model built for Mille Lacs Lake, the second largest lake within Minnesota, USA. Data from single-species modelling output, extensive annual sampling for species abundances, annual catch-survey data, stomach-content analysis for predator-prey interactions, and expert opinions were brought together within the framework of an Ecopath with Ecosim (EwE) ecosystem model. An increase in the lake water temperature was observed in the last few decades; therefore, we also incorporated a temperature forcing function in the EwE model to capture the influences of changing temperature on the species composition and food web. The EwE model was fitted to abundance and catch time-series for the period 1985 to 2006. Using the ecosystem model, we estimated reference points for most of the fished species in the lake at single-species as well as ecosystem levels, with and without considering the influence of temperature change; our analysis therefore investigated the trophic and temperature effects on the reference points. The paper concludes that reference points such as MSY are not stationary, but change when (1) environmental conditions alter species productivity and (2) fishing on predators alters the compensatory response of their prey. Thus, it is necessary for management to re-estimate or re-evaluate the reference points when changes in environmental conditions and/or major shifts in species abundance or community structure are observed. PMID:28957387
Deepak, Kishore K; Al-Umran, Khalid Umran; Al-Sheikh, Mona H; Dkoli, B V; Al-Rubaish, Abdullah
2015-01-01
The functionality of distracters in a multiple choice question plays a very important role. We examined the frequency and impact of functioning and non-functioning distracters on the psychometric properties of 5-option items in clinical disciplines. We analyzed item statistics of 1115 multiple choice questions from 15 summative assessments of undergraduate medical students and classified the items into five groups by their number of non-functioning distracters. We analyzed the effect of varying degrees of non-functionality, ranging from 0 to 4, on test reliability, difficulty index, discrimination index and point biserial correlation. The non-functionality of distracters inversely affected the test reliability and quality of items in a predictable manner. The non-functioning distracters made the items easier and lowered the discrimination index significantly. Three non-functional distracters in a 5-option MCQ significantly affected all psychometric properties (p < 0.05). The corrected point biserial correlation revealed that items with 3 functional options were psychometrically as effective as 5-option items. Our study reveals that a multiple choice question with 3 functional options provides the lowermost limit of an item format that has adequate psychometric properties. Tests containing items with fewer functioning options have significantly lower reliability. Distracter function analysis and revision of non-functioning distracters can serve as important methods to improve the psychometrics and reliability of assessment.
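The item statistics involved (difficulty index, point-biserial correlation, and the common "<5% selection" rule for a non-functioning distracter) can be sketched for a single item. The responses below are simulated, and the 5% threshold is one conventional choice, not necessarily the study's exact criterion.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
# Hypothetical single item, key "A"; options D and E are rarely chosen.
responses = rng.choice(list("ABCDE"), size=n, p=[0.65, 0.20, 0.11, 0.025, 0.015])
correct = (responses == "A").astype(float)
total = 50 + 10 * correct + rng.normal(0, 5, n)   # test score, higher if correct

p = correct.mean()                                 # difficulty index
m1, m0 = total[correct == 1].mean(), total[correct == 0].mean()
rpb = (m1 - m0) / total.std() * np.sqrt(p * (1 - p))  # point-biserial

freq = {opt: float(np.mean(responses == opt)) for opt in "ABCDE"}
nonfunctional = [o for o in "BCDE" if freq[o] < 0.05]  # <5% selection rule
```

Running this kind of tally across an item bank is exactly the distracter function analysis the abstract recommends for flagging items to revise.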
Cognitive components of self esteem for individuals with severe mental illness.
Blankertz, L
2001-10-01
In a sample of 182 individuals with severe mental illness, the applicability of reflected appraisals and self-enhancement theories as explanations for global self-esteem was examined at two time points on components of stigma, mastery, overall functioning, education, and job prestige. Path analysis demonstrated that the two theories work independently; and that stigma, mastery, and overall functioning are significant, persist over time, and have an enduring effect on self-esteem.
What's the point of the type III secretion system needle?
Blocker, Ariel J.; Deane, Janet E.; Veenendaal, Andreas K. J.; Roversi, Pietro; Hodgkinson, Julie L.; Johnson, Steven; Lea, Susan M.
2008-01-01
Recent work by several groups has significantly expanded our knowledge of the structure, regulation of assembly, and function of components of the extracellular portion of the type III secretion system (T3SS) of Gram-negative bacteria. This perspective presents a structure-informed analysis of functional data and discusses three nonmutually exclusive models of how a key aspect of T3SS biology, the sensing of host cells, may be performed. PMID:18458349
Extracting neuronal functional network dynamics via adaptive Granger causality analysis.
Sheikhattar, Alireza; Miran, Sina; Liu, Ji; Fritz, Jonathan B; Shamma, Shihab A; Kanold, Patrick O; Babadi, Behtash
2018-04-24
Quantifying the functional relations between the nodes in a network based on local observations is a key challenge in studying complex systems. Most existing time series analysis techniques for this purpose provide static estimates of the network properties, pertain to stationary Gaussian data, or do not take into account the ubiquitous sparsity in the underlying functional networks. When applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics, they thus hinder a precise statistical characterization of the dynamic neuronal functional networks underlying adaptive behavior. We develop a dynamic estimation and inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We demonstrate the utility of our proposed paradigm through theoretical analysis, algorithm development, and application to synthetic and real data. Application of our techniques to two-photon Ca2+ imaging experiments from the mouse auditory cortex reveals unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatiotemporal resolution. Our analysis of simultaneous recordings from the ferret auditory and prefrontal cortical areas suggests evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior.
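The Granger-causality notion at the core of the paradigm can be sketched in its simplest static form: compare the residual variance of an autoregressive model of one signal with and without lags of a second signal. This least-squares sketch deliberately omits the paper's adaptive filtering, sparsity, and point-process machinery; the simulated data and all parameters are assumptions.

```python
import numpy as np

def lagmat(v, p):
    """Columns of lag-1..lag-p values of v, aligned with v[p:]."""
    n = len(v)
    return np.column_stack([v[p - k : n - k] for k in range(1, p + 1)])

def granger_stat(x, y, p=2):
    """Log ratio of residual variances: AR(p) model of y versus the same
    model augmented with p lags of x (a static, non-adaptive test)."""
    n = len(y)
    Y = y[p:]
    X_r = np.column_stack([np.ones(n - p), lagmat(y, p)])
    X_f = np.column_stack([X_r, lagmat(x, p)])
    res_r = Y - X_r @ np.linalg.lstsq(X_r, Y, rcond=None)[0]
    res_f = Y - X_f @ np.linalg.lstsq(X_f, Y, rcond=None)[0]
    return float(np.log(res_r.var() / res_f.var()))

rng = np.random.default_rng(7)
n = 2000
x = rng.normal(size=n)
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):                 # x drives y with a one-step delay
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + e[t]

g_xy = granger_stat(x, y)             # clearly positive: x Granger-causes y
g_yx = granger_stat(y, x)             # near zero: no reverse influence
```

The paper's contribution is to make this comparison adaptive, sparse, and valid for point-process (spiking) data, so the statistic can be tracked through time rather than computed once.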
Loop transfer recovery for general nonminimum phase discrete time systems. I - Analysis
NASA Technical Reports Server (NTRS)
Chen, Ben M.; Saberi, Ali; Sannuti, Peddapullaiah; Shamash, Yacov
1992-01-01
A complete analysis of loop transfer recovery (LTR) for general nonstrictly proper, not necessarily minimum phase discrete time systems is presented. Three different observer-based controllers, namely, 'prediction estimator' and full- or reduced-order type 'current estimator' based controllers, are used. The analysis corresponding to all these three controllers is unified into a single mathematical framework. The LTR analysis given here focuses on three fundamental issues: (1) the recoverability of a target loop when it is arbitrarily given, (2) the recoverability of a target loop while taking into account its specific characteristics, and (3) the establishment of necessary and sufficient conditions on the given system so that it has at least one recoverable target loop transfer function or sensitivity function. Various differences that arise in LTR analysis of continuous and discrete systems are pointed out.
Transforming Functions by Rescaling Axes
ERIC Educational Resources Information Center
Ferguson, Robert
2017-01-01
Students are often asked to plot a generalised parent function from their knowledge of a parent function. One approach is to sketch the parent function, choose a few points on the parent function curve, transform and plot these points, and use the transformed points as a guide to sketching the generalised parent function. Another approach is to…
Measuring Aggregation of Events about a Mass Using Spatial Point Pattern Methods
Smith, Michael O.; Ball, Jackson; Holloway, Benjamin B.; Erdelyi, Ferenc; Szabo, Gabor; Stone, Emily; Graham, Jonathan; Lawrence, J. Josh
2017-01-01
We present a methodology that detects event aggregation about a mass surface using 3-dimensional study regions with a point pattern and a mass present. The Aggregation about a Mass function determines aggregation, randomness, or repulsion of events with respect to the mass surface. Our method closely resembles Ripley’s K function but is modified to discern the pattern about the mass surface. We briefly state the definition and derivation of Ripley’s K function and explain how the Aggregation about a Mass function is different. We develop the novel function according to the definition: the Aggregation about a Mass function times the intensity is the expected number of events within a distance h of a mass. Special consideration of edge effects is taken in order to make the function invariant to the location of the mass within the study region. Significance of aggregation or repulsion is determined using simulation envelopes. A simulation study is performed to inform researchers how the Aggregation about a Mass function performs under different types of aggregation. Finally, we apply the Aggregation about a Mass function to neuroscience as a novel analysis tool by examining the spatial pattern of neurotransmitter release sites as events about a neuron. PMID:29046865
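The counting idea behind the Aggregation about a Mass function can be sketched without the paper's edge corrections: measure the fraction of events within distance h of the mass surface and compare it with a Monte Carlo envelope under complete spatial randomness (CSR). The spherical mass, the unit-cube study region, and all parameters below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

def frac_within(points, center, radius, h):
    """Fraction of events within distance h of a spherical mass surface."""
    d = np.abs(np.linalg.norm(points - center, axis=1) - radius)
    return float(np.mean(d <= h))

center, radius, h = np.array([0.5, 0.5, 0.5]), 0.2, 0.05

# Aggregated pattern: 300 events scattered tightly about the sphere surface.
r = radius + rng.normal(0, 0.02, 300)
u = rng.normal(size=(300, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
events = center + r[:, None] * u

obs = frac_within(events, center, radius, h)

# 99 CSR simulations in the unit cube give a Monte Carlo envelope.
sims = [frac_within(rng.uniform(0, 1, (300, 3)), center, radius, h)
        for _ in range(99)]
aggregated = obs > max(sims)          # outside the envelope: aggregation
```

An observed fraction above the envelope maximum indicates aggregation about the mass; below the minimum, repulsion. The paper's function additionally normalizes by intensity and corrects for the mass's position relative to the study-region boundary.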
Contrasting Causatives: A Minimalist Approach
ERIC Educational Resources Information Center
Tubino Blanco, Mercedes
2010-01-01
This dissertation explores the mechanisms behind the linguistic expression of causation in English, Hiaki (Uto-Aztecan) and Spanish. Pylkkanen's (2002, 2008) analysis of causatives as dependent on the parameterization of the functional head v[subscript CAUSE] is chosen as a point of departure. The studies conducted in this dissertation confirm…
NASA Astrophysics Data System (ADS)
Aoun, Bachir; Yu, Cun; Fan, Longlong; Chen, Zonghai; Amine, Khalil; Ren, Yang
2015-04-01
A generalized method is introduced to extract critical information from series of ranked correlated data. The method is generally applicable to all types of spectra evolving as a function of any arbitrary parameter. This approach is based on correlation functions and statistical scedasticity formalism. Numerous challenges in analyzing high throughput experimental data can be tackled using the herein proposed method. We applied this method to understand the reactivity pathway and formation mechanism of a Li-ion battery cathode material during high temperature synthesis using in-situ high-energy X-ray diffraction. We demonstrate that Pearson's correlation function can easily unravel all major phase transitions and, more importantly, the minor structural changes which cannot be revealed by conventionally inspecting the series of diffraction patterns. Furthermore, a two-dimensional (2D) reactivity pattern calculated as the scedasticity along all measured reciprocal space of all successive diffraction pattern pairs unveils clearly the structural evolution path and the active areas of interest during the synthesis. The methods described here can be readily used for on-the-fly data analysis during various in-situ operando experiments in order to quickly evaluate and optimize experimental conditions, as well as for post data analysis and large data mining where a considerable amount of data hinders the feasibility of the investigation through point-by-point inspection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aoun, Bachir; Yu, Cun; Fan, Longlong
A generalized method is introduced to extract critical information from series of ranked correlated data. The method is generally applicable to all types of spectra evolving as a function of any arbitrary parameter. This approach is based on correlation functions and statistical scedasticity formalism. Numerous challenges in analyzing high throughput experimental data can be tackled using the herein proposed method. We applied this method to understand the reactivity pathway and formation mechanism of a Li-ion battery cathode material during high temperature synthesis using in-situ high-energy X-ray diffraction. We demonstrate that Pearson's correlation function can easily unravel all major phase transitions and, more importantly, the minor structural changes which cannot be revealed by conventionally inspecting the series of diffraction patterns. Furthermore, a two-dimensional (2D) reactivity pattern calculated as the scedasticity along all measured reciprocal space of all successive diffraction pattern pairs unveils clearly the structural evolution path and the active areas of interest during the synthesis. The methods described here can be readily used for on-the-fly data analysis during various in-situ operando experiments in order to quickly evaluate and optimize experimental conditions, as well as for post data analysis and large data mining where a considerable amount of data hinders the feasibility of the investigation through point-by-point inspection.
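The successive-pattern Pearson correlation described above can be sketched on synthetic spectra: a dip in the correlation between consecutive patterns flags a transition. The 2D scedasticity map is omitted here, and the single-peak data below are an assumption purely for illustration.

```python
import numpy as np

def successive_correlation(frames):
    """Pearson r between each pair of consecutive patterns (rows)."""
    return np.array([np.corrcoef(frames[i], frames[i + 1])[0, 1]
                     for i in range(len(frames) - 1)])

q = np.linspace(0.0, 10.0, 500)                 # scattering-vector grid
# Synthetic series: a single peak that shifts position at frame 50.
frames = np.array([np.exp(-(q - (3.0 if i < 50 else 5.0)) ** 2)
                   for i in range(100)])

r = successive_correlation(frames)
transition = int(np.argmin(r))                   # pair spanning the change
```

Within a phase the correlation stays near 1; the minimum localizes the phase transition without inspecting patterns one by one, which is the point of the method for large in-situ datasets.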
Preliminary Design and Analysis of the GIFTS Instrument Pointing System
NASA Technical Reports Server (NTRS)
Zomkowski, Paul P.
2003-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) instrument is the next-generation spectrometer for remote sensing weather satellites. The GIFTS instrument will be used to perform scans of the Earth's atmosphere by assembling a series of fields of view (FOVs) into a larger pattern. This is achieved by step-scanning the instrument FOV in a contiguous fashion across any desired portion of the visible Earth. A 2.3 arc second pointing stability, with respect to the scanning instrument, must be maintained for the duration of the FOV scan. A star tracker producing attitude data at a 100 Hz rate will be used by the autonomous pointing algorithm to precisely track target FOVs on the surface of the Earth. The main objective is to validate the pointing algorithm in the presence of spacecraft disturbances and determine acceptable disturbance limits from expected noise sources. Proof-of-concept validation of the pointing system algorithm is carried out with a full system simulation developed using Matlab Simulink. Models for the following components function within the full system simulation: inertial reference unit (IRU), attitude control system (ACS), reaction wheels, star tracker, and mirror controller. With the spacecraft orbital position and attitude maintained to within specified limits, the pointing algorithm receives quaternion, ephemeris, and initialization data that are used to construct the required mirror pointing commands at a 100 Hz rate. This comprehensive simulation will also aid in obtaining a thorough understanding of spacecraft disturbances and other sources of pointing system errors. Parameter sensitivity studies and disturbance analysis will be used to obtain limits of operability for the GIFTS instrument. The culmination of this simulation development and analysis will be used to validate the specified performance requirements outlined for this instrument.
Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.
Kärkkäinen, Salme; Lantuéjoul, Christian
2007-10-01
We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonable high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximative, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further experimented on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.
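The directional variograms at the heart of this orientation analysis can be sketched on a toy greyscale image. The relation between the scaled variogram and the point intensity is the paper's subject and is not reproduced here; the sketch only shows that the variogram is minimal along the fibre direction, which is what the orientation estimate exploits. The stripe image and offsets are assumptions.

```python
import numpy as np

def scaled_variogram(img, dy, dx):
    """Half the mean squared grey-level difference between pixel pairs
    offset by (dy, dx); dy, dx >= 0 in this minimal sketch."""
    h, w = img.shape
    a = img[:h - dy, :w - dx]
    b = img[dy:, dx:]
    return 0.5 * float(np.mean((a - b) ** 2))

# Synthetic fibre image: horizontal stripes (fibres run left-right).
rows = np.arange(64)
img = np.tile((np.sin(0.7 * rows) > 0).astype(float)[:, None], (1, 64))

g_vertical = scaled_variogram(img, 1, 0)    # across the fibres: large
g_horizontal = scaled_variogram(img, 0, 1)  # along the fibres: zero
```

Sampling such variograms over many directions and reading off where they are small (or, via the paper's result, converting them to point intensities) yields the orientation distribution of the unobservable thin fibres.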
Adeeb A. Rahman; Thomas J. Urbanik; Mustafa Mahamid
2006-01-01
This paper presents a model using the finite element method to study the response of a typical commercial corrugated fiberboard due to an induced moisture function at one side of the fiberboard. The model predicts how the moisture diffusion will permeate through the fiberboard's layers (medium and liners), providing information on moisture content at any given point...
Analysis of Spatial Point Patterns in Nuclear Biology
Weston, David J.; Adams, Niall M.; Russell, Richard A.; Stephens, David A.; Freemont, Paul S.
2012-01-01
There is considerable interest in cell biology in determining whether, and to what extent, the spatial arrangement of nuclear objects affects nuclear function. A common approach to address this issue involves analyzing a collection of images produced using some form of fluorescence microscopy. We assume that these images have been successfully pre-processed and a spatial point pattern representation of the objects of interest within the nuclear boundary is available. Typically in these scenarios, the number of objects per nucleus is low, which has consequences for the ability of standard analysis procedures to demonstrate the existence of spatial preference in the pattern. There are broadly two common approaches for seeking structure in these spatial point patterns: either the spatial point pattern for each image is analyzed individually, or a simple normalization is performed and the patterns are aggregated. In this paper we demonstrate, using synthetic spatial point patterns drawn from predefined point processes, how difficult it is to distinguish a pattern from complete spatial randomness using these techniques, and hence how easy it is to miss interesting spatial preferences in the arrangement of nuclear objects. The impact of this problem is also illustrated on data related to the configuration of PML nuclear bodies in mammalian fibroblast cells. PMID:22615822
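Distinguishing a sparse pattern from complete spatial randomness (CSR) is typically done by Monte-Carlo simulation. A hedged sketch, assuming a mean nearest-neighbour distance statistic and uniform points in the unit square as the CSR reference (the paper's own statistics and nuclear-boundary geometry are not reproduced):

```python
import math, random

def mean_nn_distance(points):
    # Average distance from each point to its nearest neighbour
    total = 0.0
    for i, p in enumerate(points):
        total += min(math.dist(p, q) for j, q in enumerate(points) if j != i)
    return total / len(points)

def csr_envelope(n, sims=200, seed=1):
    # 95% Monte-Carlo envelope of the statistic under complete spatial
    # randomness: n uniform points in the unit square per simulation
    rng = random.Random(seed)
    stats = sorted(
        mean_nn_distance([(rng.random(), rng.random()) for _ in range(n)])
        for _ in range(sims))
    return stats[int(0.025 * sims)], stats[int(0.975 * sims)]

# Three points at unit spacing: every nearest-neighbour distance is 1
observed = mean_nn_distance([(0.0, 0.0), (0.0, 1.0), (1.0, 0.0)])
lo, hi = csr_envelope(n=3)
```

An observed statistic falling outside the simulated envelope would suggest departure from CSR; with few objects per nucleus the envelope is wide, which is exactly the difficulty the abstract describes.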
Security analysis of boolean algebra based on Zhang-Wang digital signature scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Jinbin, E-mail: jbzheng518@163.com
2014-10-06
In 2005, Zhang and Wang proposed an improved signature scheme that uses neither a one-way hash function nor message redundancy. In this paper, we show through Boolean-algebra analysis (e.g., bitwise exclusive-or) that this scheme has potential security weaknesses, and we point out, by analyzing the output of an assembly program segment, that the mapping between assembly instructions and machine code is in fact not one-to-one, which may cause security problems unknown to the software.
A Kinematically Consistent Two-Point Correlation Function
NASA Technical Reports Server (NTRS)
Ristorcelli, J. R.
1998-01-01
A simple kinematically consistent expression for the longitudinal two-point correlation function related to both the integral length scale and the Taylor microscale is obtained. On the inner scale, in a region of width inversely proportional to the turbulent Reynolds number, the function has the appropriate curvature at the origin. The expression for the two-point correlation is related to the nonlinear cascade rate, or dissipation epsilon, a quantity that is carried as part of a typical single-point turbulence closure simulation. Constructing an expression for the two-point correlation whose curvature at the origin is set by the Taylor microscale incorporates one of the fundamental quantities characterizing turbulence, epsilon, into a model for the two-point correlation function. The integral of the function also gives, as is required, an outer integral length scale of the turbulence independent of viscosity. The proposed expression is obtained by kinematic arguments; the intention is to produce a practically applicable expression in terms of simple elementary functions that allow an analytical evaluation, by asymptotic methods, of diverse functionals relevant to single-point turbulence closures. Using the devised expression, an example is given of the asymptotic method by which functionals of the two-point correlation can be evaluated.
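As a numerical illustration, using a Gaussian ansatz of our own choosing (an assumption, not the expression proposed in the paper), one can check how the curvature of a model correlation function at the origin encodes the Taylor microscale while its integral gives the outer length scale:

```python
import math

lam = 2.0  # Taylor microscale (illustrative value)

def f(r):
    # Gaussian model correlation -- an assumed form, not the paper's expression
    return math.exp(-(r / lam) ** 2)

# Near the origin f(r) ~ 1 - (r/lam)^2, so the curvature f''(0) = -2/lam^2;
# estimate it with a central second difference
h = 1e-4
curvature = (f(h) - 2.0 * f(0.0) + f(-h)) / h ** 2

# Integral (outer) length scale L = int_0^inf f(r) dr = lam*sqrt(pi)/2,
# approximated here by the trapezoid rule out to r = 20
dr = 1e-3
L = dr * (0.5 * f(0.0) + sum(f(k * dr) for k in range(1, 20001)))
```

For the kinematically consistent expression of the paper, the same two functionals (curvature at the origin and integral) would instead be tied to epsilon and the integral scale of the closure.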
Analysis of truss, beam, frame, and membrane components. [composite structures
NASA Technical Reports Server (NTRS)
Knoell, A. C.; Robinson, E. Y.
1975-01-01
Truss components are considered, taking into account composite truss structures, truss analysis, column members, and truss joints. Beam components are discussed, giving attention to composite beams, laminated beams, and sandwich beams. Composite frame components and composite membrane components are examined. A description is given of examples of flat membrane components and examples of curved membrane elements. It is pointed out that composite structural design and analysis is a highly interactive, iterative procedure which does not lend itself readily to characterization by design or analysis function only.
Cost-effectiveness of exercise and diet in overweight and obese adults with knee osteoarthritis.
Sevick, Mary A; Miller, Gary D; Loeser, Richard F; Williamson, Jeff D; Messier, Stephen P
2009-06-01
The purpose of this study was to compare the cost-effectiveness of dietary and exercise interventions in overweight or obese elderly patients with knee osteoarthritis (OA) enrolled in the Arthritis, Diet, and Physical Activity Promotion Trial (ADAPT). ADAPT was a single-blinded, controlled trial of 316 adults with knee OA, randomized to one of four groups: Healthy Lifestyle Control group, Diet group, Exercise group, or Exercise and Diet group. A cost analysis was performed from a payer perspective, incorporating those costs and benefits that would be realized by a managed care organization interested in maintaining the health and satisfaction of its enrollees while reducing unnecessary utilization of health care services. The Diet intervention was most cost-effective for reducing weight, at $35 for each percentage point reduction in baseline body weight. The Exercise intervention was most cost-effective for improving mobility, costing $10 for each percentage point improvement in 6-min walking distance and $9 for each percentage point improvement in the timed stair climbing task. The Exercise and Diet intervention was most cost-effective for improving self-reported function and symptoms of arthritis, costing $24 for each percentage point improvement in subjective function, $20 for each percentage point improvement in self-reported pain, and $56 for each percentage point improvement in self-reported stiffness. The Exercise and Diet intervention consistently yielded the greatest improvements in weight, physical performance, and symptoms of knee OA. However, it was also the most expensive, and was the most cost-effective approach only for the subjective outcomes of knee OA (self-reported function, pain, and stiffness). Perceived function and symptoms of knee OA are likely to be stronger drivers of downstream health service utilization than weight or objective performance measures, and may be the most cost-effective targets in the long term.
NASA Astrophysics Data System (ADS)
Abu-Assab, Samah; Baier, Daniel
In this paper, we compare two product design approaches, quality function deployment (QFD) and conjoint analysis (CA), using the example of mobile phones for elderly people as a target group. We then compare our results with those from former similar comparisons, e.g., Pullman et al. (J Prod Innov Manage 19(5):354-364, 2002) and Katz (J Innov Manage 21:61-63, 2004). In this work, the same procedures and conditions are taken into consideration as those used by Pullman et al. in their paper. They viewed the relation between the two methods, QFD and CA, as a complementary one in which both should be implemented simultaneously, since each provides feedback to the other. They concluded that CA is more efficient in reflecting end-users' present preferences for product attributes, whereas QFD is definitely better at satisfying end-users' needs from the developers' point of view. Katz, in his response from a practitioner's point of view, agreed with Pullman et al. However, he concluded that the two methods are better used sequentially and that QFD should precede conjoint analysis. We test these results in a market for elderly people.
Features of control systems analysis with discrete control devices using mathematical packages
NASA Astrophysics Data System (ADS)
Yakovleva, E. M.; Faerman, V. A.
2017-02-01
The article presents the basic provisions of the theory of automatic pulse control systems, as well as methods for analyzing such systems using mathematical software widespread in the academic environment. The pulse systems under study are treated as interconnections of continuous elements, including sensors, amplifiers, and controlled objects, with discrete parts. Such systems are described by the mathematical apparatus of difference equations and discrete transfer functions. To obtain the transfer function of the open-loop system, which is important from the point of view of control system analysis, the mathematical packages Mathcad and Matlab are used. Although the results obtained are identical, the way they are reached differs between the two tools from the user's point of view. In particular, Matlab uses a structural model of the control system, while Mathcad allows only the execution of a chain of operator transforms. These distinctions make it possible to consider the transformation of signals during the interaction of the continuous and discrete parts of the control system from different sides, which can be used in an educational process to help students better assimilate a course on control system theory.
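Underlying both the Mathcad and Matlab workflows is the same difference-equation machinery. A minimal plain-Python sketch (a hypothetical example system, not taken from the article) of simulating a discrete transfer function via its difference equation:

```python
def dfilter(b, a, x):
    # Direct-form simulation of the difference equation
    #   a[0]*y[n] = sum_k b[k]*x[n-k] - sum_{k>=1} a[k]*y[n-k]
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc / a[0])
    return y

# Hypothetical pulse element H(z) = 1 / (1 - 0.5*z^-1):
# its unit-step response settles at 1 / (1 - 0.5) = 2
response = dfilter([1.0], [1.0, -0.5], [1.0] * 40)
```

The coefficient convention (numerator `b`, denominator `a` in powers of z^-1) mirrors the one used by common filtering routines such as Matlab's `filter`.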
Mishra, Pankaj Kumar; Thekkudan, Joyce; Sahajanandan, Raj; Gravenor, Mike; Lakshmanan, Suresh; Fayaz, Khazi Mohammed; Luckraz, Heyman
2015-01-01
OBJECTIVE: Platelet function assessment after cardiac surgery can predict postoperative blood loss, guide transfusion requirements and discriminate the need for surgical re-exploration. We conducted this study to assess the predictive value of point-of-care platelet function testing using the Multiplate® device. Patients undergoing isolated coronary artery bypass grafting were prospectively recruited (n = 84). Group A (n = 42) patients were on anti-platelet therapy until surgery; patients in Group B (n = 42) stopped anti-platelet treatment at least 5 days preoperatively. Multiplate® and thromboelastography (TEG) tests were performed in the perioperative period. The primary end-point was excessive bleeding (>2.5 ml/kg/h) within the first 3 h postoperatively. Secondary end-points included transfusion requirements, re-exploration rates, and intensive care unit and in-hospital stays. Patients in Group A had more excessive bleeding (59% vs. 33%, P = 0.02), higher re-exploration rates (14% vs. 0%, P < 0.01) and higher rates of blood (41% vs. 14%, P < 0.01) and platelet (14% vs. 2%, P = 0.05) transfusions. On multivariate analysis, preoperative platelet function testing was the most significant predictor of excessive bleeding (odds ratio [OR]: 2.3, P = 0.08), need for blood (OR: 5.5, P < 0.01) and platelet transfusion (OR: 15.1, P < 0.01). The postoperative "ASPI test" best predicted the need for transfusion (sensitivity: 0.86) and excessive blood loss (sensitivity: 0.81). TEG results did not correlate well with any of these outcome measures. Peri-operative platelet function assessment with Multiplate® was the strongest predictor of bleeding and transfusion requirements in patients on anti-platelet therapy until the time of surgery.
Film characteristics pertinent to coherent optical data processing systems.
Thomas, C E
1972-08-01
Photographic film is studied quantitatively as the input mechanism for coherent optical data recording and processing systems. The two important film characteristics are the amplitude transmission vs exposure (T(A) - E) curve and the film noise power spectral density. Both functions are measured as a function of the type of film, the type of developer, developer time and temperature, and the exposing and readout light wavelengths. A detailed analysis of a coherent optical spatial frequency analyzer reveals that the optimum dc bias point for 649-F film is an amplitude transmission of about 70%. This operating point yields minimum harmonic and intermodulation distortion, whereas the 50% amplitude transmission bias point recommended by holographers yields maximum diffraction efficiency. It is also shown that the effective ac gain or contrast of the film is nearly independent of the development conditions for a given film. Finally, the linear dynamic range of one particular coherent optical spatial frequency analyzer is shown to be about 40-50 dB.
Preliminary GAOFEN-3 Insar dem Accuracy Analysis
NASA Astrophysics Data System (ADS)
Chen, Q.; Li, T.; Tang, X.; Gao, X.; Zhang, X.
2018-04-01
The GF-3 satellite, the first C-band, full-polarization SAR satellite of China with a spatial resolution of 1 m, was successfully launched in August 2016. In this paper, we analyze the error sources of the GF-3 satellite and provide an interferometric calibration model based on the range function, the Doppler shift equation and the interferometric phase function, with the interferometric parameters calibrated using the three-dimensional coordinates of ground control points. We then conduct experiments on two pairs of images in fine stripmap I mode, covering Songshan in Henan Province and Tangshan in Hebei Province, respectively. The extracted DEMs are assessed against SRTM DEM, ICESat-GLAS points, and a ground control point database obtained from the ZY-3 satellite to validate the elevation accuracy. The experimental results show that the accuracy of the DEM extracted from GF-3 satellite SAR data can meet the requirements of topographic mapping in mountain and alpine regions at the scale of 1 : 50000 in China. Moreover, the results demonstrate that the GF-3 satellite has potential for interferometry.
Iwata, Akira; Fuchioka, Satoshi; Hiraoka, Koichi; Masuhara, Mitsuhiko; Kami, Katsuya
2010-05-01
Although numerous studies have aimed to elucidate the mechanisms used to repair the structure and function of injured skeletal muscles, it remains unclear how and when movement recovers following damage. We performed a temporal analysis to characterize the changes in movement, muscle function, and muscle structure after muscle injury induced by the drop-mass technique. At each time-point, movement recovery was determined by ankle kinematic analysis of locomotion, and functional recovery was represented by isometric force. As a histological analysis, the cross-sectional area of myotubes was measured to examine structural regeneration. The dorsiflexion angle of the ankle, as assessed by kinematic analysis of locomotion, increased after injury and then returned to control levels by day 14 post-injury. The isometric force returned to normal levels by day 21 post-injury. However, the size of the myotubes did not reach normal levels, even at day 21 post-injury. These results indicate that recovery of locomotion occurs prior to recovery of isometric force and that functional recovery occurs earlier than structural regeneration. Thus, it is suggested that recovery of the movement and function of injured skeletal muscles might be insufficient as markers for estimating the degree of neuromuscular system reconstitution.
NASA Technical Reports Server (NTRS)
Mcclelland, J.; Silk, J.
1978-01-01
Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.
A novel grid multiwing chaotic system with only non-hyperbolic equilibria
NASA Astrophysics Data System (ADS)
Zhang, Sen; Zeng, Yicheng; Li, Zhijun; Wang, Mengjiao; Xiong, Le
2018-05-01
The structure of the chaotic attractor of a system is mainly determined by the nonlinear functions in the system equations. By using a new saw-tooth wave function and a new stair function, a novel complex grid multiwing chaotic system, belonging to the class of non-Shil'nikov chaotic systems with non-hyperbolic equilibrium points, is proposed in this paper. It is particularly interesting that the complex grid multiwing attractors are generated by increasing the number of non-hyperbolic equilibrium points, in contrast to the traditional methods of realising multiwing attractors by adding index-2 saddle-focus equilibrium points in double-wing chaotic systems. The basic dynamical properties of the new system, such as dissipativity, phase portraits, the stability of the equilibria, the time-domain waveform, power spectrum, bifurcation diagram, and Lyapunov exponents, are investigated by theoretical analysis and numerical simulations. Furthermore, the corresponding electronic circuit is designed and simulated on the Multisim platform. The Multisim simulation results and the hardware experimental results are in good agreement with the numerical simulations of the same system on the Matlab platform, which verifies the feasibility of this new grid multiwing chaotic system.
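How stair-type nonlinearities multiply equilibria can be illustrated on a toy scalar example (the stair form below is an assumption for illustration, not the paper's function):

```python
def stair(x, thresholds=(-2, -1, 0, 1, 2), step=0.5):
    # Illustrative stair function (an assumption): jumps by 2*step
    # at each threshold, producing a staircase of plateaus
    return sum(step if x > t else -step for t in thresholds)

def sign_changes(f, lo=-4.0, hi=4.0, n=1999):
    # Count strict sign changes of f on a grid; each change marks an
    # equilibrium (or jump crossing) of the scalar system x' = f(x)
    xs = [lo + (hi - lo) * k / n for k in range(n + 1)]
    vals = [f(x) for x in xs]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

# Equilibria of x' = stair(x) - x: 6 plateau roots plus 5 jump crossings
count = sign_changes(lambda x: stair(x) - x)
```

Adding more thresholds adds more equilibria in the same way; in the paper's three-dimensional system, growing the set of (non-hyperbolic) equilibria is what grows the grid of attractor wings.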
Modeling spatio-temporal wildfire ignition point patterns
Amanda S. Hering; Cynthia L. Bell; Marc G. Genton
2009-01-01
We analyze and model the structure of spatio-temporal wildfire ignitions in the St. Johns River Water Management District in northeastern Florida. Previous studies, based on the K-function and an assumption of homogeneity, have shown that wildfire events occur in clusters. We revisit this analysis based on an inhomogeneous K-...
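Cluster analyses of ignition points rest on estimators of Ripley's K-function. A minimal sketch of the homogeneous K estimator without edge correction (the study itself uses an inhomogeneous K, which additionally weights pairs by a fitted intensity):

```python
import math

def ripley_k(points, r, area=1.0):
    # Naive homogeneous K estimator (no edge correction):
    # K(r) = area / n^2 * number of ordered pairs within distance r
    n = len(points)
    close = sum(1 for i in range(n) for j in range(n)
                if i != j and math.dist(points[i], points[j]) <= r)
    return area * close / n ** 2

# Unit-square corners: at r = 1.1 the four sides (8 ordered pairs)
# are counted, but the two diagonals (length sqrt(2)) are not
corners = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
k = ripley_k(corners, 1.1)
```

Under complete spatial randomness K(r) = pi*r^2; values above that curve indicate clustering, which is the comparison the wildfire analysis revisits under inhomogeneity.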
Strength Analysis of Glass-Fiber-Reinforced Plastic during Buckling,
An algorithm is developed for calculating and analyzing the stress tensor from the experimental deflection function during the buckling of glass-fiber-reinforced plastic shells loaded with a hydrostatic load. Malmeyster's theory of strength is used to qualitatively establish the possible points of shell failure. (Author-PL)
Conformational analysis of cellobiose by electronic structure theories
USDA-ARS?s Scientific Manuscript database
Adiabatic phi/psi maps for cellobiose were prepared with B3LYP density functional theory. A mixed basis set was used for minimization, followed with 6-31+G(d) single-point calculations, with and without SMD continuum solvation. Different arrangements of the exocyclic groups (3 starting geometries) we...
Terluin, Berend; Smits, Niels; Miedema, Baukje
2014-12-01
Translations of questionnaires need to be carefully validated to ensure that the translation measures the same construct(s) as the original questionnaire. The four-dimensional symptom questionnaire (4DSQ) is a Dutch self-report questionnaire measuring distress, depression, anxiety and somatization. To evaluate the equivalence of the English version of the 4DSQ, 4DSQ data of English and Dutch speaking general practice attendees were analysed and compared. The English speaking group consisted of 205 general practice attendees in Canada, aged 18-64 years, whereas the Dutch group consisted of 302 general practice attendees in the Netherlands. Differential item functioning (DIF) analysis was conducted using the Mantel-Haenszel method and ordinal logistic regression. Differential test functioning (DTF; i.e., the scale impact of DIF) was evaluated using linear regression analysis. DIF was detected in 2/16 distress items, 2/6 depression items, 2/12 anxiety items, and 1/16 somatization items. With respect to mean scale scores, the impact of DIF on the scale level was negligible for all scales. On the anxiety scale, DIF caused the English speaking patients with moderate to severe anxiety to score about one point lower than Dutch patients with the same anxiety level. The English 4DSQ measures the same constructs as the original Dutch 4DSQ. The distress, depression and somatization scales can employ the same cut-off points as the corresponding Dutch scales. However, cut-off points of the English 4DSQ anxiety scale should be lowered by one point to retain the same meaning as the Dutch anxiety cut-off points.
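The Mantel-Haenszel method pools one 2x2 table per score level into a common odds ratio; an item is flagged for DIF when this pooled odds ratio departs from 1. A minimal sketch with illustrative numbers (not 4DSQ data):

```python
def mantel_haenszel_or(strata):
    # Common odds ratio across strata; each stratum is a 2x2 table
    # (a, b, c, d): a, b = reference group endorse/not-endorse and
    # c, d = focal group endorse/not-endorse at one total-score level
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Two score-level strata with the same hypothetical table; each has
# odds ratio (20*20)/(10*10) = 4, and so does the pooled estimate
or_mh = mantel_haenszel_or([(20, 10, 10, 20), (20, 10, 10, 20)])
```

A pooled odds ratio of 4 at matched score levels, as in this toy example, would indicate substantial DIF for the item; values near 1 indicate none.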
Striatal dopamine in Parkinson disease: A meta-analysis of imaging studies.
Kaasinen, Valtteri; Vahlberg, Tero
2017-12-01
A meta-analysis of 142 positron emission tomography and single photon emission computed tomography studies that have investigated striatal presynaptic dopamine function in Parkinson disease (PD) was performed. Subregional estimates of striatal dopamine metabolism are presented. The aromatic L-amino-acid decarboxylase (AADC) defect appears to be consistently smaller than the dopamine transporter and vesicular monoamine transporter 2 defects, suggesting upregulation of AADC function in PD. The correlation between disease severity and dopamine loss appears linear, but the majority of longitudinal studies point to a negative exponential progression pattern of dopamine loss in PD. Ann Neurol 2017;82:873-882. © 2017 American Neurological Association.
Statistical representation of a spray as a point process
NASA Astrophysics Data System (ADS)
Subramaniam, S.
2000-10-01
The statistical representation of a spray as a finite point process is investigated. One objective is to develop a better understanding of how single-point statistical information contained in descriptions such as the droplet distribution function (ddf), relates to the probability density functions (pdfs) associated with the droplets themselves. Single-point statistical information contained in the droplet distribution function (ddf) is shown to be related to a sequence of single surrogate-droplet pdfs, which are in general different from the physical single-droplet pdfs. It is shown that the ddf contains less information than the fundamental single-point statistical representation of the spray, which is also described. The analysis shows which events associated with the ensemble of spray droplets can be characterized by the ddf, and which cannot. The implications of these findings for the ddf approach to spray modeling are discussed. The results of this study also have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the droplet number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets. Implications of these findings for large eddy simulations of multiphase flows are also discussed.
Gayarre, Javier; Martín-Gimeno, Paloma; Osorio, Ana; Paumard, Beatriz; Barroso, Alicia; Fernández, Victoria; de la Hoya, Miguel; Rojo, Alejandro; Caldés, Trinidad; Palacios, José; Urioste, Miguel; Benítez, Javier; García, María J
2017-09-26
Despite a high prevalence of deleterious missense variants, most studies of the RAD51C ovarian cancer susceptibility gene only provide in silico pathogenicity predictions of missense changes. We identified a novel deleterious RAD51C missense variant (p.Arg312Trp) in a high-risk family, and propose criteria to prioritise RAD51C missense changes qualifying for functional analysis. To evaluate the pathogenicity of the p.Arg312Trp variant, we used sequence homology, loss of heterozygosity (LOH) and segregation analysis, and a comprehensive functional characterisation. To define the functional-analysis prioritisation criteria, we used the outputs of nine pathogenicity prediction algorithms for the known functionally confirmed deleterious and benign RAD51C missense changes. In the functional assays, the p.Arg312Trp variant failed to correct mitomycin and olaparib hypersensitivity and to complement abnormal RAD51C foci formation, which, together with the LOH and segregation data, demonstrated deleteriousness. The prioritisation criteria were based on the number of predictors providing a deleterious output, with a minimum of 5 required to qualify for testing and a PredictProtein score greater than 33 to assign high-priority indication. Our study points to a non-negligible number of RAD51C missense variants likely to impair protein function, provides a guideline to prioritise them and encourage their selection for functional analysis, and anticipates that reference laboratories should have the resources available to conduct such assays.
Varzakas, Theodoros H; Arvanitoyannis, Ioannis S
2007-01-01
The Failure Mode and Effect Analysis (FMEA) model has been applied to the risk assessment of corn curl manufacturing. A tentative approach to FMEA application in the snacks industry was attempted in an effort to exclude the presence of GMOs in the final product. This is of crucial importance from both the ethical and the legislative (Regulations EC 1829/2003; EC 1830/2003; Directive EC 18/2001) points of view. Preliminary Hazard Analysis and Fault Tree Analysis were used to analyze and predict the occurring failure modes in a food chain system (a corn curl processing plant), based on the functions, characteristics, and/or interactions of the ingredients or processes upon which the system depends. Critical Control Points have been identified and implemented in the cause-and-effect diagram (also known as the Ishikawa, tree, or fishbone diagram). Finally, Pareto diagrams were employed towards the optimization of the GMO detection potential of FMEA.
Quantum phase space with a basis of Wannier functions
NASA Astrophysics Data System (ADS)
Fang, Yuan; Wu, Fan; Wu, Biao
2018-02-01
A quantum phase space with a Wannier basis is constructed: (i) classical phase space is divided into Planck cells; (ii) a complete set of Wannier functions is constructed by combining Kohn's method and the Löwdin method such that each Wannier function is localized at a Planck cell. With these Wannier functions one can map a wave function unitarily onto phase space. Various examples are used to illustrate our method and compare it to the Wigner function. The advantage of our method is that it can smooth out the oscillations in wave functions without losing any information and is potentially a better tool in studying quantum-classical correspondence. In addition, we point out that our method can be used for time-frequency analysis of signals.
Arthroscopic rotator cuff repair in the weight-bearing shoulder.
Kerr, Jacek; Borbas, Paul; Meyer, Dominik C; Gerber, Christian; Buitrago Téllez, Carlos; Wieser, Karl
2015-12-01
In wheelchair-dependent individuals, pain often develops because of rotator cuff tendon failure and/or osteoarthritis of the glenohumeral joint. The purposes of this study were to investigate (1) specific rotator cuff tear patterns, (2) structural healing, and (3) clinical outcomes after arthroscopic rotator cuff repair in a cohort of wheelchair-dependent patients. Forty-six shoulders with a mean follow-up of 46 months (range, 24-82 months; SD, 13 months) from a consecutive series of 61 shoulders in 56 patients (46 men and 10 women) undergoing arthroscopic rotator cuff repair were available for analysis. Clinical outcome analysis was performed using the Constant-Murley score, the Subjective Shoulder Value, and the American Shoulder and Elbow Surgeons score. The integrity of the repair was analyzed by ultrasound. Of the shoulders, 87% had supraspinatus involvement, 70% had subscapularis involvement, and 57% had an anterosuperior lesion involving both the supraspinatus and subscapularis. Despite an overall structural failure rate of 33%, the patients showed improvements in the Constant-Murley score from 50 points (range, 22-86 points; SD, 16 points) preoperatively to 80 points (range, 40-98 points; SD, 12 points) postoperatively and in the American Shoulder and Elbow Surgeons score from 56 points (range, 20-92 points; SD, 20 points) preoperatively to 92 points (range, 53-100 points; SD, 10 points) postoperatively, with a mean postoperative Subjective Shoulder Value of 84% (range, 25%-100%; SD, 17%). Failure of the rotator cuff in weight-bearing shoulders occurs primarily anterosuperiorly. Arthroscopic rotator cuff repair leads to a structural failure rate of 33% but satisfactory functional results with high patient satisfaction at midterm follow-up. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Unsupervised Detection of Planetary Craters by a Marked Point Process
NASA Technical Reports Server (NTRS)
Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.
2011-01-01
With the launch of several planetary missions in the last decade, a large amount of planetary imagery is being acquired. Because of the huge volume of acquired data, automatic and robust processing techniques are needed for data analysis. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising current approach to stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications; one such application area is image registration by matching the extracted features.
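The reversible jump machinery over ellipse configurations is involved, but the simulated annealing core of the energy minimisation can be sketched on a toy one-dimensional energy (an illustration under assumed parameters, not the authors' implementation):

```python
import math, random

def simulated_annealing(energy, x0, steps=5000, t0=1.0, cooling=0.999, seed=0):
    # Metropolis acceptance with a geometric cooling schedule;
    # tracks the best configuration visited
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best, best_e = x, e
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)   # random perturbation move
        ce = energy(cand)
        if ce < e or rng.random() < math.exp(-(ce - e) / t):
            x, e = cand, ce
            if e < best_e:
                best, best_e = x, e
        t *= cooling
    return best

# Toy quadratic energy with its minimum at x = 3 (illustrative only)
best = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=10.0)
```

In the crater detector, the state is a whole ellipse configuration and the moves include births and deaths of ellipses (the reversible jump part), but the acceptance/cooling logic is of this form.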
NASA Astrophysics Data System (ADS)
Barbasiewicz, Adrianna; Widerski, Tadeusz; Daliga, Karol
2018-01-01
This article was created as a result of research conducted within a master's thesis. The purpose of the measurements was to analyze the accuracy of point positioning by computer programs. The selected software was specialized software dedicated to photogrammetric work; for comparative purposes, tools with similar functionality were also used. The resolution of the photographs on which the key points were searched was selected as the basic parameter affecting the results. The locations of the determined points were computed following the photogrammetric resection rule. To automate the measurement, measurement session planning was omitted. The coordinates of points collected by tacheometric measurement were used as the reference system. The resulting deviations and linear displacements are on the order of millimeters. The visual aspects of the point clouds were also briefly analyzed.
Symptoms, visual function, and mucin expression of eyes with tear film instability.
Shimazaki-Den, Seika; Dogru, Murat; Higa, Kazunari; Shimazaki, Jun
2013-09-01
We examined symptoms, tear stability, visual function, and conjunctival cytology in eyes with an unstable tear film (UTF), expressed as a short tear film breakup time without epithelial damage or low tear secretion, and compared the results with those from eyes with aqueous deficiency (AD) associated with epithelial damage, and with healthy eyes. We divided the patients with ocular discomfort into 2 groups according to the breakup time, Schirmer value, and epithelial staining score: UTF group (≤5 seconds, >5 mm, and <3 points; 21 eyes of 21 patients) and AD group (≤5 seconds, ≤5 mm, and ≥3 points; 21 eyes of 21 patients). We examined all patients and 17 healthy subjects for symptoms, tear functions, tear film stability by tear film lipid layer interferometry and a tear film analysis system, and functional visual acuity. Conjunctival impression cytology was performed to investigate changes in goblet cell density, squamous metaplasia, and messenger RNA expression of MUC5AC and MUC16. The symptom scores, tear film analysis system index, and functional visual acuity were significantly worse in the UTF and AD groups compared with those in the control group (P < 0.05). The messenger RNA expression levels of MUC5AC and MUC16 were significantly lower in UTF and AD eyes compared with those in the control eyes (P < 0.0001). A UTF itself can cause dry eye symptoms and visual disturbance comparable with those of AD dry eyes.
Acupuncture for peripheral joint osteoarthritis
Manheimer, Eric; Cheng, Ke; Linde, Klaus; Lao, Lixing; Yoo, Junghee; Wieland, Susan; van der Windt, Daniëlle AWM; Berman, Brian M; Bouter, Lex M
2011-01-01
Background Peripheral joint osteoarthritis is a major cause of pain and functional limitation. Few treatments are safe and effective. Objectives To assess the effects of acupuncture for treating peripheral joint osteoarthritis. Search strategy We searched the Cochrane Central Register of Controlled Trials (The Cochrane Library 2008, Issue 1), MEDLINE, and EMBASE (both through December 2007), and scanned reference lists of articles. Selection criteria Randomized controlled trials (RCTs) comparing needle acupuncture with a sham, another active treatment, or a waiting list control group in people with osteoarthritis of the knee, hip, or hand. Data collection and analysis Two authors independently assessed trial quality and extracted data. We contacted study authors for additional information. We calculated standardized mean differences using the differences in improvements between groups. Main results Sixteen trials involving 3498 people were included. Twelve of the RCTs included only people with OA of the knee, 3 only OA of the hip, and 1 a mix of people with OA of the hip and/or knee. In comparison with a sham control, acupuncture showed statistically significant, short-term improvements in osteoarthritis pain (standardized mean difference -0.28, 95% confidence interval -0.45 to -0.11; 0.9 point greater improvement than sham on 20 point scale; absolute percent change 4.59%; relative percent change 10.32%; 9 trials; 1835 participants) and function (-0.28, -0.46 to -0.09; 2.7 point greater improvement on 68 point scale; absolute percent change 3.97%; relative percent change 8.63%); however, these pooled short-term benefits did not meet our predefined thresholds for clinical relevance (i.e. 1.3 points for pain; 3.57 points for function) and there was substantial statistical heterogeneity. 
Additionally, restriction to sham-controlled trials using shams judged most likely to adequately blind participants to treatment assignment (which were also the same shams judged most likely to have physiological activity) reduced heterogeneity and resulted in pooled short-term benefits of acupuncture that were smaller and non-significant. In comparison with sham acupuncture at the six-month follow-up, acupuncture showed borderline statistically significant, clinically irrelevant improvements in osteoarthritis pain (-0.10, -0.21 to 0.01; 0.4 point greater improvement than sham on 20 point scale; absolute percent change 1.81%; relative percent change 4.06%; 4 trials; 1399 participants) and function (-0.11, -0.22 to 0.00; 1.2 point greater improvement than sham on 68 point scale; absolute percent change 1.79%; relative percent change 3.89%). In a secondary analysis versus a waiting list control, acupuncture was associated with statistically significant, clinically relevant short-term improvements in osteoarthritis pain (-0.96, -1.19 to -0.72; 14.5 point greater improvement than control on 100 point scale; absolute percent change 14.5%; relative percent change 29.14%; 4 trials; 884 participants) and function (-0.89, -1.18 to -0.60; 13.0 point greater improvement than control on 100 point scale; absolute percent change 13.0%; relative percent change 25.21%). In the head-on comparisons of acupuncture with the ‘supervised osteoarthritis education’ and the ‘physician consultation’ control groups, acupuncture was associated with clinically relevant short- and long-term improvements in pain and function. In the head-on comparisons of acupuncture with ‘home exercises/advice leaflet’ and ‘supervised exercise’, acupuncture was associated with similar treatment effects as the controls. Acupuncture as an adjuvant to an exercise-based physiotherapy program did not result in any greater improvements than the exercise program alone.
Information on safety was reported in only 8 trials and even in these trials there was limited reporting and heterogeneous methods. Authors' conclusions Sham-controlled trials show statistically significant benefits; however, these benefits are small, do not meet our pre-defined thresholds for clinical relevance, and are probably due at least partially to placebo effects from incomplete blinding. Waiting list-controlled trials of acupuncture for peripheral joint osteoarthritis suggest statistically significant and clinically relevant benefits, much of which may be due to expectation or placebo effects. PMID:20091527
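The pooled effect sizes quoted above are standardized mean differences. As a sketch of the underlying arithmetic (the group means, standard deviations, and sample sizes below are hypothetical, chosen only to land near the reported -0.28):

```python
import math

def standardized_mean_difference(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Cohen's-d-style SMD: difference in mean improvements divided by the
    pooled standard deviation (negative values favour group A here)."""
    pooled_var = ((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# Hypothetical mean improvements on a 20-point pain scale (acupuncture vs sham).
smd = standardized_mean_difference(-4.1, 3.2, 100, -3.2, 3.2, 100)
```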
Liu, Lu-Ning; Su, Hai-Nan; Yan, Shi-Gan; Shao, Si-Mi; Xie, Bin-Bin; Chen, Xiu-Lan; Zhang, Xi-Ying; Zhou, Bai-Cheng; Zhang, Yu-Zhong
2009-07-01
Crystal structures of phycobiliproteins have provided valuable information regarding the conformations and amino acid organization of peptides and chromophores, enabling us to investigate their structural and functional relationships with respect to environmental variations. In this work, we explored the pH-induced conformational and functional dynamics of R-phycoerythrin (R-PE) by means of absorption, fluorescence, and circular dichroism spectra, together with analysis of its crystal structure. R-PE shows stronger functional stability than structural stability over the pH range 3.5-10. Beyond this range, pronounced functional and structural changes occur. Crystal structure analysis shows that the tertiary structure of R-PE is fixed by several key anchoring points of the protein. With this specific association, the fundamental structure of R-PE is stabilized to present physiological spectroscopic properties, while local variations in protein peptides are also allowed in response to environmental disturbances. The functional stability and relative structural sensitivity of R-PE allow environmental adaptation.
Correlation Function Analysis of Fiber Networks: Implications for Thermal Conductivity
NASA Technical Reports Server (NTRS)
Martinez-Garcia, Jorge; Braginsky, Leonid; Shklover, Valery; Lawson, John W.
2011-01-01
The heat transport in highly porous fiber structures is investigated. The fibers are assumed to be thin but long, so that the number of inter-fiber connections along each fiber is large. We show that the effective conductivity of such structures can be found from the correlation length of the two-point correlation function of the local conductivities. Estimation of the parameters determining the conductivity from 2D images of the structures is analyzed.
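The central statistic, the two-point correlation function of local conductivities and its correlation length, can be estimated directly from image data. A 1D sketch on a synthetic conductivity trace (the 1/e definition of the correlation length here is an illustrative assumption, not the paper's):

```python
import math

def two_point_correlation(field, max_lag):
    """Normalised two-point correlation C(r) of a 1D conductivity trace,
    C(r) = <(f(x) - <f>)(f(x + r) - <f>)> / var(f), so C(0) = 1."""
    n = len(field)
    mean = sum(field) / n
    var = sum((v - mean) ** 2 for v in field) / n
    corr = []
    for r in range(max_lag + 1):
        cov = sum((field[i] - mean) * (field[i + r] - mean)
                  for i in range(n - r)) / (n - r)
        corr.append(cov / var)
    return corr

def correlation_length(corr):
    """First lag at which the normalised correlation drops below 1/e."""
    for r, c in enumerate(corr):
        if c < 1.0 / math.e:
            return r
    return len(corr)

# Synthetic trace: alternating conducting/insulating blocks of length 5.
field = ([1.0] * 5 + [0.0] * 5) * 40
corr = two_point_correlation(field, 10)
```

For blocks of length 5, the normalised correlation decays roughly linearly with lag, so the 1/e correlation length comes out at a couple of pixels, matching the block scale.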
Functional vs. Traditional Analysis in Biomechanical Gait Data: An Alternative Statistical Approach
Seeley, Matthew K.; Francom, Devin; Reese, C. Shane; Hopkins, J. Ty
2017-01-01
Abstract In human motion studies, discrete points such as peak or average kinematic values are commonly selected to test hypotheses. The purpose of this study was to describe a functional data analysis and describe the advantages of using functional data analyses when compared with a traditional analysis of variance (ANOVA) approach. Nineteen healthy participants (age: 22 ± 2 yrs, body height: 1.7 ± 0.1 m, body mass: 73 ± 16 kg) walked under two different conditions: control and pain+effusion. Pain+effusion was induced by injection of sterile saline into the joint capsule and hypertonic saline into the infrapatellar fat pad. Sagittal-plane ankle, knee, and hip joint kinematics were recorded and compared following injections using 2×2 mixed model ANOVAs and FANOVAs. The results of ANOVAs detected a condition × time interaction for the peak ankle (F1,18 = 8.56, p = 0.01) and hip joint angle (F1,18 = 5.77, p = 0.03), but did not for the knee joint angle (F1,18 = 0.36, p = 0.56). The functional data analysis, however, found several differences at initial contact (ankle and knee joint), in the mid-stance (each joint) and at toe off (ankle). Although a traditional ANOVA is often appropriate for discrete or summary data, in biomechanical applications, the functional data analysis could be a beneficial alternative. When using the functional data analysis approach, a researcher can (1) evaluate the entire data as a function, and (2) detect the location and magnitude of differences within the evaluated function. PMID:29339984
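The practical difference between comparing one summary value per curve and comparing whole curves can be sketched crudely: a pointwise comparison across the gait cycle flags where two conditions diverge. The code below is a toy mean-difference version, not the FANOVA used in the study (which fits basis-function models with proper pointwise bands); the curves and threshold are invented:

```python
def pointwise_differences(cond_a, cond_b, threshold):
    """Mean-difference curve between two sets of equal-length gait curves,
    plus the sample indices where the difference exceeds `threshold`."""
    n_pts = len(cond_a[0])
    mean_a = [sum(c[i] for c in cond_a) / len(cond_a) for i in range(n_pts)]
    mean_b = [sum(c[i] for c in cond_b) / len(cond_b) for i in range(n_pts)]
    diff = [a - b for a, b in zip(mean_a, mean_b)]
    flagged = [i for i, d in enumerate(diff) if abs(d) > threshold]
    return diff, flagged

# Synthetic "knee angle" curves at 5 stance-phase samples, two trials each.
control = [[0.0, 10.0, 20.0, 10.0, 0.0], [0.0, 12.0, 22.0, 12.0, 0.0]]
pain    = [[0.0, 10.0, 26.0, 10.0, 0.0], [0.0, 12.0, 28.0, 12.0, 0.0]]
diff, flagged = pointwise_differences(control, pain, threshold=3.0)
```

A peak-only comparison would reduce each curve to one number; the pointwise view reports both the location (mid-stance, index 2 here) and the magnitude of the divergence.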
The importance of the external potential on group electronegativity.
Leyssens, Tom; Geerlings, Paul; Peeters, Daniel
2005-11-03
The electronegativity of groups placed in a molecular environment is obtained using CCSD calculations of the electron affinity and ionization energy. A point charge model is used as an approximation of the molecular environment. The electronegativity values obtained in the presence of a point charge model are compared to the isolated group property to estimate the importance of the external potential on the group's electronegativity. The validity of the "group in molecule" electronegativities is verified by comparing EEM (electronegativity equalization method) charge transfer values to the explicitly calculated natural population analysis (NPA) ones, as well as by comparing the variation in electronegativity between the isolated functional group and the functional group in the presence of a modeled environment with the variation based on a perturbation expansion of the chemical potential.
Above Saddle-Point Regions of Order in a Sea of Chaos in the Vibrational Dynamics of KCN.
Párraga, H; Arranz, F J; Benito, R M; Borondo, F
2018-04-05
The dynamical characteristics of a region of regular vibrational motion in the sea of chaos above the saddle point corresponding to the linear C-N-K configuration are examined in detail. To explain the origin of this regularity, the associated phase-space structures were characterized using suitably defined Poincaré surfaces of section, identifying the different resonances between the stretching and bending modes as a function of excitation energy. The corresponding topology is elucidated by means of periodic orbit analysis.
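A Poincaré surface of section of the kind used above can be generated for any two-degree-of-freedom Hamiltonian by recording phase-space coordinates at each directed crossing of a chosen plane. The sketch below uses the standard Hénon-Heiles potential as a stand-in for the KCN surface; the potential, section plane, step size, and initial condition are all illustrative:

```python
def henon_heiles_rhs(s):
    """Equations of motion for the Henon-Heiles Hamiltonian
    H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3."""
    x, y, px, py = s
    return [px, py, -x - 2.0 * x * y, -y - x * x + y * y]

def rk4_step(s, h):
    def shift(a, k, c):
        return [ai + c * ki for ai, ki in zip(a, k)]
    k1 = henon_heiles_rhs(s)
    k2 = henon_heiles_rhs(shift(s, k1, h / 2))
    k3 = henon_heiles_rhs(shift(s, k2, h / 2))
    k4 = henon_heiles_rhs(shift(s, k3, h))
    return [si + h / 6 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

def energy(s):
    x, y, px, py = s
    return 0.5 * (px * px + py * py) + 0.5 * (x * x + y * y) + x * x * y - y ** 3 / 3.0

def poincare_section(state, h=0.01, steps=50000):
    """Record (y, py) at each crossing of the plane x = 0 with px > 0."""
    pts = []
    for _ in range(steps):
        nxt = rk4_step(state, h)
        if state[0] < 0.0 <= nxt[0] and nxt[2] > 0.0:
            pts.append((nxt[1], nxt[3]))
        state = nxt
    return pts, state

start = [0.0, 0.1, 0.35, 0.0]  # low-energy, regular regime
pts, final_state = poincare_section(list(start))
```

Regular motion shows up as the collected (y, py) points tracing closed curves on the section, while chaotic motion fills an area; energy conservation along the integration is a useful sanity check.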
Unitary subsector of generalized minimal models
NASA Astrophysics Data System (ADS)
Behan, Connor
2018-05-01
We revisit the line of nonunitary theories that interpolate between the Virasoro minimal models. Numerical bootstrap applications have brought about interest in the four-point function involving the scalar primary of lowest dimension. Using recent progress in harmonic analysis on the conformal group, we prove the conjecture that global conformal blocks in this correlator appear with positive coefficients. We also compute many such coefficients in the simplest mixed correlator system. Finally, we comment on the status of using global conformal blocks to isolate the truly unitary points on this line.
NASA Astrophysics Data System (ADS)
Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang
2018-01-01
Confocal Raman Microscopy (CRM) has matured into one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and achieves high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing requirement for evaluating the imaging performance of the system. The point-spread function (PSF) is an important approach to evaluating the imaging capability of an optical instrument. Among the various methods for measuring the PSF, the point source method has been widely used because it is easy to operate and the measurement results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established, and the effect of point source size on the full width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom is designed using polydimethylsiloxane resin doped with polystyrene microspheres of different sizes. The PSFs of the CRM are measured with the different microsphere sizes, and the results are compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.
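The broadening effect that motivates the paper can be illustrated in 1D: convolving a Gaussian lateral PSF with a finite-size source inflates the measured full width at half maximum relative to the true PSF. The PSF width, source radius, and uniform source profile below are assumptions for illustration only:

```python
import math

def gaussian(x, sigma):
    return math.exp(-x * x / (2.0 * sigma * sigma))

def fwhm(profile, dx):
    """Full width at half maximum of a sampled unimodal profile."""
    peak = max(profile)
    above = [i for i, v in enumerate(profile) if v >= peak / 2.0]
    return (above[-1] - above[0]) * dx

def measured_profile(psf_sigma, source_radius, dx=0.01, half_span=2.0):
    """1D sketch: convolve a Gaussian lateral PSF with a uniform finite source."""
    n = round(half_span / dx)
    xs = [i * dx for i in range(-n, n + 1)]
    src = [1.0 if abs(x) <= source_radius else 0.0 for x in xs]
    return [sum(s * gaussian(x - xi, psf_sigma) for s, xi in zip(src, xs)) * dx
            for x in xs]

dx = 0.01
true_fwhm = fwhm([gaussian(i * dx, 0.2) for i in range(-200, 201)], dx)
broad_fwhm = fwhm(measured_profile(psf_sigma=0.2, source_radius=0.3), dx)
```

With a source radius comparable to the PSF width, the measured FWHM is visibly wider than the true one, which is exactly why small microspheres approximate the true PSF better.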
Ecohydrology and tipping points in semiarid australian rangelands
NASA Astrophysics Data System (ADS)
Saco, P. M.; Azadi, S.; Moreno de las Heras, M.; Willgoose, G. R.
2017-12-01
Semiarid landscapes are often characterised by a spatially heterogeneous vegetation cover forming mosaics of densely vegetated patches within bare soil. This patchy vegetation cover, which is linked to the healthy function of these ecosystems, is sensitive to human disturbances that can lead to degradation. Previous work suggests that vegetation loss below a critical value can lead to a sudden decrease in landscape functionality following threshold behaviour. The decrease in vegetation cover is linked to erosion and substantial water losses through increasing landscape hydrological connectivity. We study these interactions and the possible existence of tipping points in the Mulga Lands bioregion by combining remote sensing observations with results from an eco-geomorphologic model to investigate changes in ecosystem connectivity and the existence of threshold behaviour. More than 30 sites were selected along a precipitation gradient spanning approximately 250 to 500 mm annual rainfall. The analysis of vegetation patterns is derived from high-resolution remote sensing images (IKONOS, QuickBird, Pleiades) and MODIS NDVI, which, combined with local precipitation data, is used to compute rainfall use efficiency to assess ecosystem function. A critical tipping point associated with loss of vegetation cover appears at the sites with lower annual precipitation; this tipping-point behaviour weakens at sites with higher rainfall. We use the model to investigate the relation between structural and functional connectivity and the emergence of threshold behaviour for selected plots along this precipitation gradient. Both observations and modelling results suggest that sites with higher rainfall are more resilient to changes in surface connectivity. The implications for ecosystem resilience and land management are discussed.
ACCEPTANCE OF FUNCTIONAL FOOD AMONG CHILEAN CONSUMERS: APPLE LEATHER.
van Vliet, Maya; Adasme-Berrios, Cristian; Schnettler, Berta
2015-10-01
The aim of this study is to measure acceptance of a specific functional food, apple (fruit) leather, based on organoleptic characteristics, and to identify consumer types and preferences for natural additives which increase the product's functionality and meet current nutritional needs. A sample of 800 consumers provided an evaluation of apple leather in terms of acceptance (liking). A sensorial panel was carried out using a 9-point hedonic scale. Cluster analysis was used to identify different acceptance-based consumer types. In addition, a conjoint analysis was carried out to determine preference for different additives. The cluster analysis resulted in four groups with significant differences in the average likings obtained from the sensory panel. Results indicate that the sweetness of the tested apple leather was evaluated best among all groups and, on average, color was rated as the worst attribute. However, overall likings differ significantly between groups. Results from the conjoint analysis indicate that, in general, consumers prefer natural additives included in the product which enhance functionality. Although there is a "global acceptance" of the product, there are significant differences between groups. The results of the conjoint analysis indicate that, in general, consumers prefer the aggregation of natural additives which increase the product's functionality. Apple leather with natural additives, such as anticariogenics and antioxidants, can be considered a functional substitute for unhealthy snacks and/or sweets. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Adaptive grid methods for RLV environment assessment and nozzle analysis
NASA Technical Reports Server (NTRS)
Thornburg, Hugh J.
1996-01-01
Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to efficiently meet these requirements a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surface, temporally varying geometries, and fluid structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect. Thus excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaption. The most widely used involve grid point redistribution, local grid point enrichment/derefinement or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine solution domain regions which require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. 
These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation, or forcing functions to attract/repel points in an elliptic system, or to trigger local refinement, based upon application of an equidistribution principle. The popularity of solution-adaptive techniques is growing in tandem with unstructured methods. The difficulty of precisely controlling mesh densities and orientations with current unstructured grid generation systems has driven the use of solution-adaptive meshing. Derivatives of density or pressure are widely used for constructing such weight functions and have proven very successful for inviscid flows with shocks. However, less success has been realized for flowfields with viscous layers, vortices, or shocks of disparate strength. It is difficult to maintain the appropriate mesh point spacing in the various regions which require a fine spacing for adequate resolution. Mesh points often migrate from important regions due to refinement of dominant features. An example of this is the well-known tendency of adaptive methods to increase the resolution of shocks in the flowfield around airfoils, but in the incorrect location due to inadequate resolution of the stagnation region. This problem has been the motivation for this research.
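The equidistribution principle invoked above can be sketched in 1D: place new grid points so that each interval carries an equal share of the integrated weight function. The weight below (a uniform background plus a Gaussian bump mimicking a steep-gradient region) is an illustrative assumption:

```python
import bisect
import math

def equidistribute(x, w, n_new):
    """Place n_new points so each interval carries an equal share of the
    integrated weight w (trapezoidal cumulative integral, linear inversion)."""
    cum = [0.0]
    for i in range(1, len(x)):
        cum.append(cum[-1] + 0.5 * (w[i] + w[i - 1]) * (x[i] - x[i - 1]))
    total = cum[-1]
    new_x = []
    for k in range(n_new):
        target = total * k / (n_new - 1)
        j = min(bisect.bisect_left(cum, target), len(cum) - 1)
        if j == 0:
            new_x.append(x[0])
        else:
            # Invert the piecewise-linear cumulative weight within cell j.
            frac = (target - cum[j - 1]) / (cum[j] - cum[j - 1])
            new_x.append(x[j - 1] + frac * (x[j] - x[j - 1]))
    return new_x

# Uniform background weight plus a bump mimicking a steep gradient at x = 0.5.
old = [i / 100.0 for i in range(101)]
weight = [1.0 + 30.0 * math.exp(-200.0 * (xi - 0.5) ** 2) for xi in old]
new = equidistribute(old, weight, 21)
```

The redistributed grid clusters points tightly where the weight peaks, which is the intended behaviour near shocks or boundary layers; a poorly chosen weight produces exactly the point migration the paragraph above warns about.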
Cognitive Function in a Randomized Trial of Evolocumab.
Giugliano, Robert P; Mach, François; Zavitz, Kenton; Kurtz, Christopher; Im, Kyungah; Kanevsky, Estella; Schneider, Jingjing; Wang, Huei; Keech, Anthony; Pedersen, Terje R; Sabatine, Marc S; Sever, Peter S; Robinson, Jennifer G; Honarpour, Narimon; Wasserman, Scott M; Ott, Brian R
2017-08-17
Background Findings from clinical trials of proprotein convertase subtilisin-kexin type 9 (PCSK9) inhibitors have led to concern that these drugs or the low levels of low-density lipoprotein (LDL) cholesterol that result from their use are associated with cognitive deficits. Methods In a subgroup of patients from a randomized, placebo-controlled trial of evolocumab added to statin therapy, we prospectively assessed cognitive function using the Cambridge Neuropsychological Test Automated Battery. The primary end point was the score on the spatial working memory strategy index of executive function (scores range from 4 to 28, with lower scores indicating a more efficient use of strategy and planning). Secondary end points were the scores for working memory (scores range from 0 to 279, with lower scores indicating fewer errors), episodic memory (scores range from 0 to 70, with lower scores indicating fewer errors), and psychomotor speed (scores range from 100 to 5100 msec, with faster times representing better performance). Assessments of cognitive function were performed at baseline, week 24, yearly, and at the end of the trial. The primary analysis was a noninferiority comparison of the mean change from baseline in the score on the spatial working memory strategy index of executive function between the patients who received evolocumab and those who received placebo; the noninferiority margin was set at 20% of the standard deviation of the score in the placebo group. Results A total of 1204 patients were followed for a median of 19 months; the mean (±SD) change from baseline over time in the raw score for the spatial working memory strategy index of executive function (primary end point) was -0.21±2.62 in the evolocumab group and -0.29±2.81 in the placebo group (P<0.001 for noninferiority; P=0.85 for superiority). 
There were no significant between-group differences in the secondary end points of scores for working memory (change in raw score, -0.52 in the evolocumab group and -0.93 in the placebo group), episodic memory (change in raw score, -1.53 and -1.53, respectively), or psychomotor speed (change in raw score, 5.2 msec and 0.9 msec, respectively). In an exploratory analysis, there were no associations between LDL cholesterol levels and cognitive changes. Conclusions In a randomized trial involving patients who received either evolocumab or placebo in addition to statin therapy, no significant between-group difference in cognitive function was observed over a median of 19 months. (Funded by Amgen; EBBINGHAUS ClinicalTrials.gov number, NCT02207634 .).
NASA Technical Reports Server (NTRS)
Mikulas, Martin M., Jr.; Sumpter, Rod
1999-01-01
In a previous paper, a new merit function for determining the strength performance of flawed composite laminates was presented. This previous analysis was restricted to circular hole flaws that were large enough that failure could be predicted using the laminate stress concentration factor. In this paper, the merit function is expanded to include the flaw cases of an arbitrary size circular hole or a center crack. Failure prediction for these cases is determined using the point stress criterion. An example application of the merit function is included for a wide range of graphite/epoxy laminates.
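The point stress criterion referenced above evaluates the ligament stress at a fixed characteristic distance d0 from the hole edge. A sketch using the Whitney-Nuismer closed form for a circular hole in an infinite plate, with the isotropic stress distribution standing in for the laminate-specific one (the radii and d0 values are illustrative):

```python
def point_stress_strength_ratio(radius, d0):
    """Whitney-Nuismer point stress criterion for a circular hole in an
    infinite isotropic plate: notched/unnotched strength ratio, obtained by
    requiring the ligament stress at distance d0 from the hole edge to reach
    the unnotched strength."""
    xi = radius / (radius + d0)
    return 2.0 / (2.0 + xi ** 2 + 3.0 * xi ** 4)

# For a fixed characteristic distance d0, larger holes retain less strength.
small_hole = point_stress_strength_ratio(radius=1.0, d0=1.0)   # xi = 0.5
large_hole = point_stress_strength_ratio(radius=9.0, d0=1.0)   # xi = 0.9
```

As the radius grows with d0 fixed, the ratio approaches 1/3, the reciprocal of the isotropic stress concentration factor, recovering the large-hole limit mentioned in the abstract.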
NASA Technical Reports Server (NTRS)
Mikulas, Martin M., Jr.; Sumpter, Rod
2000-01-01
In a previous paper, a new merit function for determining the strength performance of flawed composite laminates was presented. This previous analysis was restricted to circular hole flaws that were large enough that failure could be predicted using the laminate stress concentration factor. In this paper, the merit function is expanded to include the flaw cases of an arbitrary size circular hole or center crack. Failure prediction for these cases is determined using the point stress criterion. An example application of the merit function is included for a wide range of graphite/epoxy laminates.
NASA Technical Reports Server (NTRS)
Mikulas, Martin M., Jr.; Sumpter, Rod
1997-01-01
In a previous paper, a new merit function for determining the strength performance of flawed composite laminates was presented. This previous analysis was restricted to circular hole flaws that were large enough that failure could be predicted using the laminate stress concentration factor. In this paper, the merit function is expanded to include the flaw cases of an arbitrary size circular hole or a center crack. Failure prediction for these cases is determined using the point stress criterion. An example application of the merit function is included for a wide range of graphite/epoxy laminates.
Analysis of the proton longitudinal structure function from the gluon distribution function
NASA Astrophysics Data System (ADS)
Boroun, G. R.; Rezaei, B.
2012-11-01
We make a critical next-to-leading-order study of the relationship between the longitudinal structure function F_L and the gluon distribution proposed in Cooper-Sarkar et al. (Z. Phys. C 39:281, 1988; Acta Phys. Pol. B 34:2911, 2003), which is frequently used to extract the gluon distribution from the proton longitudinal structure function at small x. The gluon density is obtained by expanding at particular choices of the point of expansion and is compared with the hard-Pomeron behavior of the gluon density. Comparisons with H1 data are made, and predictions for the proposed best approach are also provided.
NASA Astrophysics Data System (ADS)
Sulik-Górecka, Aleksandra
2018-06-01
Modern manufacturing entities often operate in capital groups, and their role is sometimes limited to the function of cost centers. From the legal point of view, however, they are separate entities obliged to apply transfer pricing regulations. Meeting the requirements of the arm's length principle can be very difficult in this situation, given the relationships and conflicts of interest within the capital group. Complexity increases in capital groups operating in different countries, due to differences in tax regulations. The main purpose of the paper is to demonstrate that the need to value the sale of finished goods to a manufacturing entity, which is subject to a different tax jurisdiction, may lead to a problem of compliance with the arm's length principle. In addition, the paper proposes a methodology for comparability analysis that may be used by manufacturing entities to defend the conditions under which transfer prices are set. The paper presents the different functional profiles of manufacturing entities and points out the difficulties that they may encounter when preparing the comparability analysis. It has also been noted that there are differences in transfer pricing regulations in different countries, for example by analyzing Polish and Czech regulations. The lack of uniform benchmarking legislation can cause inconsistencies in the selection of comparable data, resulting in differences in transfer pricing. The paper uses the method of legal regulation review and analysis of the results of published studies concerning transfer pricing and comparability analysis. The paper also adopts a case study analysis.
Point-of-care instrument for monitoring tissue health during skin graft repair
NASA Astrophysics Data System (ADS)
Gurjar, R. S.; Seetamraju, M.; Zhang, J.; Feinberg, S. E.; Wolf, D. E.
2011-06-01
We have developed the necessary theoretical framework and the basic instrumental design parameters to enable mapping of subsurface blood dynamics and tissue oxygenation for patients undergoing skin graft procedures. This analysis forms the basis for developing a simple patch geometry, which can be used to map, by diffuse optical techniques, blood flow velocity and tissue oxygenation as a function of depth in subsurface tissue. Keywords: skin graft, diffuse correlation analysis, oxygen saturation.
Rajput, Dhirajsingh S; Patgiri, Biswajyoti; Shukla, Vinay J
2014-01-01
Standardization of Ayurvedic medicine is the need of the hour to obtain the desired quality of the final product. Shodhana, which literally means purification, is the initial step that makes drugs such as metals, minerals, and poisonous herbs suitable for further processing. Shodhana of metals/minerals helps to expose the maximum surface area of the drug for chemical reactions and also aids the impregnation of organic materials and their properties into the drug. Thermo-gravimetric analysis (TGA) facilitates identifying the presence of organic matter and changes in the melting point of the metal, whereas Fourier transform infra-red spectroscopy (FTIR) assists in identifying the presence of various functional groups. To standardize the process of Naga Shodhana and to study the change in chemical nature of Shodhita Naga in each medium through TGA and FTIR. Samanya and Vishesha Shodhana of Naga were carried out. The time taken for melting of Naga, physico-chemical changes in the media used for Shodhana, and weight changes after Shodhana were recorded. Samples of Naga were collected after Shodhana in each medium for TGA and FTIR analysis. The average loss during Shodhana was 6.26%. The melting point of Ashuddha Naga was 327.46°C, and it was 328.42°C after Shodhana. The percentage purity of Naga (percentage of lead in Naga) decreased after Shodhana from 99.80% to 99.40%. FTIR analysis of Shodhita Naga in each sample showed stretching vibrations, particularly of C-H and C-N bonds, indicating the presence of various organic compounds. According to the TGA and FTIR analysis, the Shodhana process increases the melting point of Naga and initiates new physico-chemical properties, indicated by the detection of a large number of functional groups and the organo-metallic nature of Shodhita Naga.
Rajput, Dhirajsingh S.; Patgiri, Biswajyoti; Shukla, Vinay J.
2014-01-01
Background: Standardization of Ayurvedic medicine is the need of the hour to obtain the desired quality of the final product. Shodhana, which literally means purification, is the initial step that makes drugs such as metals, minerals, and poisonous herbs suitable for further processing. Shodhana of metals/minerals helps to expose the maximum surface area of the drug for chemical reactions and also aids the impregnation of organic materials and their properties into the drug. Thermo-gravimetric analysis (TGA) facilitates identifying the presence of organic matter and changes in the melting point of the metal, whereas Fourier transform infra-red spectroscopy (FTIR) assists in identifying the presence of various functional groups. Aim: To standardize the process of Naga Shodhana and to study the change in chemical nature of Shodhita Naga in each medium through TGA and FTIR. Material and Methods: Samanya and Vishesha Shodhana of Naga were carried out. The time taken for melting of Naga, physico-chemical changes in the media used for Shodhana, and weight changes after Shodhana were recorded. Samples of Naga were collected after Shodhana in each medium for TGA and FTIR analysis. Results: The average loss during Shodhana was 6.26%. The melting point of Ashuddha Naga was 327.46°C, and it was 328.42°C after Shodhana. The percentage purity of Naga (percentage of lead in Naga) decreased after Shodhana from 99.80% to 99.40%. FTIR analysis of Shodhita Naga in each sample showed stretching vibrations, particularly of C-H and C-N bonds, indicating the presence of various organic compounds. Conclusion: According to the TGA and FTIR analysis, the Shodhana process increases the melting point of Naga and initiates new physico-chemical properties, indicated by the detection of a large number of functional groups and the organo-metallic nature of Shodhita Naga. PMID:26664241
Dielectric function, critical points, and Rydberg exciton series of WSe2 monolayer.
Diware, M S; Ganorkar, S P; Park, K; Chegal, W; Cho, H M; Cho, Y J; Kim, Y D; Kim, H
2018-06-13
The complex dielectric function (ε) of a WSe2 monolayer grown by atomic layer deposition is investigated using spectroscopic ellipsometry. Band structure parameters are obtained by standard line-shape analysis of the second energy derivative of the ε spectra. The fundamental band gap is observed at 2.26 eV, corresponding to the transition between the valence band (VB) maximum at the K point and the conduction band (CB) minimum at the Q point in the Brillouin zone (BZ). Two strong so-called A and B excitonic peaks in the ε spectra originate from vertical transitions from the spin-orbit split (0.43 eV) VB to the CB at the K point of the BZ. The binding energies of the A and B excitons are 0.71 and 0.28 eV, respectively. Five well-resolved excited exciton states have been detected within the spectral region between A and B. The energy profile of the Rydberg series shows significant deviation from hydrogenic behavior, discussed in connection with the 2D hydrogen model. The results presented here will improve our understanding of the optical response of 2D materials, help in the design of better optoelectronic applications, and validate theoretical considerations.
Dielectric function, critical points, and Rydberg exciton series of WSe2 monolayer
NASA Astrophysics Data System (ADS)
Diware, M. S.; Ganorkar, S. P.; Park, K.; Chegal, W.; Cho, H. M.; Cho, Y. J.; Kim, Y. D.; Kim, H.
2018-06-01
The complex dielectric function (ε) of a WSe2 monolayer grown by atomic layer deposition is investigated using spectroscopic ellipsometry. Band structure parameters are obtained by standard line-shape analysis of the second energy derivative of the ε spectra. The fundamental band gap is observed at 2.26 eV, corresponding to the transition between the valence band (VB) maximum at the K point and the conduction band (CB) minimum at the Q point in the Brillouin zone (BZ). Two strong so-called A and B excitonic peaks in the ε spectra originate from vertical transitions from the spin–orbit split (0.43 eV) VB to the CB at the K point of the BZ. The binding energies of the A and B excitons are 0.71 and 0.28 eV, respectively. Five well-resolved excited exciton states have been detected within the spectral region between A and B. The energy profile of the Rydberg series shows significant deviation from hydrogenic behavior, discussed in connection with the 2D hydrogen model. The results presented here will improve our understanding of the optical response of 2D materials, help in the design of better optoelectronic applications, and validate theoretical considerations.
Dynamical analysis of a fractional SIR model with birth and death on heterogeneous complex networks
NASA Astrophysics Data System (ADS)
Huo, Jingjing; Zhao, Hongyong
2016-04-01
In this paper, a fractional SIR model with birth and death rates on heterogeneous complex networks is proposed. Firstly, we obtain a threshold value R0 based on the existence of the endemic equilibrium point E∗, which completely determines the dynamics of the model. Secondly, by using Lyapunov functions and Kirchhoff's matrix tree theorem, the global asymptotic stability of the disease-free equilibrium point E0 and the endemic equilibrium point E∗ of the model is investigated. That is, when R0 < 1, the disease-free equilibrium point E0 is globally asymptotically stable and the disease always dies out; when R0 > 1, the disease-free equilibrium point E0 becomes unstable, and there exists a unique endemic equilibrium point E∗, which is globally asymptotically stable, so the disease is uniformly persistent. Finally, the effects of various immunization schemes are studied and compared. Numerical simulations are given to demonstrate the main results.
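The threshold role of R0 described in the abstract can be illustrated numerically. The sketch below is a minimal classical (integer-order, well-mixed) SIR model with birth and death, not the fractional-order network model of the paper; the parameter values are illustrative assumptions, chosen only to place R0 = beta/(gamma + mu) on either side of 1.

```python
# Illustrative sketch (NOT the paper's fractional network model): a
# well-mixed SIR model with birth/death rate mu, integrated by forward
# Euler, showing that the infection dies out for R0 < 1 and persists
# at an endemic level for R0 > 1, where R0 = beta / (gamma + mu).
def simulate_sir(beta, gamma, mu, i0=0.01, dt=0.01, steps=200_000):
    s, i = 1.0 - i0, i0                      # susceptible / infected fractions
    for _ in range(steps):
        ds = mu - beta * s * i - mu * s      # births minus infection and death
        di = beta * s * i - (gamma + mu) * i # infection minus recovery/death
        s += dt * ds
        i += dt * di
    return i

def r0(beta, gamma, mu):
    return beta / (gamma + mu)
```

For beta=0.2, gamma=0.3, mu=0.1 (R0 = 0.5) the infected fraction decays to zero; for beta=0.8 (R0 = 2) it settles near the endemic level mu*(R0 - 1)/beta = 0.125.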
Wu, Dan; Faria, Andreia V; Younes, Laurent; Mori, Susumu; Brown, Timothy; Johnson, Hans; Paulsen, Jane S; Ross, Christopher A; Miller, Michael I
2017-10-01
Huntington's disease (HD) is an autosomal dominant neurodegenerative disorder that progressively affects motor, cognitive, and emotional functions. Structural MRI studies have demonstrated brain atrophy beginning many years prior to clinical onset ("premanifest" period), but the order and pattern of brain structural changes have not been fully characterized. In this study, we investigated brain regional volumes and diffusion tensor imaging (DTI) measurements in premanifest HD, and we aim to determine (1) the extent of MRI changes in a large number of structures across the brain by atlas-based analysis, and (2) the initiation points of structural MRI changes in these brain regions. We adopted a novel multivariate linear regression model to detect the inflection points at which the MRI changes begin (namely, "change-points"), with respect to the CAG-age product (CAP, an indicator of extent of exposure to the effects of CAG repeat expansion). We used approximately 300 T1-weighted and DTI data from premanifest HD and control subjects in the PREDICT-HD study, with atlas-based whole brain segmentation and change-point analysis. The results indicated a distinct topology of structural MRI changes: the change-points of the volumetric measurements suggested a central-to-peripheral pattern of atrophy from the striatum to the deep white matter; and the change-points of DTI measurements indicated the earliest changes in mean diffusivity in the deep white matter and posterior white matter. While interpretation needs to be cautious given the cross-sectional nature of the data, these findings suggest a spatial and temporal pattern of spread of structural changes within the HD brain. Hum Brain Mapp 38:5035-5050, 2017. © 2017 Wiley Periodicals, Inc.
Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kutz, J Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.
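The DMD step the abstract refers to can be sketched in a few lines. This is a minimal "exact DMD" on snapshots of a known linear system, not the authors' Koopman control framework; the matrix A, snapshot count, and function names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of dynamic mode decomposition (DMD): given snapshot
# matrices X and Y with Y = A X, approximate the eigenvalues of A by
# projecting onto the leading r POD modes of X. Illustrative only.
def dmd_eigs(X, Y, r):
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]          # rank-r truncation
    Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(Atilde)

# Usage: snapshots of a known linear map x_{k+1} = A x_k; DMD should
# recover the eigenvalues 0.9 and 0.5 of A.
A = np.array([[0.9, 0.2],
              [0.0, 0.5]])
X = np.zeros((2, 50))
X[:, 0] = [1.0, 1.0]
for k in range(49):
    X[:, k + 1] = A @ X[:, k]
lam = np.sort(dmd_eigs(X[:, :-1], X[:, 1:], r=2).real)
```

For nonlinear systems, the same machinery is applied to a dictionary of nonlinear observables of the state rather than the raw state, which is exactly the subspace-selection question the paper addresses.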
A NEW METHOD FOR DERIVING THE STELLAR BIRTH FUNCTION OF RESOLVED STELLAR POPULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gennaro, M.; Brown, T. M.; Gordon, K. D.
We present a new method for deriving the stellar birth function (SBF) of resolved stellar populations. The SBF (stars born per unit mass, time, and metallicity) is the combination of the initial mass function (IMF), the star formation history (SFH), and the metallicity distribution function (MDF). The framework of our analysis is that of Poisson Point Processes (PPPs), a class of statistical models suitable when dealing with points (stars) in a multidimensional space (the measurement space of multiple photometric bands). The theory of PPPs easily accommodates the modeling of measurement errors as well as that of incompleteness. Our method avoids binning stars in the color–magnitude diagram and uses the whole likelihood function for each data point; combining the individual likelihoods allows the computation of the posterior probability for the population's SBF. Within the proposed framework it is possible to include nuisance parameters, such as distance and extinction, by specifying their prior distributions and marginalizing over them. The aim of this paper is to assess the validity of this new approach under a range of assumptions, using only simulated data. Forthcoming work will show applications to real data. Although it has a broad scope of possible applications, we have developed this method to study multi-band Hubble Space Telescope observations of the Milky Way Bulge. Therefore we will focus on simulations with characteristics similar to those of the Galactic Bulge.
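As a toy illustration of the model class the paper builds on (not its SBF machinery), an inhomogeneous Poisson point process can be sampled by thinning a dominating homogeneous process. The 1D domain, rate function, and names below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: sampling an inhomogeneous Poisson point process
# on [0, 1] with intensity rate_fn(x) <= rate_max by thinning. Candidate
# points are drawn from a homogeneous process with intensity rate_max,
# then each is kept with probability rate_fn(x) / rate_max.
def sample_ppp(rate_fn, rate_max, rng):
    n = rng.poisson(rate_max)                 # candidate count on unit domain
    x = rng.uniform(0.0, 1.0, n)              # candidate locations
    keep = rng.uniform(0.0, rate_max, n) < rate_fn(x)
    return x[keep]

# Usage: intensity 200*x gives an expected count of 100 points,
# concentrated toward x = 1.
rng = np.random.default_rng(0)
pts = sample_ppp(lambda x: 200.0 * x, 200.0, rng)
```

In the paper's setting the "points" are stars in a multi-band photometric measurement space, and the intensity is the SBF convolved with stellar models, errors, and incompleteness.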
NASA Astrophysics Data System (ADS)
Ren, Qianyu; Li, Junhong; Hong, Yingping; Jia, Pinggang; Xiong, Jijun
2017-09-01
A new demodulation algorithm for the fiber-optic Fabry-Perot cavity length based on the phase generated carrier (PGC) technique is proposed in this paper, which can be applied in a high-temperature pressure sensor. The new algorithm, based on the arc tangent function, uses an optical system that outputs two orthogonal signals and is implemented on a field-programmable gate array (FPGA) to overcome the range limit of the original PGC arc tangent demodulation algorithm. Simulation and analysis are also carried out. According to the analysis of demodulation speed and precision, simulations with different numbers of sampling points, and measurement results from the pressure sensor, the arc tangent demodulation method gives good results: a processing speed of 1 MHz per datum and less than 1% error, demonstrating its practical feasibility for fiber-optic Fabry-Perot cavity length demodulation in the Fabry-Perot high-temperature pressure sensor.
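The core of arc tangent PGC demodulation, and the range limit it must overcome, can be sketched as follows. This is an illustrative numerical model (synthetic signals, no optics or FPGA), not the authors' implementation; the phase ramp and sample count are assumptions.

```python
import numpy as np

# Illustrative sketch of arctangent phase demodulation: two orthogonal
# (quadrature) signals I ~ cos(phi) and Q ~ sin(phi) determine phi only
# modulo 2*pi via arctan; unwrapping the 2*pi jumps recovers a phase
# excursion far beyond the principal range, which sets the cavity length.
t = np.linspace(0.0, 1.0, 2000)
phi_true = 6.0 * np.pi * t               # phase ramp, well beyond (-pi, pi]
i_sig = np.cos(phi_true)                 # in-phase component
q_sig = np.sin(phi_true)                 # quadrature component
phi_wrapped = np.arctan2(q_sig, i_sig)   # wrapped to (-pi, pi]
phi = np.unwrap(phi_wrapped)             # remove 2*pi discontinuities
```

In a real PGC system the quadrature pair is produced by mixing the interferometer output with the carrier and its second harmonic; here it is generated directly for clarity.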
Probing the statistical properties of CMB B-mode polarization through Minkowski functionals
NASA Astrophysics Data System (ADS)
Santos, Larissa; Wang, Kai; Zhao, Wen
2016-07-01
The detection of the magnetic-type B-mode polarization is the main goal of future cosmic microwave background (CMB) experiments. In the standard model, the B-mode map is a strongly non-Gaussian field due to the CMB lensing component. Besides the two-point correlation function, other statistics are also very important for extracting the information in the polarization map. In this paper, we employ the Minkowski functionals to study the morphological properties of lensed B-mode maps. We find that the deviations from Gaussianity are very significant for both full-sky and partial-sky surveys. As an application of the analysis, we investigate the morphological imprints of foreground residuals in the B-mode map. We find that even very tiny foreground residuals have effects on the map that can be detected by the Minkowski functional analysis. Therefore, it provides a complementary way to investigate foreground contamination in CMB studies.
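The simplest of the Minkowski functionals, the area fraction V0 of the excursion set above a threshold, can be estimated directly from a pixelized map and compared with its Gaussian expectation. The sketch below uses a flat-sky white-noise map as a stand-in; the map size, estimator, and names are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from math import erfc, sqrt

# Illustrative sketch: the first Minkowski functional V0(nu), the area
# fraction of pixels where the normalized field exceeds threshold nu,
# compared with its analytic Gaussian expectation. Deviations between
# the two are a signature of non-Gaussianity (e.g. from CMB lensing).
def v0(field, nu):
    x = (field - field.mean()) / field.std()   # normalize to zero mean, unit variance
    return float(np.mean(x > nu))

def v0_gaussian(nu):
    return 0.5 * erfc(nu / sqrt(2.0))          # Gaussian-field prediction

rng = np.random.default_rng(0)
gmap = rng.standard_normal((512, 512))         # Gaussian test map
```

The full analysis also uses the boundary length V1 and Euler characteristic V2 of the excursion set, which require gradient and curvature estimates of the map.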
Green's functions for analysis of dynamic response of wheel/rail to vertical excitation
NASA Astrophysics Data System (ADS)
Mazilu, Traian
2007-09-01
An analytical model to simulate wheel/rail interaction using the Green's function method is proposed in this paper. The model consists of a moving wheel on a discretely supported rail. For this model of the rail, the bending and longitudinal displacements are coupled through the rail pad, and a complex model of the rail pad is adopted. An efficient method for a time-domain analysis of wheel/rail interaction is presented. The method is based on the properties of the rail's Green functions; starting from these functions, a track Green matrix is assembled for numerical simulations of the wheel/rail response to three kinds of vertical excitation: steady-state interaction, rail corrugation, and wheel flat. The study points to the influence of the worn rail (rigid contact) on the variation in the wheel/rail contact force. The concept of a pinned-pinned inhibitive rail pad is also presented.
ERIC Educational Resources Information Center
Touval, Ayana
1992-01-01
Introduces the concept of maximum and minimum function values as turning points on the function's graphical representation and presents a method for finding these values without using calculus. The process of utilizing transformations to find the turning point of a quadratic function is extended to find the turning points of cubic functions. (MDH)
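For the quadratic case described in the article, the calculus-free route is completing the square: f(x) = a*x^2 + b*x + c rewrites as a*(x - h)^2 + k, from which the turning point (h, k) is read off directly. A minimal sketch (variable names are my own, not the article's):

```python
# Completing the square: f(x) = a*x**2 + b*x + c = a*(x - h)**2 + k,
# so the turning point is (h, k) with h = -b/(2a) and k = f(h),
# obtained without any derivative.
def turning_point(a, b, c):
    h = -b / (2.0 * a)            # axis of symmetry
    k = c - b * b / (4.0 * a)     # f(h), the extreme value
    return h, k
```

For example, f(x) = x^2 - 4x + 7 = (x - 2)^2 + 3 has its minimum at (2, 3); the sign of a tells whether the turning point is a minimum (a > 0) or a maximum (a < 0).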
Meshless deformable models for 3D cardiac motion and strain analysis from tagged MRI.
Wang, Xiaoxu; Chen, Ting; Zhang, Shaoting; Schaerer, Joël; Qian, Zhen; Huh, Suejung; Metaxas, Dimitris; Axel, Leon
2015-01-01
Tagged magnetic resonance imaging (TMRI) provides a direct and noninvasive way to visualize the in-wall deformation of the myocardium. Because of the through-plane motion, tracking the 3D trajectories of the material points and computing the 3D strain field necessitate building 3D cardiac deformable models. The intersections of three stacks of orthogonal tagging planes are material points in the myocardium. With these intersections as control points, 3D motion can be reconstructed with a novel meshless deformable model (MDM). Volumetric MDMs describe an object as a point cloud inside the object boundary, and the coordinates of each point can be written as parametric functions. A generic heart mesh is registered on the TMRI with polar decomposition. A 3D MDM is generated and deformed with MR image tagging lines. Volumetric MDMs are deformed by calculating the dynamics function and minimizing the local Laplacian coordinates. The similarity transformation of each point is computed by assuming that its neighboring points undergo the same transformation. The deformation is computed iteratively until the control points match the target positions in the consecutive image frame. The 3D strain field is computed from the 3D displacement field with moving least squares. We demonstrate that MDMs outperform the finite element method and the spline method on a numerical phantom. Meshless deformable models can track the trajectory of any material point in the myocardium and compute the 3D strain field of any particular area. The experimental results on in vivo healthy and patient heart MRI show that the MDM can fully recover the myocardium motion in three dimensions. Copyright © 2014. Published by Elsevier Inc.
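The moving least squares step, deriving a smooth strain (derivative) field from scattered displacement data, can be illustrated in 1D. This is a toy sketch under my own assumptions (1D linear basis, Gaussian weights), not the authors' 3D pipeline.

```python
import numpy as np

# Illustrative 1D moving least squares (MLS): at each query point x0,
# fit a weighted linear model to nearby samples; the fitted slope is a
# smooth derivative estimate, analogous to computing strain from a
# displacement field.
def mls_fit(xs, ys, x0, h=0.5):
    w = np.exp(-((xs - x0) / h) ** 2)               # Gaussian weights
    A = np.column_stack([np.ones_like(xs), xs - x0])  # linear basis at x0
    AtW = A.T * w                                    # A.T @ diag(w)
    coef = np.linalg.solve(AtW @ A, AtW @ ys)
    return coef[0], coef[1]                          # (value, derivative) at x0

# Usage: displacement u(x) = 0.1*x has uniform strain du/dx = 0.1.
xs = np.linspace(0.0, 1.0, 21)
u = 0.1 * xs
val, strain = mls_fit(xs, u, 0.5)
```

The weighted fit makes the derivative estimate robust to noise in the tracked control points, which is why MLS is preferred over direct finite differencing of scattered data.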
Meshless deformable models for 3D cardiac motion and strain analysis from tagged MRI
Wang, Xiaoxu; Chen, Ting; Zhang, Shaoting; Schaerer, Joël; Qian, Zhen; Huh, Suejung; Metaxas, Dimitris; Axel, Leon
2016-01-01
Tagged magnetic resonance imaging (TMRI) provides a direct and noninvasive way to visualize the in-wall deformation of the myocardium. Because of the through-plane motion, tracking the 3D trajectories of the material points and computing the 3D strain field necessitate building 3D cardiac deformable models. The intersections of three stacks of orthogonal tagging planes are material points in the myocardium. With these intersections as control points, 3D motion can be reconstructed with a novel meshless deformable model (MDM). Volumetric MDMs describe an object as a point cloud inside the object boundary, and the coordinates of each point can be written as parametric functions. A generic heart mesh is registered on the TMRI with polar decomposition. A 3D MDM is generated and deformed with MR image tagging lines. Volumetric MDMs are deformed by calculating the dynamics function and minimizing the local Laplacian coordinates. The similarity transformation of each point is computed by assuming that its neighboring points undergo the same transformation. The deformation is computed iteratively until the control points match the target positions in the consecutive image frame. The 3D strain field is computed from the 3D displacement field with moving least squares. We demonstrate that MDMs outperform the finite element method and the spline method on a numerical phantom. Meshless deformable models can track the trajectory of any material point in the myocardium and compute the 3D strain field of any particular area. The experimental results on in vivo healthy and patient heart MRI show that the MDM can fully recover the myocardium motion in three dimensions. PMID:25157446
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2014-01-01
To eliminate the need for finite-element modeling in structure shape predictions, a new method was invented. The method uses Displacement Transfer Functions to transform measured surface strains into deflections for mapping out overall structural deformed shapes. The Displacement Transfer Functions are expressed in terms of rectilinearly distributed surface strains and contain no material properties. This report applies the patented method to the shape predictions of non-symmetrically loaded slender curved structures with different curvatures up to a full circle. Because measured surface strains were not available, finite-element analysis had to be used to analytically generate the surface strains. Previously formulated straight-beam Displacement Transfer Functions were modified by introducing curvature-effect correction terms. Through single-point or dual-point collocations with finite-element-generated deflection curves, functional forms of the curvature-effect correction terms were empirically established. The resulting modified Displacement Transfer Functions can then provide quite accurate shape predictions. Also, the uniform straight-beam Displacement Transfer Function was applied to the shape predictions of a section cut of a generic capsule (GC) outer curved sandwich wall. The resulting GC shape predictions are quite accurate in the partial regions where the radius of curvature does not change sharply.
NASA Astrophysics Data System (ADS)
Ahn, Woo Sang; Park, Sung Ho; Jung, Sang Hoon; Choi, Wonsik; Do Ahn, Seung; Shin, Seong Soo
2014-06-01
The purpose of this study is to determine the radial dose function of an HDR 192Ir source based on Monte Carlo simulation using an elliptic cylindrical phantom, similar to the realistic shape of the pelvis, for brachytherapy dosimetric studies. The elliptic phantom size and shape were determined by analyzing the dimensions of the pelvis on CT images of 20 patients treated with brachytherapy for cervical cancer. The radial dose function obtained using the elliptic cylindrical water phantom was compared with radial dose functions for different spherical phantom sizes, including the Williamson data loaded into a conventional planning system. The differences in the radial dose function for the different spherical water phantoms increase with radial distance, r, and the largest differences appear for the smallest phantom size. The radial dose function of the elliptic cylindrical phantom significantly decreased with radial distance in the vertical direction, due to the different scatter conditions, in comparison with the Williamson data. Considering doses to the ICRU rectum and bladder points, doses to reference points can be underestimated by up to 1-2% at distances from 3 to 6 cm. The radial dose function in this study could be used as realistic data for brachytherapy dosimetry calculations for cervical cancer.
NASA Astrophysics Data System (ADS)
Wang, Zhihui; Bao, Lin; Tong, Binggang
2009-12-01
This paper investigates the variation of the stagnation point heat flux for hypersonic pointed bodies from continuum to rarefied flow states, using theoretical analysis and numerical simulation methods. Newly developed near-space hypersonic cruise vehicles have sharp noses and wingtips, which demand accurate and relatively simple methods to estimate the stagnation point heat flux. As the curvature radius of the leading edge decreases, the flow gradually becomes rarefied, and viscous interaction effects and rarefied gas effects appear successively; as a result, the classical Fay-Riddell equation, derived under the continuum hypothesis, becomes invalid, and the variation of the stagnation point heat flux follows a new trend. The heat flux approaches the free molecular flow limit, instead of an infinite value, as the curvature radius of the leading edge tends to 0. The physical mechanism behind this phenomenon remains in need of theoretical study. First, because the whole flow regime can be described by the Boltzmann equation, the continuum and rarefied flows are analyzed within a uniform framework. A relationship is established between the insufficiency of molecular collisions in rarefied flow and the failure of Fourier's heat conduction law, along with the increasing significance of the nonlinear heat flux. Then, drawing inspiration from the Burnett approximation, the controlling factors are identified and a specific heat flux expression containing the nonlinear term is designed for the stagnation region of a hypersonic leading edge. Together with flow pattern analysis, the ratio of nonlinear to linear heat flux, Wr, is obtained theoretically as a parameter reflecting the influence of nonlinear factors, i.e., a criterion for classifying hypersonic rarefied flows. Ultimately, based on the characteristic parameter Wr, a bridge function with a physical background is constructed, which predicts reasonable results in good agreement with DSMC and experimental data over the whole flow regime.
NASA Astrophysics Data System (ADS)
Uchidate, M.
2018-09-01
In this study, with the aim of establishing systematic knowledge of the impact of summit extraction methods and stochastic model selection in rough contact analysis, the contact area ratio (Ar/Aa) obtained by statistical contact models with different summit extraction methods was compared with a direct simulation using the boundary element method (BEM). Fifty areal topography datasets with different autocorrelation functions, in terms of the power index and correlation length, were used for the investigation. The non-causal 2D auto-regressive model, which can generate datasets with specified parameters, was employed in this research. Three summit extraction methods, Nayak's theory, 8-point analysis, and watershed segmentation, were examined. With regard to the stochastic model, Bhushan's model and the BGT (Bush-Gibson-Thomas) model were applied. The values of Ar/Aa from the stochastic models tended to be smaller than those from BEM. The discrepancy between Bhushan's model with the 8-point analysis and BEM was slightly smaller than with Nayak's theory. The results with watershed segmentation were similar to those with the 8-point analysis. The impact of Wolf pruning on the discrepancy between the stochastic analysis and BEM was not very clear. In the case of the BGT model, which employs surface gradients, good quantitative agreement with BEM was obtained when Nayak's bandwidth parameter was large.
Kandala, Sridhar; Nolan, Dan; Laumann, Timothy O.; Power, Jonathan D.; Adeyemo, Babatunde; Harms, Michael P.; Petersen, Steven E.; Barch, Deanna M.
2016-01-01
Abstract Like all resting-state functional connectivity data, the data from the Human Connectome Project (HCP) are adversely affected by structured noise artifacts arising from head motion and physiological processes. Functional connectivity estimates (Pearson's correlation coefficients) were inflated for high-motion time points and for high-motion participants. This inflation occurred across the brain, suggesting the presence of globally distributed artifacts. The degree of inflation was further increased for connections between nearby regions compared with distant regions, suggesting the presence of distance-dependent spatially specific artifacts. We evaluated several denoising methods: censoring high-motion time points, motion regression, the FMRIB independent component analysis-based X-noiseifier (FIX), and mean grayordinate time series regression (MGTR; as a proxy for global signal regression). The results suggest that FIX denoising reduced both types of artifacts, but left substantial global artifacts behind. MGTR significantly reduced global artifacts, but left substantial spatially specific artifacts behind. Censoring high-motion time points resulted in a small reduction of distance-dependent and global artifacts, eliminating neither type. All denoising strategies left differences between high- and low-motion participants, but only MGTR substantially reduced those differences. Ultimately, functional connectivity estimates from HCP data showed spatially specific and globally distributed artifacts, and the most effective approach to address both types of motion-correlated artifacts was a combination of FIX and MGTR. PMID:27571276
Complete N-point superstring disk amplitude II. Amplitude and hypergeometric function structure
NASA Astrophysics Data System (ADS)
Mafra, Carlos R.; Schlotterer, Oliver; Stieberger, Stephan
2013-08-01
Using the pure spinor formalism of part I (Mafra et al., preprint [1]) we compute the complete tree-level amplitude of N massless open strings and find a strikingly simple and compact form in terms of minimal building blocks: the full N-point amplitude is expressed as a sum over (N-3)! Yang-Mills partial subamplitudes, each multiplying a multiple Gaussian hypergeometric function. While the former capture the space-time kinematics of the amplitude, the latter encode the string effects. This result disguises a lot of structure linking aspects of gauge amplitudes, such as color and kinematics, with properties of generalized Euler integrals. In this part II the structure of the multiple hypergeometric functions is analyzed in detail: their relations to monodromy equations and their minimal basis structure are discussed, and methods to determine their poles and transcendentality properties are proposed. Finally, a Gröbner basis analysis provides independent sets of rational functions in the Euler integrals. In contrast to [1], here we use momenta redefined by a factor of i; as a consequence the signs of the kinematic invariants are flipped, e.g. s_ij → -s_ij.
Negahban, Hossein; Behtash, Zeinab; Sohani, Soheil Mansour; Salehi, Reza
2015-01-01
To identify the ability of the Persian versions of the Shoulder Pain and Disability Index (SPADI) and the Disabilities of the Arm, Shoulder, and Hand (DASH) to detect changes in shoulder function following physiotherapy intervention (i.e. responsiveness) and to determine the change score that indicates a meaningful change in the functional ability of the patient (i.e. the Minimal Clinically Important Difference (MCID)). A convenience sample of 200 Persian-speaking patients with shoulder disorders completed the SPADI and the DASH at baseline and again 4 weeks after physiotherapy intervention. Furthermore, patients were asked to rate their global rating of shoulder function at follow-up. Responsiveness was evaluated using two methods: the receiver operating characteristic (ROC) method and correlation analysis. Two useful statistics extracted from the ROC method are the area under the curve (AUC) and the optimal cutoff point, known as the MCID. Both the SPADI and the DASH showed AUCs greater than 0.70 (AUC range = 0.77-0.82). The best cutoff points (or change scores) for the SPADI-total, SPADI-pain, SPADI-disability and the DASH were 14.88, 26.36, 23.86, and 25.41, respectively. Additionally, moderate to good correlations (Gamma = -0.51 to -0.58) were found between changes in the SPADI/DASH and changes in the global rating scale. The Persian SPADI and DASH have adequate responsiveness to clinical changes in patients with shoulder disorders. Moreover, the MCIDs obtained in this study will help clinicians and researchers to determine whether a Persian-speaking patient with a shoulder disorder has experienced a true change following a physiotherapy intervention. Implications for Rehabilitation: Responsiveness was evaluated using two methods, the receiver operating characteristic (ROC) method and correlation analysis. The Persian SPADI and DASH can be used as two responsive instruments in both clinical practice and research settings.
The MCIDs of 14.88 and 25.41 points obtained for the SPADI-total and DASH indicate that change scores of at least 14.88 points on the SPADI-total and 25.41 points on the DASH are necessary to be certain that a true change has occurred following a physiotherapy intervention.
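The two ROC statistics used in the study, the AUC and an optimal cutoff, can be computed directly from change scores. The sketch below (with made-up scores, not the study's data) uses the Mann-Whitney formulation of the AUC and picks the cutoff maximizing Youden's J; the abstract does not state which cutoff criterion the study actually used:

```python
def auc_mann_whitney(improved, stable):
    """AUC = P(change score of an improved patient > that of a stable one),
    with ties counted as half."""
    wins = sum((c > s) + 0.5 * (c == s) for c in improved for s in stable)
    return wins / (len(improved) * len(stable))

def best_cutoff(improved, stable):
    """Change score maximizing Youden's J = sensitivity + specificity - 1."""
    def youden(t):
        sens = sum(c >= t for c in improved) / len(improved)
        spec = sum(s < t for s in stable) / len(stable)
        return sens + spec - 1
    return max(sorted(set(improved) | set(stable)), key=youden)

# Hypothetical change scores for illustration only.
improved = [20, 25, 30, 35]
stable = [5, 8, 10, 12]
```

With perfectly separated groups, as in this toy example, the AUC is 1 and the Youden-optimal cutoff sits at the smallest "improved" score.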
Renosh, P R; Schmitt, Francois G; Loisel, Hubert
2015-01-01
Satellite remote sensing observations allow the ocean surface to be sampled synoptically over large spatio-temporal scales. The images provided by visible and thermal infrared satellite observations are widely used in physical, biological, and ecological oceanography. The present work proposes a method for understanding the multi-scaling properties of satellite products such as Chlorophyll-a (Chl-a) and Sea Surface Temperature (SST), which have rarely been studied in this respect. The specific objective of this study is to show how the small-scale heterogeneities of satellite images can be characterised using tools borrowed from the field of turbulence. For that purpose, we show how the structure function, classically used in the framework of scaling time series analysis, can also be used in 2D. The main advantage of this method is that it can be applied to images with missing data. Based on both simulated and real images, we demonstrate that coarse-graining (CG) of a gradient modulus transform of the original image does not provide correct scaling exponents. We show, using a 2D fractional Brownian simulation, that the structure function (SF) can be used with randomly sampled pairs of points, and verify that one million pairs of points provide sufficient statistics.
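The pair-sampling idea is straightforward to sketch: draw random pixel pairs, drop any pair touching missing data, and bin |ΔI|^q by pair separation. The following is a minimal illustration on a synthetic ramp image with a NaN hole, not the authors' code:

```python
import numpy as np

def structure_function_2d(img, q=2.0, n_pairs=200_000, rng=None):
    """Estimate the order-q structure function S_q(r) = <|I(x2) - I(x1)|^q>
    from randomly sampled pixel pairs, skipping pairs that hit missing
    (NaN) data, and binning by pair separation r.

    Returns an array of rows (S_q estimate, mean r) per occupied log bin."""
    rng = np.random.default_rng(rng)
    ny, nx = img.shape
    i1 = rng.integers(0, ny, n_pairs); j1 = rng.integers(0, nx, n_pairs)
    i2 = rng.integers(0, ny, n_pairs); j2 = rng.integers(0, nx, n_pairs)
    v1, v2 = img[i1, j1], img[i2, j2]
    ok = ~(np.isnan(v1) | np.isnan(v2))          # missing data simply skipped
    r = np.hypot(i2 - i1, j2 - j1)[ok]
    dq = np.abs(v2 - v1)[ok] ** q
    edges = np.logspace(0.0, np.log10(r.max()), 16)   # log-spaced r bins
    idx = np.digitize(r, edges)
    return np.array([(dq[idx == k].mean(), r[idx == k].mean())
                     for k in range(1, len(edges)) if np.any(idx == k)])

# Synthetic image: linear ramp with a block of missing data.
img = np.tile(np.arange(128.0), (128, 1))
img[40:50, 60:70] = np.nan
sf = structure_function_2d(img, rng=0)
```

For the ramp, increments grow with separation, so S_2 increases monotonically across the bins despite the NaN hole.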
Bivariate normal, conditional and rectangular probabilities: A computer program with applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.; Ashworth, G. R.; Winter, W. R.
1980-01-01
Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given: routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
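The original FORTRAN routines are not reproduced here. As a sketch of the underlying computation, the bivariate normal CDF can be obtained by integrating the conditional-normal formula, and rectangular probabilities then follow by inclusion-exclusion (the sketch assumes standardized margins and |rho| < 1):

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bvn_cdf(h, k, rho, n=4000):
    """P(X <= h, Y <= k) for a standard bivariate normal with correlation rho,
    via P = integral_{-inf}^{h} phi(t) * Phi((k - rho*t)/sqrt(1-rho^2)) dt,
    evaluated by the trapezoid rule with the lower tail truncated at -8."""
    lo = -8.0
    if h < lo:
        return 0.0
    s = math.sqrt(1 - rho * rho)
    dt = (h - lo) / n
    total = 0.0
    for i in range(n + 1):
        t = lo + i * dt
        w = 0.5 if i in (0, n) else 1.0
        total += w * phi(t) * Phi((k - rho * t) / s)
    return total * dt

def bvn_rect(a1, b1, a2, b2, rho):
    """P(a1 < X <= b1, a2 < Y <= b2) by inclusion-exclusion."""
    return (bvn_cdf(b1, b2, rho) - bvn_cdf(a1, b2, rho)
            - bvn_cdf(b1, a2, rho) + bvn_cdf(a1, a2, rho))
```

A convenient check is the closed form P(X <= 0, Y <= 0) = 1/4 + arcsin(rho)/(2*pi), which the quadrature reproduces to high accuracy.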
Reports 5. The Yugoslav Serbo-Croatian-English Contrastive Project.
ERIC Educational Resources Information Center
Filipovic, Rudolf, Ed.
The fifth volume of this series contains ten articles dealing with various aspects of Serbo-Croatian-English contrastive analysis. They are: "On the Word Order of Subject and Predicate in English and Serbo-Croatian from the Point of View of Functional Sentence Perspective," by Ljiljana Bibovic; "The English Personal Pronouns and Their…
USDA-ARS?s Scientific Manuscript database
Resource-poor countries and regions require effective, low-cost diagnostic devices for accurate identification and diagnosis of health conditions. Optical detection technologies used for many types of biological and clinical analysis can play a significant role in addressing this need, but must be s...
Infant Stimulation and the Etiology of Cognitive Processes.
ERIC Educational Resources Information Center
Fowler, William
What data, problems, and concepts are most relevant in determining the role of stimulation in human development? A critical analysis of the relationships between long term stimulation, behavior, and cognitive functioning and development points up biases and gaps in past as well as contemporary approaches. Each of the four sections of this paper…
Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain
ERIC Educational Resources Information Center
Hannagan, Thomas; Grainger, Jonathan
2012-01-01
It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jakel, Scholkopf, & Wichmann, 2009). We point out that "String kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal…
Childers, A B; Walsh, B
1996-07-23
Preharvest food safety is essential for the protection of our food supply. The production and transport of livestock and poultry play an integral part in the safety of these food products. The goals of this safety assurance include freedom from pathogenic microorganisms, disease, and parasites, and from potentially harmful residues and physical hazards. Its functions should be based on hazard analysis and critical control points from producer to slaughter plant with emphasis on prevention of identifiable hazards rather than on removal of contaminated products. The production goal is to minimize infection and ensure freedom from potentially harmful residues and physical hazards. The marketing goal is control of exposure to pathogens and stress. Both groups should have functional hazard analysis and critical control points management programs which include personnel training and certification of producers. These programs must cover production procedures, chemical usage, feeding, treatment practices, drug usage, assembly and transportation, and animal identification. Plans must use risk assessment principles, and the procedures must be defined. Other elements would include preslaughter certification, environmental protection, control of chemical hazards, live-animal drug-testing procedures, and identification of physical hazards.
Clustering and Network Analysis of Reverse Phase Protein Array Data.
Byron, Adam
2017-01-01
Molecular profiling of proteins and phosphoproteins using a reverse phase protein array (RPPA) platform, with a panel of target-specific antibodies, enables the parallel, quantitative proteomic analysis of many biological samples in a microarray format. Hence, RPPA analysis can generate a high volume of multidimensional data that must be effectively interrogated and interpreted. A range of computational techniques for data mining can be applied to detect and explore data structure and to form functional predictions from large datasets. Here, two approaches for the computational analysis of RPPA data are detailed: the identification of similar patterns of protein expression by hierarchical cluster analysis and the modeling of protein interactions and signaling relationships by network analysis. The protocols use freely available, cross-platform software, are easy to implement, and do not require any programming expertise. Serving as data-driven starting points for further in-depth analysis, validation, and biological experimentation, these and related bioinformatic approaches can accelerate the functional interpretation of RPPA data.
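As an illustration of the hierarchical clustering step described above, the SciPy-based sketch below clusters a synthetic matrix standing in for RPPA signal intensities (rows = samples, columns = antibodies). It is a generic average-linkage example under assumed data, not the protocol's specific software:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic "RPPA" matrix: two well-separated sample groups, 8 antibodies each.
rng = np.random.default_rng(1)
group_a = rng.normal(0.0, 0.1, size=(5, 8))
group_b = rng.normal(2.0, 0.1, size=(5, 8))
data = np.vstack([group_a, group_b])

# Agglomerative clustering with average linkage on Euclidean distances,
# then cut the dendrogram into two flat clusters.
Z = linkage(data, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")
```

In practice one would cluster both samples and antibodies (a two-way clustermap) and inspect the dendrogram rather than fixing the cluster count in advance.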
Bhaya, Amit; Kaszkurewicz, Eugenius
2004-01-01
It is pointed out that the so called momentum method, much used in the neural network literature as an acceleration of the backpropagation method, is a stationary version of the conjugate gradient method. Connections with the continuous optimization method known as heavy ball with friction are also made. In both cases, adaptive (dynamic) choices of the so called learning rate and momentum parameters are obtained using a control Liapunov function analysis of the system.
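The momentum (heavy-ball) update discussed here is easy to state for a quadratic objective f(x) = ½xᵀAx − bᵀx; with fixed learning rate α and momentum β it is exactly the stationary iteration below. The parameter values are illustrative, chosen for the example's eigenvalues (1 and 10):

```python
import numpy as np

def heavy_ball(A, b, alpha, beta, iters=500):
    """Gradient descent with momentum on f(x) = 0.5*x.T@A@x - b.T@x:
    x_{k+1} = x_k - alpha * grad f(x_k) + beta * (x_k - x_{k-1})."""
    x = np.zeros_like(b)
    x_prev = x.copy()
    for _ in range(iters):
        grad = A @ x - b
        x, x_prev = x - alpha * grad + beta * (x - x_prev), x
    return x

# Classical stationary choice for eigenvalue range [mu, L] = [1, 10]:
# alpha ~ 4/(sqrt(L)+sqrt(mu))^2, beta ~ ((sqrt(L)-sqrt(mu))/(sqrt(L)+sqrt(mu)))^2.
x_star = heavy_ball(np.diag([1.0, 10.0]), np.array([1.0, 1.0]),
                    alpha=0.231, beta=0.27)
```

Replacing the fixed α and β with per-step optimal values recovers the conjugate gradient method on quadratics, which is the connection the paper makes precise.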
[On the issues of functioning of the clinic of research institute of balneology].
2012-01-01
The article presents the results of an analysis of the effectiveness with which the main resources are applied in organizing and supporting the quality of diagnostic and medical care for patients in the clinic of a balneology research institute. The results point to insufficient effectiveness in the application of these resources and to defects in the organization and quality of curative, diagnostic, and rehabilitation care. These findings determine the priority directions for enhancing the functioning of the institution as the clinical base of a balneology research institute.
Spatial correlation of hydrometeor occurrence, reflectivity, and rain rate from CloudSat
NASA Astrophysics Data System (ADS)
Marchand, Roger
2012-03-01
This paper examines the along-track vertical and horizontal structure of hydrometeor occurrence, reflectivity, and column rain rate derived from CloudSat. The analysis assumes hydrometeor statistics in a given region are horizontally invariant, with the probability of hydrometeor co-occurrence obtained simply by determining the relative frequency at which hydrometeors can be found at two points (which may be at different altitudes and offset by a horizontal distance, Δx). A correlation function is introduced (gamma correlation) that normalizes hydrometeor co-occurrence values to the range of 1 to -1, with a value of 0 meaning uncorrelated in the usual sense. This correlation function is a generalization of the alpha overlap parameter that has been used in recent studies to describe the overlap between cloud (or hydrometeor) layers. Examples of joint histograms of reflectivity at two points are also examined. The analysis shows that the traditional linear (or Pearson) correlation coefficient provides a useful one-to-one measure of the strength of the relationship between hydrometeor reflectivity at two points in the horizontal (that is, two points at the same altitude). While also potentially useful in the vertical direction, the relationship between reflectivity values at different altitudes is not as well described by the linear correlation coefficient. The decrease in correlation of hydrometeor occurrence and reflectivity with horizontal distance, as well as precipitation occurrence and column rain rate, can be reasonably well fit with a simple two-parameter exponential model. In this paper, the North Pacific and tropical western Pacific are examined in detail, as is the zonal dependence.
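The two-parameter exponential model mentioned, c(Δx) = c₀·exp(−Δx/L), can be fit by simple log-linear least squares. A sketch on noiseless synthetic data (the parameter values are made up, not CloudSat results):

```python
import numpy as np

# Synthetic correlation-vs-distance data following c(dx) = c0 * exp(-dx / L).
dx = np.linspace(1.0, 50.0, 25)       # horizontal separations (arbitrary units)
c0_true, L_true = 0.9, 12.0           # assumed amplitude and decorrelation length
c = c0_true * np.exp(-dx / L_true)

# Two-parameter fit via log-linear least squares: log c = log c0 - dx / L.
slope, intercept = np.polyfit(dx, np.log(c), 1)
L_fit, c0_fit = -1.0 / slope, np.exp(intercept)
```

With noisy estimates one would instead weight the fit (or use nonlinear least squares), since the log transform inflates the influence of small correlation values.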
Efficient Analysis of Mass Spectrometry Data Using the Isotope Wavelet
NASA Astrophysics Data System (ADS)
Hussong, Rene; Tholey, Andreas; Hildebrandt, Andreas
2007-09-01
Mass spectrometry (MS) has become today's de-facto standard for high-throughput analysis in proteomics research. Its applications range from toxicity analysis to MS-based diagnostics. Often, the time spent on the MS experiment itself is significantly less than the time necessary to interpret the measured signals, since the amount of data can easily exceed several gigabytes. In addition, automated analysis is hampered by baseline artifacts, chemical as well as electrical noise, and an irregular spacing of data points. Thus, filtering techniques originating from signal and image analysis are commonly employed to address these problems. Unfortunately, smoothing, base-line reduction, and in particular a resampling of data points can affect important characteristics of the experimental signal. To overcome these problems, we propose a new family of wavelet functions based on the isotope wavelet, which is hand-tailored for the analysis of mass spectrometry data. The resulting technique is theoretically well-founded and compares very well with standard peak picking tools, since it is highly robust against noise spoiling the data, but at the same time sufficiently sensitive to detect even low-abundant peptides.
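The isotope wavelet itself is not given in the abstract. As a generic stand-in, SciPy's continuous-wavelet-transform peak picker (Ricker wavelet, not the isotope wavelet) illustrates the kind of wavelet-based peak detection that avoids explicit smoothing and baseline removal:

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic "spectrum": two Gaussian peaks on a noisy baseline.
x = np.arange(600)
clean = (np.exp(-0.5 * ((x - 150) / 6.0) ** 2)
         + 0.6 * np.exp(-0.5 * ((x - 420) / 8.0) ** 2))
rng = np.random.default_rng(0)
noisy = clean + rng.normal(0.0, 0.02, x.size)

# Wavelet-based peak picking over a range of assumed peak widths.
peaks = find_peaks_cwt(noisy, widths=np.arange(4, 20))
```

The width range acts as the matched-filter scale; a purpose-built wavelet like the isotope wavelet additionally encodes the isotope-pattern shape of peptide signals.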
Towards tests of quark-hadron duality with functional analysis and spectral function data
NASA Astrophysics Data System (ADS)
Boito, Diogo; Caprini, Irinel
2017-04-01
The presence of terms that violate quark-hadron duality in the expansion of QCD Green's functions is a generally accepted fact. Recently, a new approach was proposed for the study of duality violations (DVs), which exploits the existence of a rigorous lower bound on the functional distance, measured in a certain norm, between a "true" correlator and its approximant calculated theoretically along a contour in the complex energy plane. In the present paper, we pursue the investigation of functional-analysis-based tests towards their application to real spectral function data. We derive a closed analytic expression for the minimal functional distance based on the general weighted L2 norm and discuss its relation with the distance measured in the L∞ norm. Using fake data sets obtained from a realistic toy model in which we allow for covariances inspired from the publicly available ALEPH spectral functions, we obtain, by Monte Carlo simulations, the statistical distribution of the strength parameter that measures the magnitude of the DV term added to the usual operator product expansion. The results show that, if the region with large errors near the end point of the spectrum in τ decays is excluded, the functional-analysis-based tests using either L2 or L∞ norms are able to detect, in a statistically significant way, the presence of DVs in realistic spectral function pseudodata.
NASA Astrophysics Data System (ADS)
Mori, Shintaro; Hisakado, Masato
2015-05-01
We propose a finite-size scaling analysis method for binary stochastic processes X(t) in {0,1} based on the second moment correlation length ξ of the autocorrelation function C(t). The purpose is to clarify the critical properties and provide a new data analysis method for information cascades. As a simple model to represent the different behaviors of subjects in information cascade experiments, we assume that X(t) is a mixture of an independent random variable that takes 1 with probability q and a random variable that depends on the ratio z of the variables taking 1 among the most recent r variables. We consider two types of the probability f(z) that the latter takes 1: (i) analog [f(z) = z] and (ii) digital [f(z) = θ(z - 1/2)]. We study the universal scaling functions for ξ and the integrated correlation time τ. For finite r, C(t) decays exponentially as a function of t, and there is only one stable renormalization group (RG) fixed point. In the limit r → ∞, where X(t) depends on all the previous variables, C(t) in model (i) obeys a power law, and the system becomes scale invariant. In model (ii) with q ≠ 1/2, there are two stable RG fixed points, which correspond to the ordered and disordered phases of the information cascade phase transition, with the critical exponents β = 1 and ν∥ = 2.
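The quantities ξ and τ can be estimated directly from a measured C(t). The sketch below uses one common discrete estimate of the second-moment correlation length and of the integrated correlation time; the paper's exact definitions may differ:

```python
import numpy as np

def autocorr(x, tmax):
    """Normalized autocorrelation C(t) of a 1D series, with C(0) = 1."""
    x = np.asarray(x, float) - np.mean(x)
    var = np.mean(x * x)
    return np.array([np.mean(x[:len(x) - t] * x[t:]) / var if t else 1.0
                     for t in range(tmax)])

def second_moment_xi(C):
    """One common discrete estimate: xi^2 = sum t^2 C(t) / sum C(t)."""
    t = np.arange(len(C))
    return np.sqrt(np.sum(t * t * C) / np.sum(C))

def integrated_time(C):
    """Integrated correlation time: tau = 1/2 + sum_{t>=1} C(t)."""
    return 0.5 + np.sum(C[1:])

# Exponentially decaying correlations with two decay scales, as in the
# finite-r regime described above.
C5 = np.exp(-np.arange(200) / 5.0)
C10 = np.exp(-np.arange(200) / 10.0)
```

For exponential decay both estimators grow with the decay scale, which is what makes them usable as finite-size scaling variables.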
Growth Points in Linking Representations of Function: A Research-Based Framework
ERIC Educational Resources Information Center
Ronda, Erlina
2015-01-01
This paper describes five growth points in linking representations of function developed from a study of secondary school learners. Framed within the cognitivist perspective and process-object conception of function, the growth points were identified and described based on linear and quadratic function tasks learners can do and their strategies…
Gene function in early mouse embryonic stem cell differentiation
Sene, Kagnew Hailesellasse; Porter, Christopher J; Palidwor, Gareth; Perez-Iratxeta, Carolina; Muro, Enrique M; Campbell, Pearl A; Rudnicki, Michael A; Andrade-Navarro, Miguel A
2007-01-01
Background Little is known about the genes that drive embryonic stem cell differentiation. However, such knowledge is necessary if we are to exploit the therapeutic potential of stem cells. To uncover the genetic determinants of mouse embryonic stem cell (mESC) differentiation, we have generated and analyzed 11-point time-series of DNA microarray data for three biologically equivalent but genetically distinct mESC lines (R1, J1, and V6.5) undergoing undirected differentiation into embryoid bodies (EBs) over a period of two weeks. Results We identified the initial 12 hour period as reflecting the early stages of mESC differentiation and studied probe sets showing consistent changes of gene expression in that period. Gene function analysis indicated significant up-regulation of genes related to regulation of transcription and mRNA splicing, and down-regulation of genes related to intracellular signaling. Phylogenetic analysis indicated that the genes showing the largest expression changes were more likely to have originated in metazoans. The probe sets with the most consistent gene changes in the three cell lines represented 24 down-regulated and 12 up-regulated genes, all with closely related human homologues. Whereas some of these genes are known to be involved in embryonic developmental processes (e.g. Klf4, Otx2, Smn1, Socs3, Tagln, Tdgf1), our analysis points to others (such as transcription factor Phf21a, extracellular matrix related Lama1 and Cyr61, or endoplasmic reticulum related Sc4mol and Scd2) that have not been previously related to mESC function. The majority of identified functions were related to transcriptional regulation, intracellular signaling, and cytoskeleton. Genes involved in other cellular functions important in ESC differentiation such as chromatin remodeling and transmembrane receptors were not observed in this set. 
Conclusion Our analysis profiles for the first time gene expression at a very early stage of mESC differentiation, and identifies a functional and phylogenetic signature for the genes involved. The data generated constitute a valuable resource for further studies. All DNA microarray data used in this study are available in the StemBase database of stem cell gene expression data [1] and in the NCBI's GEO database. PMID:17394647
Nealon, W H; Townsend, C M; Thompson, J C
1988-01-01
In a prospective study, 85 patients with chronic pancreatitis have been subjected to evaluation by morphologic analysis (endoscopic retrograde cholangiopancreatography), by exocrine function tests (bentiromide PABA and 72-hour fecal fat testing), and by endocrine function tests (oral glucose tolerance test and fat-stimulated release of pancreatic polypeptide). All patients were graded on a five-point system, with 1 point assessed for an abnormal result in each of the five tests performed. Zero score denoted mild disease; 1-2 points signaled moderate disease; and 3-5 points indicated severe disease. In 68 patients, both an initial and late (mean follow-up period of 14 months) evaluation were performed. Forty-one patients underwent modified Puestow side-to-side Roux-en-Y pancreaticojejunostomy. The Puestow procedure alone was performed in 18 patients. Eight patients also had drainage of pseudocysts, seven also had a biliary bypass, and eight had pseudocyst drainage plus bypass, in addition to the Puestow. There were no deaths. Of the 68 patients who were studied twice, 30 had operations and 38 did not. None of the patients with severe disease improved their grade during follow-up. Of 24 patients who did not undergo operation, 17 (71%) who were graded mild/moderate progressed to a severe grade at follow-up. By contrast, only three of the 19 patients operated on (16%) and who were initially graded as mild/moderate progressed to severe disease at follow-up testing. More than 75% of all of the patients had a history of weight loss. Twenty-six of 30 patients operated on (87%) (all of whom had lost weight before surgery) gained a mean 4.2 kg (range 1.4-2.7 kg) after surgery, compared with no significant weight change (range -3.6-2.7 kg) among patients not operated on. These findings support a policy of early operation for chronic pancreatitis, perhaps even in the absence of disabling abdominal pain. PMID:3421756
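The five-point grading scheme maps directly to code. A small sketch of the scoring rule described above (one point per abnormal result among the five tests; 0 = mild, 1-2 = moderate, 3-5 = severe):

```python
def grade_pancreatitis(abnormal_tests):
    """Severity grade from the number of abnormal results (0-5) among the
    five morphologic, exocrine, and endocrine tests."""
    if not 0 <= abnormal_tests <= 5:
        raise ValueError("expected 0-5 abnormal test results")
    if abnormal_tests == 0:
        return "mild"
    if abnormal_tests <= 2:
        return "moderate"
    return "severe"
```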
Lin, Meng-lung; Cao, Yu; Wang, Shin
2008-01-01
In this paper, the landscape patterns of the Lizejian wetland in northeastern Taiwan, China, were established by landscape indices and aerial photo interpretation, and a parallel analysis was made of them. The results showed that landscape indices could only indicate the geometric characteristics of the wetland landscape at patch and landscape levels, but could not present the spatial and functional characteristics observed from aerial photos. Combining aerial photo interpretation with landscape indices could be helpful for a holistic understanding of the Lizejian wetland's landscape structure and function, and improve landscape pattern analysis. The new method for assessing landscape structure from a holistic point of view would play an important role in future landscape ecology research.
NASA Technical Reports Server (NTRS)
Panda, J.; Roozeboom, N. H.; Ross, J. C.
2016-01-01
The recent advancement in fast-response Pressure-Sensitive Paint (PSP) allows time-resolved measurements of unsteady pressure fluctuations from a dense grid of spatial points on a wind tunnel model. This capability allows for direct calculation of the wavenumber-frequency (k-ω) spectrum of pressure fluctuations. Such data, useful for the vibro-acoustic analysis of aerospace vehicles, are difficult to obtain otherwise. For the present work, time histories of pressure fluctuations on a flat plate subjected to vortex shedding from a rectangular bluff body were measured using PSP. The light intensity levels in the photographic images were then converted to instantaneous pressure histories by applying calibration constants, which were calculated from a few dynamic pressure sensors placed at selected points on the plate. Fourier transforms of the time histories from a large number of spatial points provided k-ω spectra of the pressure fluctuations. The data provide a first glimpse of the possibility of creating detailed forcing functions for vibro-acoustic analysis of aerospace vehicles, albeit for a limited frequency range.
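Forming a wavenumber-frequency spectrum from a dense space-time pressure array is, at its core, a 2D Fourier transform. The sketch below uses a synthetic traveling wave in place of PSP data: for a wave with f₀ whole cycles in time and k₀ whole cycles in space, the spectral power concentrates at the matching (frequency, wavenumber) bins:

```python
import numpy as np

nt, nx = 64, 32                      # time samples, spatial points
f_cyc, k_cyc = 5, 3                  # whole cycles over the record
i = np.arange(nt)[:, None]           # time index
j = np.arange(nx)[None, :]           # space index

# Traveling wave p(x, t) = cos(2*pi*(f0*t - k0*x)) on the sampling grid.
p = np.cos(2 * np.pi * (f_cyc * i / nt - k_cyc * j / nx))

P = np.fft.fft2(p)                   # wavenumber-frequency decomposition
power = np.abs(P) ** 2 / (nt * nx)
peak = tuple(int(v) for v in np.unravel_index(np.argmax(power), power.shape))
```

For a real signal the power appears in a conjugate-symmetric pair of bins, here (f, nx − k) and (nt − f, k); np.fft.fftfreq maps these indices back to physical frequency and wavenumber.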
NASA Astrophysics Data System (ADS)
Yun, Wanying; Lu, Zhenzhou; Jiang, Xian
2018-06-01
To efficiently execute variance-based global sensitivity analysis, the law of total variance in successive non-overlapping intervals is first proved, on which an efficient space-partition sampling-based approach is then built in this paper. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals decreases as the number of sample points of the model input variables increases, which guarantees the convergence condition of the space-partition approach. Furthermore, a new interpretation of the idea of partitioning is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
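The core of the space-partition idea, estimating each main effect from a single sample set by applying the law of total variance over bins of one input, can be sketched as follows. The equal-count binning here is an illustrative choice, not necessarily the paper's exact scheme:

```python
import numpy as np

def main_effect(x, y, n_bins=20):
    """First-order (main effect) sensitivity index of y with respect to x,
    estimated by partitioning the sample into equal-count bins of x and
    taking S_i ~ Var(E[y | bin]) / Var(y) (law of total variance)."""
    order = np.argsort(x)
    bins = np.array_split(y[order], n_bins)
    bin_means = np.array([b.mean() for b in bins])
    return np.var(bin_means) / np.var(y)

# Test model y = x1 + 0.5*x2 with independent uniform inputs:
# analytically S1 = 0.8 and S2 = 0.2.
rng = np.random.default_rng(0)
x1 = rng.uniform(size=20_000)
x2 = rng.uniform(size=20_000)
y = x1 + 0.5 * x2

s1 = main_effect(x1, y)
s2 = main_effect(x2, y)
```

Note that the same y samples are reused for every input, which is what makes all main effects computable concurrently from one sample group.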
ROLE OF TIMING IN ASSESSMENT OF NERVE REGENERATION
BRENNER, MICHAEL J.; MORADZADEH, ARASH; MYCKATYN, TERENCE M.; TUNG, THOMAS H. H.; MENDEZ, ALLEN B.; HUNTER, DANIEL A.; MACKINNON, SUSAN E.
2014-01-01
Small animal models are indispensable for research on nerve injury and reconstruction, but their superlative regenerative potential may confound experimental interpretation. This study investigated time-dependent neuroregenerative phenomena in rodents. Forty-six Lewis rats were randomized to three nerve allograft groups treated with 2 mg/(kg day) tacrolimus; 5 mg/(kg day) Cyclosporine A; or placebo injection. Nerves were subjected to histomorphometric and walking track analysis at serial time points. Tacrolimus increased fiber density, percent neural tissue, and nerve fiber count and accelerated functional recovery at 40 days, but these differences were undetectable by 70 days. Serial walking track analysis showed a similar pattern of recovery. A ‘blow-through’ effect is observed in rodents whereby an advancing nerve front overcomes an experimental defect given sufficient time, rendering experimental groups indistinguishable at late time points. Selection of validated time points and corroboration in higher animal models are essential prerequisites for the clinical application of basic research on nerve regeneration. PMID:18381659
Environment parameters and basic functions for floating-point computation
NASA Technical Reports Server (NTRS)
Brown, W. S.; Feldman, S. I.
1978-01-01
A language-independent proposal for environment parameters and basic functions for floating-point computation is presented. Basic functions are proposed to analyze, synthesize, and scale floating-point numbers. The model provides a small set of parameters and a small set of axioms along with sharp measures of roundoff error. The parameters and functions can be used to write portable and robust codes that deal intimately with the floating-point representation. Subject to underflow and overflow constraints, a number can be scaled by a power of the floating-point radix inexpensively and without loss of precision. A specific representation for FORTRAN is included.
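Python's standard library exposes analyze/synthesize/scale primitives of exactly this kind for radix-2 floating point (math.frexp and math.ldexp), which illustrates the claim that scaling by a power of the radix is exact. This is a modern illustration, not the FORTRAN proposal itself:

```python
import math

m, e = math.frexp(0.1)        # analyze: 0.1 == m * 2**e with 0.5 <= |m| < 1
rebuilt = math.ldexp(m, e)    # synthesize the original value exactly
scaled = math.ldexp(0.1, 10)  # scale by radix**10: exact, no rounding
```

Exactness holds subject to the same underflow and overflow constraints the abstract notes.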
Research and constructive solutions on the reduction of slosh noise
NASA Astrophysics Data System (ADS)
Manta (Balas), M.; Balas, R.; Doicin, C. V.
2016-11-01
The paper presents the making of a product design addressing a "delicate issue" in the automotive industry: the slosh noise phenomenon. Although the current market shows great achievements regarding this phenomenon, the main idea of this study is to design slosh noise baffle concepts adapted to fuel tanks already in serial production in the automotive industry. Starting with internal and external research, proceeding through reverse engineering, and applying our own baffle technical solutions from conceptual sketches to 3D design, the paper presents the technical solutions identified as an alternative to developing a new fuel tank. Based on personal and academic experience, several problems and their possible answers were identified through functional analysis, in order to avoid blocking points. The idea of developing baffles adapted to existing fuel tanks led to equivalent solutions analyzed from a functional point of view. Once this stage is finished, a methodology is used to choose the optimum solution and obtain the functional design.
Points to consider in renal involvement in systemic sclerosis.
Galluccio, Felice; Müller-Ladner, Ulf; Furst, Daniel E; Khanna, Dinesh; Matucci-Cerinic, Marco
2017-09-01
This article discusses points to consider when undertaking a clinical trial to test therapy for renal involvement in SSc, not including scleroderma renal crisis. Double-blind, randomized controlled trials vs placebo or standard background therapy should be strongly considered. Inclusion criteria should consider a pre-specified range of renal functions or stratification of renal function. Gender and age limitations are probably not necessary. Concomitant medications including vasodilators, immunosuppressants and endothelin receptor antagonists and confounding illnesses such as diabetes, kidney stones, hypertension and heart failure need to be considered. A measure of renal function should be strongly considered, while time to dialysis, mortality, prevention of scleroderma renal crisis and progression of renal disease can also be considered, although they remain to be validated. Detailed, pre-planned analysis should be strongly considered and should include accounting for missing data. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Self-propagated combustion synthesis of few-layered graphene: an optical properties perspective.
Mohandoss, Manonmani; Sen Gupta, Soujit; Kumar, Ramesh; Islam, Md Rabiul; Som, Anirban; Mohd, Azhardin Ganayee; Pradeep, T; Maliyekkal, Shihabudheen M
2018-04-26
This paper describes a labour-efficient and cost-effective strategy to prepare few-layered reduced graphene oxide-like (RGOL) sheets from graphite. The self-propagated combustion route enables the bulk production of RGOL sheets. Microscopic and spectroscopic analyses confirmed the formation of few-layer graphene sheets with an average thickness of ∼3 nm and the presence of some oxygen functional groups, with a C/O ratio of 8.74. A possible mechanistic pathway for the formation of RGOL sheets is proposed. The optical properties of the RGOL sample were studied in detail by means of spectroscopic ellipsometry (SE). The experimental abilities of SE in relating the optical properties to the number of oxygen functionalities present in the samples are explored. The data were analysed by a double-layered optical model along with the Drude-Lorentz oscillatory dispersion relation. The refractive index (n = 2.24), extinction coefficient (k = 2.03), and dielectric functions are obtained using point-by-point analysis and are also checked for Kramers-Kronig (KK) consistency.
Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei
2016-03-01
We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, called nuisance parameters. We use the extended likelihood function to make point and interval estimates of parameters in basically the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study of a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained using our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
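The profile-likelihood elimination of a common Type B nuisance parameter can be sketched on assumed toy data: a straight line with per-point (Type A) noise plus one shared offset theta whose incomplete knowledge is described by a Gaussian PDF. The closed-form profile over theta and the crude grid search are illustrative choices, not the authors' implementation:

```python
import numpy as np

# Toy data: y = a + b*x with Type A noise, plus a common offset theta
# (the Type B nuisance parameter) with an assumed Gaussian PDF.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)

sigma_A = 0.1      # statistical standard uncertainty of each point
sigma_B = 0.5      # assumed standard uncertainty of the common correction

def neg_log_likelihood(a, b, theta):
    # extended likelihood = original likelihood * Gaussian PDF of theta
    chi2 = np.sum((y - theta - a - b * x) ** 2) / sigma_A ** 2
    return 0.5 * (chi2 + theta ** 2 / sigma_B ** 2)

def profile(a, b):
    # eliminate the nuisance parameter: minimize over theta in closed form
    r = y - a - b * x
    theta_hat = (np.sum(r) / sigma_A ** 2) / (x.size / sigma_A ** 2 + 1.0 / sigma_B ** 2)
    return neg_log_likelihood(a, b, theta_hat)

# Crude grid search on the profile likelihood over the parameters of interest
a_grid = np.linspace(0.0, 2.0, 81)
b_grid = np.linspace(1.5, 2.5, 81)
vals = np.array([[profile(a, b) for b in b_grid] for a in a_grid])
ia, ib = np.unravel_index(np.argmin(vals), vals.shape)
a_hat, b_hat = a_grid[ia], b_grid[ib]
```

By construction the profile value at any (a, b) never exceeds the extended negative log-likelihood at any fixed theta, so theta never appears in the final point estimates.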
Beaty, Roger E.; Benedek, Mathias; Wilkins, Robin W.; Jauk, Emanuel; Fink, Andreas; Silvia, Paul J.; Hodges, Donald A.; Koschutnig, Karl; Neubauer, Aljoscha C.
2014-01-01
The present research used resting-state functional magnetic resonance imaging (fMRI) to examine whether the ability to generate creative ideas corresponds to differences in the intrinsic organization of functional networks in the brain. We examined the functional connectivity between regions commonly implicated in neuroimaging studies of divergent thinking, including the inferior prefrontal cortex and the core hubs of the default network. Participants were prescreened on a battery of divergent thinking tests and assigned to high- and low-creative groups based on task performance. Seed-based functional connectivity analysis revealed greater connectivity between the left inferior frontal gyrus (IFG) and the entire default mode network in the high-creative group. The right IFG also showed greater functional connectivity with bilateral inferior parietal cortex and the left dorsolateral prefrontal cortex in the high-creative group. The results suggest that the ability to generate creative ideas is characterized by increased functional connectivity between the inferior prefrontal cortex and the default network, pointing to a greater cooperation between brain regions associated with cognitive control and low-level imaginative processes. PMID:25245940
PredictProtein—an open resource for online prediction of protein structural and functional features
Yachdav, Guy; Kloppmann, Edda; Kajan, Laszlo; Hecht, Maximilian; Goldberg, Tatyana; Hamp, Tobias; Hönigschmid, Peter; Schafferhans, Andrea; Roos, Manfred; Bernhofer, Michael; Richter, Lothar; Ashkenazy, Haim; Punta, Marco; Schlessinger, Avner; Bromberg, Yana; Schneider, Reinhard; Vriend, Gerrit; Sander, Chris; Ben-Tal, Nir; Rost, Burkhard
2014-01-01
PredictProtein is a meta-service for sequence analysis that has been predicting structural and functional features of proteins since 1992. Queried with a protein sequence it returns: multiple sequence alignments, predicted aspects of structure (secondary structure, solvent accessibility, transmembrane helices (TMSEG) and strands, coiled-coil regions, disulfide bonds and disordered regions) and function. The service incorporates analysis methods for the identification of functional regions (ConSurf), homology-based inference of Gene Ontology terms (metastudent), comprehensive subcellular localization prediction (LocTree3), protein–protein binding sites (ISIS2), protein–polynucleotide binding sites (SomeNA) and predictions of the effect of point mutations (non-synonymous SNPs) on protein function (SNAP2). Our goal has always been to develop a system optimized to meet the demands of experimentalists not highly experienced in bioinformatics. To this end, the PredictProtein results are presented as both text and a series of intuitive, interactive and visually appealing figures. The web server and sources are available at http://ppopen.rostlab.org. PMID:24799431
1987-10-01
19 treated in interaction with each other and the hardware and software design. The authors point out some of the inadequacies in HP technologies and...life cycle costs recognition performance on secondary tasks effort/efficiency number of wins ( gaming tasks) number of instructors needed amount of...student interacts with this material in real time via a terminal and display system. The computer performs many functions, such as diagnose student
2017-09-01
analysis. Along the way we learned that the Chinese Medicine information will be difficult to integrate into the main data set. At this point only a poster... psychological health, social context, physical function. We completed and submitted for review a simpler analysis of this relationship in The Importance and...the unfolding integration of complementary medical practices into regular medicine. It will assist acupuncturists and VA officials in determining if
Unsteady three-dimensional thermal field prediction in turbine blades using nonlinear BEM
NASA Technical Reports Server (NTRS)
Martin, Thomas J.; Dulikravich, George S.
1993-01-01
A time-and-space accurate and computationally efficient fully three dimensional unsteady temperature field analysis computer code has been developed for truly arbitrary configurations. It uses boundary element method (BEM) formulation based on an unsteady Green's function approach, multi-point Gaussian quadrature spatial integration on each panel, and a highly clustered time-step integration. The code accepts either temperatures or heat fluxes as boundary conditions that can vary in time on a point-by-point basis. Comparisons of the BEM numerical results and known analytical unsteady results for simple shapes demonstrate very high accuracy and reliability of the algorithm. An example of computed three dimensional temperature and heat flux fields in a realistically shaped internally cooled turbine blade is also discussed.
NASA Technical Reports Server (NTRS)
Wong, K. W.
1974-01-01
In lunar phototriangulation, there is a complete lack of accurate ground control points. The accuracy analysis of the results of lunar phototriangulation must, therefore, be completely dependent on statistical procedures. It was the objective of this investigation to examine the validity of the commonly used statistical procedures, and to develop both mathematical techniques and computer software for evaluating (1) the accuracy of lunar phototriangulation; (2) the contribution of the different types of photo support data to the accuracy of lunar phototriangulation; (3) the accuracy of absolute orientation as a function of the accuracy and distribution of both the ground and model points; and (4) the relative slope accuracy between any triangulated pass points.
Noise-induced extinction for a ratio-dependent predator-prey model with strong Allee effect in prey
NASA Astrophysics Data System (ADS)
Mandal, Partha Sarathi
2018-04-01
In this paper, we study a stochastically forced ratio-dependent predator-prey model with strong Allee effect in the prey population. In the deterministic case, we show that the model exhibits a stable interior equilibrium point or a limit cycle corresponding to the co-existence of both species. We investigate a probabilistic mechanism of the noise-induced extinction in a zone of the stable interior equilibrium point. Computational methods based on the stochastic sensitivity function technique are applied for the analysis of the dispersion of random states near the stable interior equilibrium point. This method allows us to construct a confidence domain and to estimate the threshold value of the noise intensity for a transition from coexistence to extinction.
Stroke Self-efficacy Questionnaire: a Rasch-refined measure of confidence post stroke.
Riazi, Afsane; Aspden, Trefor; Jones, Fiona
2014-05-01
Measuring self-efficacy during rehabilitation provides an important insight into understanding recovery post stroke. A Rasch analysis of the Stroke Self-efficacy Questionnaire (SSEQ) was undertaken to establish its use as a clinically meaningful and scientifically rigorous measure. One hundred and eighteen stroke patients completed the SSEQ with the help of an interviewer. Participants were recruited from local acute stroke units and community stroke rehabilitation teams. Data were analysed with confirmatory factor analysis conducted using AMOS and Rasch analysis conducted using RUMM2030 software. Confirmatory factor analysis and Rasch analyses demonstrated the presence of two separate scales that measure stroke survivors' self-efficacy with: i) self-management and ii) functional activities. Guided by Rasch analyses, the response categories of these two scales were collapsed from an 11-point to a 4-point scale. Modified scales met the expectations of the Rasch model. Items satisfied the Rasch requirements (overall and individual item fit, local response independence, differential item functioning, unidimensionality). Furthermore, the two subscales showed evidence of good construct validity. The new SSEQ has good psychometric properties and is a clinically useful assessment of self-efficacy after stroke. The scale measures stroke survivors' self-efficacy with self-management and activities as two unidimensional constructs. It is recommended for use in clinical and research interventions, and in evaluating stroke self-management interventions.
Sexual Function and Pessary Management among Women Using a Pessary for Pelvic Floor Disorders.
Meriwether, Kate V; Komesu, Yuko M; Craig, Ellen; Qualls, Clifford; Davis, Herbert; Rogers, Rebecca G
2015-12-01
Pessaries are commonly used to treat pelvic floor disorders, but little is known about the sexual function of pessary users. We aimed to describe sexual function among pessary users and pessary management with regard to sexual activity. This is a secondary analysis of a randomized trial of new pessary users, where study patients completed validated questionnaires on sexual function and body image at pessary fitting and 3 months later. Women completed the Pelvic Organ Prolapse-Urinary Incontinence Sexual Function Questionnaire, International Urogynecological Association Revised (PISQ-IR), a validated measure that evaluates the impact of pelvic floor disorders on sexual function, a modified female body image scale (mBIS), and questions regarding pessary management surrounding sexual activity. Of 127 women, 54% (68/127) were sexually active at baseline and 42% (64/114) were sexually active at 3 months. Sexual function scores were not different between baseline and 3 months on all domains except for a drop of 0.15 points (P = 0.04) for sexually active women, and a drop of 0.34 points for non-sexually active women (P = 0.02) in the score related to the sexual partner. Total mBIS score did not change (P = 0.07), but scores improved by 0.2 points (P = 0.03) in the question related to self-consciousness. Pessary satisfaction was associated with improved sexual function scores in multiple domains and improved mBIS scores. The majority (45/64, 70%) of sexually active women removed their pessary for sex, with over half stating their partner preferred removal for sex (24/45, 53%). Many women remove their pessary during sex for partner considerations, and increased partner concerns are the only change seen in sexual function in the first 3 months of pessary use. Pessary use may improve self-consciousness and pessary satisfaction is associated with improvements in sexual function and body image. © 2015 International Society for Sexual Medicine.
García-Peña, Carmen; García-Fabela, Luis C.; Gutiérrez-Robledo, Luis M.; García-González, Jose J.; Arango-Lopera, Victoria E.; Pérez-Zepeda, Mario U.
2013-01-01
Functional decline after hospitalization is a common adverse outcome in the elderly. An easy-to-use, reproducible and accurate tool to identify those at risk would aid in focusing interventions on those at higher risk. Handgrip strength has been shown to predict adverse outcomes in other settings. The aim of this study was to determine whether handgrip strength measured upon admission to an acute care facility would predict functional decline (either incident or worsening of preexisting decline) at discharge among older Mexican adults, stratified by gender. In addition, cutoff points as a function of specificity were determined. A cohort study was conducted in two hospitals in Mexico City. The primary endpoint was functional decline on discharge, defined as a 30-point reduction in the Barthel Index score from the baseline score. Handgrip strength was measured at initial assessment along with other variables, including instrumental activities of daily living, cognition, depressive symptoms, delirium, length of hospitalization and quality of life. All analyses were stratified by gender. Logistic regression was performed to test the independent association between handgrip strength and functional decline, along with estimation of handgrip strength test values (specificity, sensitivity, area under the curve, etc.). A total of 223 patients admitted to an acute care facility between 2007 and 2009 were recruited. Fifty-five patients (24.7%) had functional decline, 23.46% of men and 25.6% of women. Multivariate analysis showed that only males with low handgrip strength had an increased risk of functional decline at discharge (OR 0.88, 95% CI 0.79–0.98, p = 0.01), with a specificity of 91.3% and a handgrip strength cutoff point of 20.65 kg. In females, no significant association between handgrip strength and functional decline was found.
Measurement of handgrip strength on admission to acute care facilities may identify elderly male patients at risk of functional decline, allowing timely intervention. PMID:23936113
Kamairudin, Norsuhaili; Gani, Siti Salwa Abd; Masoumi, Hamid Reza Fard; Hashim, Puziah
2014-10-16
The D-optimal mixture experimental design was employed to optimize the melting point of natural lipstick based on pitaya (Hylocereus polyrhizus) seed oil. The influence of the main lipstick components-pitaya seed oil (10%-25% w/w), virgin coconut oil (25%-45% w/w), beeswax (5%-25% w/w), candelilla wax (1%-5% w/w) and carnauba wax (1%-5% w/w)-was investigated with respect to the melting point properties of the lipstick formulation. The D-optimal mixture experimental design was applied to optimize the properties of the lipstick by focusing on the melting point with respect to the above influencing components. The D-optimal mixture design analysis showed that the variation in the response (melting point) could be depicted as a quadratic function of the main components of the lipstick. The best combination of the significant factors determined by the D-optimal mixture design was established to be pitaya seed oil (25% w/w), virgin coconut oil (37% w/w), beeswax (17% w/w), candelilla wax (2% w/w) and carnauba wax (2% w/w). With these factors, a melting point of 46.0 °C was observed experimentally, close to the theoretical prediction of 46.5 °C. Carnauba wax is the most influential factor on this response (melting point), owing to its role in heat endurance. The quadratic polynomial model adequately fitted the experimental data.
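A quadratic mixture model of the kind fitted above is usually written in Scheffé form (no intercept, since the proportions sum to one). As a sketch on invented three-component data (the proportions and responses below are hypothetical, not the study's measurements):

```python
import numpy as np

# Hypothetical three-component mixture data (proportions sum to 1) with an
# illustrative response standing in for the melting point.
X = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.5],
])
y = np.array([40.0, 50.0, 60.0, 47.0, 52.0, 56.0])

def scheffe_quadratic(X):
    # Scheffe form: sum_i b_i x_i + sum_{i<j} b_ij x_i x_j
    # (no intercept term, because the proportions sum to one)
    cols = [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i + 1, 3)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(scheffe_quadratic(X), y, rcond=None)

def predict(x):
    return (scheffe_quadratic(np.atleast_2d(np.asarray(x, dtype=float))) @ coef)[0]
```

With six design points and six Scheffé coefficients the fit interpolates the data exactly, and the fitted surface can then be searched for an optimum over the constrained simplex.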
Higgs boson self-coupling from two-loop analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alhendi, H. A.; National Center for Mathematics and Physics, KACST P. O. Box 6086, Riyadh 11442; Barakat, T.
2010-09-01
The scale invariance of the effective potential of the standard model at two loops is used as a boundary condition under the assumption that the two-loop effective potential approximates the full effective potential. This condition, with the help of the renormalization-group functions of the model at two loops, leads to an algebraic equation for the scalar self-coupling with coefficients that depend on the gauge and top quark couplings. It admits only two real positive solutions. One of them, in the absence of the gauge and top quark couplings, corresponds to the nonperturbative ultraviolet fixed point of the scalar renormalization-group function and the other corresponds to the perturbative infrared fixed point. The dependence of the scalar coupling on the top quark and strong couplings at two-loop radiative corrections is analyzed.
Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.
2014-01-01
Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indices are employed for solving third- and fifth-order two-point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358
Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology
NASA Astrophysics Data System (ADS)
Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya
A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and the signals of the virtual sources, by convolving the controlled acoustic transfer functions with each signal, the spatial sound is constructed at the selected point. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposition algorithm as well as the virtual source representation is confirmed.
NASA Astrophysics Data System (ADS)
Maznev, A. A.
2018-03-01
The avoided crossing behavior in the interaction of propagating sound or light waves with resonant inclusions is analyzed using a simple model of an acoustic medium containing damped mass-spring oscillators, which is shown to be equivalent to the Lorentz oscillator model in the elementary dispersion theory in optics. Two classes of experimental situations dictating the choice in the analysis of the dispersion relation are identified. If the wavevector is regarded as the independent variable and frequency as a complex function of the wavevector, then the avoided crossing bifurcates at an exceptional point at a certain value of the parameter γβ-1/2 , where γ and β characterize the oscillator damping and interaction strength, respectively. This behavior is not observed if the wavevector is regarded as a complex function of frequency.
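The coalescence at an exceptional point can be illustrated with a minimal two-mode sketch (an assumption for illustration, not the paper's acoustic medium): a propagating wave coupled with strength kappa to a resonator with damping gamma, described by a 2x2 non-Hermitian matrix whose eigenfrequencies are w0 - i*gamma/2 +/- sqrt(kappa**2 - gamma**2/4) and coalesce when kappa equals gamma/2:

```python
import numpy as np

def eigenfrequencies(w0, kappa, gamma):
    # Complex eigenfrequencies of two coupled modes, one of them damped.
    H = np.array([[w0 - 1j * gamma, kappa],
                  [kappa, w0]], dtype=complex)
    return np.linalg.eigvals(H)

w0 = 1.0
gamma = 0.2

strong = eigenfrequencies(w0, kappa=0.3, gamma=gamma)    # kappa > gamma/2
weak = eigenfrequencies(w0, kappa=0.05, gamma=gamma)     # kappa < gamma/2

# Above the exceptional point the two real parts split (avoided crossing);
# below it the real parts merge and the decay rates split instead.
split_strong = abs(strong[0].real - strong[1].real)
split_weak = abs(weak[0].real - weak[1].real)
```

Sweeping kappa through gamma/2 shows the bifurcation described in the abstract: the branching quantity switches from the real frequencies to the damping rates.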
A class of reduced-order models in the theory of waves and stability.
Chapman, C J; Sorokin, S V
2016-02-01
This paper presents a class of approximations to a type of wave field for which the dispersion relation is transcendental. The approximations have two defining characteristics: (i) they give the field shape exactly when the frequency and wavenumber lie on a grid of points in the (frequency, wavenumber) plane and (ii) the approximate dispersion relations are polynomials that pass exactly through points on this grid. Thus, the method is interpolatory in nature, but the interpolation takes place in (frequency, wavenumber) space, rather than in physical space. Full details are presented for a non-trivial example, that of antisymmetric elastic waves in a layer. The method is related to partial fraction expansions and barycentric representations of functions. An asymptotic analysis is presented, involving Stirling's approximation to the psi function, and a logarithmic correction to the polynomial dispersion relation.
Logical errors on proving theorem
NASA Astrophysics Data System (ADS)
Sari, C. K.; Waluyo, M.; Ainur, C. M.; Darmaningsih, E. N.
2018-01-01
At the tertiary level, students of mathematics education departments attend some abstract courses, such as Introduction to Real Analysis, which require the ability to prove mathematical statements almost all the time. In fact, many students have not mastered this ability appropriately. In their Introduction to Real Analysis tests, even though they completed their proofs of theorems, they achieved unsatisfactory scores. They thought that they had succeeded, but their proofs were not valid. In this study, qualitative research was conducted to describe the logical errors that students made in proving the theorem on cluster points. The theorem was given to 54 students. Misconceptions about the definitions of cluster point, limit of a function, and limit of a sequence seem to occur. The habit of using routine symbols might cause these misconceptions. Suggestions to deal with this condition are described as well.
Optimal solar sail planetocentric trajectories
NASA Technical Reports Server (NTRS)
Sackett, L. L.
1977-01-01
The analysis of solar sail planetocentric optimal trajectory problem is described. A computer program was produced to calculate optimal trajectories for a limited performance analysis. A square sail model is included and some consideration is given to a heliogyro sail model. Orbit to a subescape point and orbit to orbit transfer are considered. Trajectories about the four inner planets can be calculated and shadowing, oblateness, and solar motion may be included. Equinoctial orbital elements are used to avoid the classical singularities, and the method of averaging is applied to increase computational speed. Solution of the two-point boundary value problem which arises from the application of optimization theory is accomplished with a Newton procedure. Time optimal trajectories are emphasized, but a penalty function has been considered to prevent trajectories which intersect a planet's surface.
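The Newton procedure for a two-point boundary value problem can be sketched on a toy problem (a hedged illustration, far simpler than the optimal-trajectory equations above): shoot on the unknown initial slope of y'' = -sin(y) with y(0) = 0, y(1) = 1, integrating with RK4 and driving the terminal residual to zero with a secant (derivative-free Newton-type) iteration:

```python
import math

def integrate(s, n=200):
    # Classical RK4 on the first-order system (y, v), v = y', from t=0 to t=1
    h = 1.0 / n
    y, v = 0.0, s

    def f(y, v):
        return v, -math.sin(y)

    for _ in range(n):
        k1y, k1v = f(y, v)
        k2y, k2v = f(y + 0.5 * h * k1y, v + 0.5 * h * k1v)
        k3y, k3v = f(y + 0.5 * h * k2y, v + 0.5 * h * k2v)
        k4y, k4v = f(y + h * k3y, v + h * k3v)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y

def residual(s):
    # mismatch of the terminal boundary condition y(1) = 1
    return integrate(s) - 1.0

# Secant iteration on the unknown initial slope s = y'(0)
s0, s1 = 0.5, 1.5
for _ in range(20):
    r0, r1 = residual(s0), residual(s1)
    if abs(r1) < 1e-10 or r1 == r0:
        break
    s0, s1 = s1, s1 - r1 * (s1 - s0) / (r1 - r0)
slope = s1
```

The same shooting structure underlies trajectory optimization, where the unknowns are the initial costates rather than a single slope.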
On D-brane -anti D-brane effective actions and their all order bulk singularity structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatefi, Ehsan; Institute for Theoretical Physics, TU Wien,Wiedner Hauptstrasse 8-10/136, A-1040 Vienna
All four-point functions of the brane-anti-brane system, including their correct all-order α′ corrections, have been addressed. All five-point functions of one closed string Ramond-Ramond (RR) field, two real tachyons and either one gauge field or one scalar field in both symmetric and asymmetric pictures have also been explored. The entire analysis is carried out. Not only does it fix the vertex operator of the RR field in the asymmetric picture and in higher-point functions of string theory amplitudes, but it also confirms that there is no issue of picture dependence of the mixed closed-string RR, gauge, tachyon and fermion fields in either the symmetric or the asymmetric picture. We compute the S-matrix in the presence of a transverse scalar field and two real tachyons, which reveals two different kinds of bulk singularity structures, involving an infinite number of u-channel gauge field and (u+s′+t′)-channel scalar bulk poles. In order to produce all those bulk singularity structures, we define various couplings at the level of the effective field theory that involve the mixing term of the Chern-Simons coupling (with the C-potential field) and a covariant derivative of the scalar field that comes from the pull-back of the brane. Eventually we explore their all-order α′ corrections in the presence of the brane-anti-brane system, where various remarks will also be pointed out.
A point-value enhanced finite volume method based on approximate delta functions
NASA Astrophysics Data System (ADS)
Xuan, Li-Jun; Majdalani, Joseph
2018-02-01
We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements reduces the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
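The defining integral property of an ADF can be made concrete with a small sketch (illustrative construction, not the paper's scheme): a degree-n polynomial p on [-1, 1] chosen so that it reproduces the sifting property of the Dirac delta at a point x0 against every polynomial integrand of degree at most n:

```python
import numpy as np

def adf_coefficients(x0, n):
    # Find p(x) = sum_j a_j x**j on [-1, 1] with
    #   integral p(x) x**k dx = x0**k  for k = 0..n,
    # i.e. p integrates like delta(x - x0) against degree-n polynomials.
    # Moment of x**m over [-1, 1] is 2/(m+1) for even m, 0 for odd m.
    M = np.array([[(1 - (-1) ** (j + k + 1)) / (j + k + 1)
                   for j in range(n + 1)] for k in range(n + 1)], dtype=float)
    b = np.array([x0 ** k for k in range(n + 1)], dtype=float)
    return np.linalg.solve(M, b)          # monomial coefficients of p

x0, n = 0.3, 4
a = adf_coefficients(x0, n)

def integrate_p_times(q_coeffs):
    # Exact integral of p(x) * q(x) over [-1, 1] for a polynomial q
    prod = np.polynomial.polynomial.polymul(a, q_coeffs)
    return sum(c * (1 - (-1) ** (j + 1)) / (j + 1) for j, c in enumerate(prod))

# p integrates q(x) = 1 - 2x + x**3 to q(x0), just as the delta function would
q = [1.0, -2.0, 0.0, 1.0]
val = integrate_p_times(q)                # equals q(0.3) = 0.427 up to rounding
```

The moment matrix is the Gram matrix of the monomials on [-1, 1], so the system is always solvable and the sifting property holds exactly for any integrand of degree up to n.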
Schoene, Daniel; Wu, Sandy M-S; Mikolaizak, A Stefanie; Menant, Jasmine C; Smith, Stuart T; Delbaere, Kim; Lord, Stephen R
2013-02-01
To investigate the discriminative ability and diagnostic accuracy of the Timed Up and Go Test (TUG) as a clinical screening instrument for identifying older people at risk of falling. Systematic literature review and meta-analysis. People aged 60 and older living independently or in institutional settings. Studies were identified with searches of the PubMed, EMBASE, CINAHL, and Cochrane CENTRAL databases. Retrospective and prospective cohort studies comparing times to complete any version of the TUG of fallers and non-fallers were included. Fifty-three studies with 12,832 participants met the inclusion criteria. The pooled mean difference between fallers and non-fallers depended on the functional status of the cohort investigated: 0.63 seconds (95% confidence interval (CI) = 0.14-1.12 seconds) for high-functioning cohorts to 3.59 seconds (95% CI = 2.18-4.99 seconds) for those in institutional settings. The majority of studies did not retain TUG scores in multivariate analysis. Derived cut-points varied greatly between studies, and with the exception of a few small studies, diagnostic accuracy was poor to moderate. The findings suggest that the TUG is not useful for discriminating fallers from non-fallers in healthy, high-functioning older people but is of more value in less-healthy, lower-functioning older people. Overall, the predictive ability and diagnostic accuracy of the TUG are at best moderate. No cut-point can be recommended. Quick, multifactorial fall risk screens should be considered to provide additional information for identifying older people at risk of falls. © 2013, Copyright the Authors Journal compilation © 2013, The American Geriatrics Society.
Barkin, Jennifer L; Wisner, Katherine L; Bromberger, Joyce T; Beach, Scott R; Wisniewski, Stephen R
2016-07-01
Functional assessment may represent a valuable addition to postpartum depression screening, providing a more thorough characterization of the mother's health and quality of life. To the authors' knowledge, this analysis represents the first examination of postpartum maternal functioning, as measured by a patient-centered validated tool aimed at ascertainment of functional status explicitly, and its clinical and sociodemographic correlates. A total of 189 women recruited from a large, urban women's hospital in the northeastern United States who both (1) screened positive for depression between 4 and 6 weeks postpartum and (2) completed a subsequent home (baseline) visit between October 1, 2008, and September 4, 2009, were included in this analysis. Multiple linear regression was conducted to ascertain which clinical and sociodemographic variables were independently associated with maternal functioning. The multivariate analysis revealed independent associations between bipolar status, atypical depression, depression score (17-item Hamilton Rating Scale for Depression), and insurance type with postpartum maternal functioning. The beta coefficient for bipolar status indicates that on average we would expect those with bipolar disorder to have maternal functioning scores that are 5.6 points less than those without bipolar disorder. Healthcare providers treating postpartum women with complicating mental health conditions should be cognizant of the potential ramifications on maternal functioning. Impaired functioning in the maternal role is likely to impact child development, although the precise nature of this relationship is yet to be elucidated.
NASA Astrophysics Data System (ADS)
Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Jürg; Slob, Evert; Thorbecke, Jan; Snieder, Roel
2011-06-01
Seismic interferometry, also known as Green's function retrieval by crosscorrelation, has a wide range of applications, ranging from surface-wave tomography using ambient noise, to creating virtual sources for improved reflection seismology. Despite its successful applications, the crosscorrelation approach also has its limitations. The main underlying assumptions are that the medium is lossless and that the wavefield is equipartitioned. These assumptions are in practice often violated: the medium of interest is often illuminated from one side only, the sources may be irregularly distributed, and losses may be significant. These limitations may partly be overcome by reformulating seismic interferometry as a multidimensional deconvolution (MDD) process. We present a systematic analysis of seismic interferometry by crosscorrelation and by MDD. We show that for the non-ideal situations mentioned above, the correlation function is proportional to a Green's function with a blurred source. The source blurring is quantified by a so-called interferometric point-spread function which, like the correlation function, can be derived from the observed data (i.e. without the need to know the sources and the medium). The source of the Green's function obtained by the correlation method can be deblurred by deconvolving the correlation function for the point-spread function. This is the essence of seismic interferometry by MDD. We illustrate the crosscorrelation and MDD methods for controlled-source and passive-data applications with numerical examples and discuss the advantages and limitations of both methods.
Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations
NASA Astrophysics Data System (ADS)
Wyszkowska, Patrycja
2017-12-01
The determination of the accuracy of functions of measured or adjusted values may be a problem in geodetic computations. The general law of covariance propagation or, in the case of uncorrelated observations, the law of variance propagation (the Gaussian formula) is commonly used for that purpose. That approach is theoretically justified for linear functions. For non-linear functions, the first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of the study is to determine the applicability of the general law of variance propagation to the non-linear functions used in basic geodetic computations. The paper presents the errors that result from neglecting the higher-order terms and determines the range of validity of such a simplification. The basis of that analysis is a comparison of the results obtained by the law of propagation of variance and by a probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances computed from Cartesian coordinates, and height differences in trigonometric and geometric levelling. These simulations and the analysis of the results confirm that the general law of variance propagation can be applied in basic geodetic computations even if the functions are non-linear, provided that the accuracy of the observations is not too low. Generally, this is not a problem with present geodetic instruments.
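The comparison the abstract describes, first-order (Gaussian) variance propagation against Monte Carlo simulation for a non-linear function, can be sketched as follows (illustrative Python, not the authors' code; the function, coordinates, and standard deviations below are hypothetical):

```python
import math
import random

def distance(x, y):
    """Non-linear function of two measured coordinates: horizontal distance."""
    return math.hypot(x, y)

def variance_propagation(x, y, sx, sy):
    """First-order (Gaussian) propagation for d = sqrt(x^2 + y^2).
    Partial derivatives: dd/dx = x/d, dd/dy = y/d."""
    d = math.hypot(x, y)
    var = (x / d) ** 2 * sx ** 2 + (y / d) ** 2 * sy ** 2
    return math.sqrt(var)

def monte_carlo(x, y, sx, sy, n=200_000, seed=1):
    """Propagate uncertainty by simulating normally distributed observations."""
    rng = random.Random(seed)
    samples = [distance(rng.gauss(x, sx), rng.gauss(y, sy)) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return math.sqrt(var)

# Hypothetical coordinates (metres) with 1 cm standard deviations
sd_linear = variance_propagation(300.0, 400.0, 0.01, 0.01)
sd_mc = monte_carlo(300.0, 400.0, 0.01, 0.01, n=50_000)
```

For this mildly non-linear function the two estimates agree closely, which mirrors the paper's conclusion; the discrepancy grows only when the observation accuracy is very low.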
De la Garza-Ramos, Rafael; Nakhla, Jonathan; Gelfand, Yaroslav; Echt, Murray; Scoco, Aleka N; Kinon, Merritt D; Yassari, Reza
2018-03-01
To identify predictive factors for critical care unit-level complications (CCU complications) after long-segment fusion procedures for adult spinal deformity (ASD). The American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database [2010-2014] was reviewed. Only adult patients who underwent fusion of 7 or more spinal levels for ASD were included. CCU complications included intraoperative arrest/infarction, ventilation >48 hours, pulmonary embolism, renal failure requiring dialysis, cardiac arrest, myocardial infarction, unplanned intubation, septic shock, stroke, coma, or new neurological deficit. A stepwise multivariate regression was used to identify independent predictors of CCU complications. Among 826 patients, the rate of CCU complications was 6.4%. On multivariate regression analysis, dependent functional status (P=0.004), combined approach (P=0.023), age (P=0.044), diabetes (P=0.048), and surgery lasting over 8 hours (P=0.080) were associated with complication development. A simple scoring system was developed to predict complications: 0 points for patients aged <50, 1 point for patients between 50 and 70, 2 points for patients 70 or over, 1 point for diabetes, 2 points for dependent functional status, 1 point for a combined approach, and 1 point for surgery lasting over 8 hours. The rate of CCU complications was 0.7%, 3.2%, 9.0%, and 12.6% for patients with 0, 1, 2, and 3+ points, respectively (P<0.001). The findings in this study suggest that older patients, patients with diabetes, patients who depend on others for activities of daily living, and patients who undergo combined approaches or surgery lasting over 8 hours may be at a significantly increased risk of developing a CCU-level complication after ASD surgery.
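The scoring rule stated in the abstract can be transcribed directly (illustrative Python; the thresholds and point weights come from the abstract, while the function and argument names are ours):

```python
def ccu_risk_score(age, diabetes, dependent_status, combined_approach, surgery_over_8h):
    """Point score from the abstract: age <50 -> 0 points, 50-70 -> 1, 70 or over -> 2;
    +1 for diabetes, +2 for dependent functional status, +1 for a combined approach,
    +1 for surgery lasting over 8 hours."""
    score = 0 if age < 50 else (1 if age < 70 else 2)
    score += 1 if diabetes else 0
    score += 2 if dependent_status else 0
    score += 1 if combined_approach else 0
    score += 1 if surgery_over_8h else 0
    return score

# Reported complication rates by score bracket (0, 1, 2, 3+ points)
rate_by_points = {0: 0.007, 1: 0.032, 2: 0.090, 3: 0.126}
```

For example, a 60-year-old diabetic patient with no other risk factors scores 2 points, placing them in the 9.0% bracket.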
[Statistical analysis using freely-available "EZR (Easy R)" software].
Kanda, Yoshinobu
2015-10-01
Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical functionality of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named the result "EZR (Easy R)", which is now being distributed on the following website: http://www.jichi.ac.jp/saitama-sct/. EZR provides point-and-click access to statistical functions that are frequently used in clinical studies, such as survival analyses, including competing risk analyses and the use of time-dependent covariates. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and assure that the statistical process is overseen by a supervisor.
Construction of multiple trade-offs to obtain arbitrary singularities of adaptive dynamics.
Kisdi, Éva
2015-04-01
Evolutionary singularities are central to the adaptive dynamics of evolving traits. The evolutionary singularities are strongly affected by the shape of any trade-off functions a model assumes, yet the trade-off functions are often chosen in an ad hoc manner, which may unjustifiably constrain the evolutionary dynamics exhibited by the model. To avoid this problem, critical function analysis has been used to find a trade-off function that yields a certain evolutionary singularity such as an evolutionary branching point. Here I extend this method to multiple trade-offs parameterized with a scalar strategy. I show that the trade-off functions can be chosen such that an arbitrary point in the viability domain of the trait space is a singularity of an arbitrary type, provided (in addition to certain non-degeneracy conditions) that the model has at least two environmental feedback variables and at least as many trade-offs as feedback variables. The proof is constructive, i.e., it provides an algorithm to find trade-off functions that yield the desired singularity. I illustrate the construction of trade-offs with an example where the virulence of a pathogen evolves in a small ecosystem of a host, its pathogen, a predator that attacks the host and an alternative prey of the predator.
Transient pressure analysis of fractured well in bi-zonal gas reservoirs
NASA Astrophysics Data System (ADS)
Zhao, Yu-Long; Zhang, Lie-Hui; Liu, Yong-hui; Hu, Shu-Yong; Liu, Qi-Guo
2015-05-01
For hydraulically fractured wells, evaluating the properties of the fracture and the formation is always a tough job, and conventional methods are very complex, especially for partially penetrating fractured wells. Although the source function is a very powerful tool for analyzing the transient pressure of complex-structure wells, corresponding reports on gas reservoirs are rare. In this paper, the continuous point source functions in anisotropic reservoirs are derived on the basis of source function theory, the Laplace transform method and the Duhamel principle. By applying the construction method, the continuous point source functions in a bi-zonal gas reservoir with closed upper and lower boundaries are obtained. Subsequently, physical models and transient pressure solutions are developed for fully and partially penetrating fractured vertical wells in this reservoir. Type curves of dimensionless pseudo-pressure and its derivative as functions of dimensionless time are plotted by a numerical inversion algorithm, and the flow periods and sensitive factors are analyzed. The source functions and fractured-well solutions have both theoretical and practical application in well test interpretation for such gas reservoirs, especially for wells with a stimulated reservoir volume created by massive hydraulic fracturing in unconventional gas reservoirs, which can always be described with the composite model.
Panizza, Elena; Branca, Rui M M; Oliviusson, Peter; Orre, Lukas M; Lehtiö, Janne
2017-07-03
Protein phosphorylation is involved in the regulation of most eukaryotic cell functions and mass spectrometry-based analysis has made major contributions to our understanding of this regulation. However, low abundance of phosphorylated species presents a major challenge in achieving comprehensive phosphoproteome coverage and robust quantification. In this study, we developed a workflow employing titanium dioxide phospho-enrichment coupled with isobaric labeling by Tandem Mass Tags (TMT) and high-resolution isoelectric focusing (HiRIEF) fractionation to perform in-depth quantitative phosphoproteomics starting with a low sample quantity. To benchmark the workflow, we analyzed HeLa cells upon pervanadate treatment or cell cycle arrest in mitosis. Analyzing 300 µg of peptides per sample, we identified 22,712 phosphorylation sites, of which 19,075 were localized with high confidence and 1,203 were phosphorylated tyrosine residues, representing 6.3% of all detected phospho-sites. HiRIEF fractions with the most acidic isoelectric points are enriched in multiply phosphorylated peptides, which represent 18% of all the phospho-peptides detected in the pH range 2.5-3.7. Cross-referencing with the PhosphoSitePlus database reveals 1,264 phosphorylation sites that have not been previously reported and kinase association analysis suggests that a subset of these may be functional during the mitotic phase.
Younis, Mustafa Z; Jabr, Samer; Smith, Pamela C; Al-Hajeri, Maha; Hartmann, Michael
2011-01-01
Academic research investigating health care costs in the Palestinian region is limited. Therefore, this study examines the costs of the cardiac catheterization unit of one of the largest hospitals in Palestine. We focus on the costs of the cardiac catheterization unit and the increasing number of deaths in the region over the past decade due to cardiovascular diseases (CVDs). We employ cost-volume-profit (CVP) analysis to determine the unit's break-even point (BEP), and investigate the expected benefits (EBs) of Palestinian government subsidies to the unit. Findings indicate variable costs represent 56 percent of the hospital's total costs. Based on the three functions of the cardiac catheterization unit, results also indicate that the number of patients receiving services exceeds the break-even point for each function, despite the unit receiving a government subsidy. Our findings, although based on one hospital, will permit hospital management to realize the importance of unit costs in order to make informed financial decisions. The use of break-even analysis will allow area managers to plan minimum production capacity for the organization. The economic benefits of the unit for patients and the government may encourage government officials to focus efforts on increasing future subsidies to the hospital.
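The break-even computation underlying CVP analysis is a one-line formula: fixed costs divided by the contribution margin per unit. A minimal sketch (illustrative Python; the cost and price figures below are hypothetical, not the hospital's data):

```python
def break_even_units(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Classic CVP break-even point in units:
    BEP = fixed costs / (price per unit - variable cost per unit)."""
    contribution_margin = price_per_unit - variable_cost_per_unit
    if contribution_margin <= 0:
        raise ValueError("price must exceed variable cost per unit")
    return fixed_costs / contribution_margin

# Hypothetical figures: $44,000 fixed costs, $1,000 fee per procedure,
# $560 variable cost per procedure (echoing a 56% variable cost share)
bep = break_even_units(44_000, 1_000, 560)   # procedures needed to break even
```

If the unit performs more procedures than `bep` in a period, it covers its costs; a government subsidy effectively lowers `fixed_costs` and hence the break-even volume.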
Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S
2016-06-01
We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates the nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients by looking at the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to the multi-subject functional neuroimaging data for brain connectivity analysis, where time series data-level measurements are nested within subjects.
Evans function computation for the stability of travelling waves
NASA Astrophysics Data System (ADS)
Barker, B.; Humpherys, J.; Lyng, G.; Lytle, J.
2018-04-01
In recent years, the Evans function has become an important tool for the determination of stability of travelling waves. This function, a Wronskian of decaying solutions of the eigenvalue equation, is useful both analytically and computationally for the spectral analysis of the linearized operator about the wave. In particular, Evans-function computation allows one to locate any unstable eigenvalues of the linear operator (if they exist); this allows one to establish spectral stability of a given wave and identify bifurcation points (loss of stability) as model parameters vary. In this paper, we review computational aspects of the Evans function and apply it to multidimensional detonation waves. This article is part of the theme issue `Stability of nonlinear waves and patterns and related topics'.
Changes in functional and structural brain connectome along the Alzheimer's disease continuum.
Filippi, Massimo; Basaia, Silvia; Canu, Elisa; Imperiale, Francesca; Magnani, Giuseppe; Falautano, Monica; Comi, Giancarlo; Falini, Andrea; Agosta, Federica
2018-05-09
The aim of this study was two-fold: (i) to investigate structural and functional brain network architecture in patients with Alzheimer's disease (AD) and amnestic mild cognitive impairment (aMCI), stratified in converters (c-aMCI) and non-converters (nc-aMCI) to AD; and (ii) to assess the relationship between healthy brain network functional connectivity and the topography of brain atrophy in patients along the AD continuum. Ninety-four AD patients, 47 aMCI patients (25 c-aMCI within 36 months) and 53 age- and sex-matched healthy controls were studied. Graph analysis and connectomics assessed global and local, structural and functional topological network properties and regional connectivity. Healthy topological features of brain regions were assessed based on their connectivity with the point of maximal atrophy (epicenter) in AD and aMCI patients. Brain network graph analysis properties were severely altered in AD patients. Structural brain network was already altered in c-aMCI patients relative to healthy controls in particular in the temporal and parietal brain regions, while functional connectivity did not change. Structural connectivity alterations distinguished c-aMCI from nc-aMCI cases. In both AD and c-aMCI, the point of maximal atrophy was located in left hippocampus (disease-epicenter). Brain regions most strongly connected with the disease-epicenter in the healthy functional connectome were also the most atrophic in both AD and c-aMCI patients. Progressive degeneration in the AD continuum is associated with an early breakdown of anatomical brain connections and follows the strongest connections with the disease-epicenter. These findings support the hypothesis that the topography of brain connectional architecture can modulate the spread of AD through the brain.
A placebo-controlled trial of itopride in functional dyspepsia.
Holtmann, Gerald; Talley, Nicholas J; Liebregts, Tobias; Adam, Birgit; Parow, Christopher
2006-02-23
The treatment of patients with functional dyspepsia remains unsatisfactory. We assessed the efficacy of itopride, a dopamine D2 antagonist with anti-acetylcholinesterase [corrected] effects, in patients with functional dyspepsia. Patients with functional dyspepsia were randomly assigned to receive either itopride (50, 100, or 200 mg three times daily) or placebo. After eight weeks of treatment, three primary efficacy end points were analyzed: the change from baseline in the severity of symptoms of functional dyspepsia (as assessed by the Leeds Dyspepsia Questionnaire), patients' global assessment of efficacy (the proportion of patients without symptoms or with marked improvement), and the severity of pain or fullness as rated on a five-grade scale. We randomly assigned 554 patients; 523 had outcome data and could be included in the analyses. After eight weeks, 41 percent of the patients receiving placebo were symptom-free or had marked improvement, as compared with 57 percent, 59 percent, and 64 percent receiving itopride at a dose of 50, 100, or 200 mg three times daily, respectively (P<0.05 for all comparisons between placebo and itopride). Although the symptom score improved significantly in all four groups, an overall analysis revealed that itopride was significantly superior to placebo, with the greatest symptom-score improvement in the 100- and 200-mg groups (-6.24 and -6.27, vs. -4.50 in the placebo group; P=0.05). Analysis of the combined end point of pain and fullness showed that itopride yielded a greater rate of response than placebo (73 percent vs. 63 percent, P=0.04). Itopride significantly improves symptoms in patients with functional dyspepsia. (ClinicalTrials.gov number, NCT00272103.). Copyright 2006 Massachusetts Medical Society.
Effects of pivoting neuromuscular training on pivoting control and proprioception.
Lee, Song Joo; Ren, Yupeng; Chang, Alison H; Geiger, François; Zhang, Li-Qun
2014-07-01
Pivoting neuromuscular control and proprioceptive acuity may play an important role in anterior cruciate ligament injuries. The goal of this study was to investigate whether pivoting off-axis intensity adjustable neuromuscular control training (POINT) could improve pivoting neuromuscular control, proprioceptive acuity, and functional performance. Among 41 subjects, 21 subjects participated in 18 sessions of POINT (three sessions per week for 6 wk), and 20 subjects served as controls who did their regular workout. Both groups received pre-, mid-, and postintervention evaluations. Propensity score analysis with multivariable regression adjustment was used to investigate the effect of training on pivoting neuromuscular control (pivoting instability, leg pivoting stiffness, maximum internal, and external pivoting angles), proprioceptive acuity, and functional performance in both groups. Compared with the control group, the training group significantly improved pivoting neuromuscular control as reduced pivoting instability, reduced maximum internal and external pivoting angles, increased leg pivoting stiffness, and decreased entropy of time to peak EMG in the gluteus maximus and lateral gastrocnemius under pivoting perturbations. Furthermore, the training group enhanced weight-bearing proprioceptive acuity and improved the single leg hop distance. Improvement of pivoting neuromuscular control in functional weight-bearing activities and task performances after POINT may help develop lower limb injury prevention and rehabilitation methods to reduce anterior cruciate ligament and other musculoskeletal injuries associated with pivoting sports.
Expression and prognostic relevance of PRAME in primary osteosarcoma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Pingxian; Zou, Changye; Yong, Bicheng
2012-03-23
Graphical abstract: High PRAME expression was associated with poor prognosis and lung metastasis in osteosarcoma patients. Highlights: (1) We analyzed and verified the role of PRAME in primary osteosarcoma. (2) High PRAME expression in osteosarcoma correlated with poor prognosis and lung metastasis. (3) PRAME siRNA knockdown significantly suppressed proliferation and colony formation and induced G1 cell cycle arrest in U-2OS cells. -- Abstract: The preferentially expressed antigen of melanoma (PRAME), a cancer-testis antigen with unknown function, is expressed in many human malignancies and is considered an attractive potential target for tumor immunotherapy. However, studies of its expression and function in osteosarcoma have rarely been reported. In this study, we found that PRAME is expressed in five osteosarcoma cell lines and in more than 70% of osteosarcoma patient specimens. In addition, an immunohistochemical analysis showed that high PRAME expression was associated with poor prognosis and lung metastasis. Furthermore, PRAME siRNA knockdown significantly suppressed proliferation and colony formation and induced G1 cell cycle arrest in U-2OS cells. Our results suggest that PRAME plays an important role in cell proliferation and disease progression in osteosarcoma. However, the detailed mechanisms of PRAME function in osteosarcoma require further investigation.
Effect of masking phase-only holograms on the quality of reconstructed images.
Deng, Yuanbo; Chu, Daping
2016-04-20
A phase-only hologram modulates the phase of the incident light and diffracts it efficiently with low energy loss because of the minimum absorption. Much research attention has been focused on how to generate phase-only holograms, and little work has been done to understand the effect and limitation of their partial implementation, possibly due to physical defects and constraints, in particular as in the practical situations where a phase-only hologram is confined or needs to be sliced or tiled. The present study simulates the effect of masking phase-only holograms on the quality of reconstructed images in three different scenarios with different filling factors, filling positions, and illumination intensity profiles. Quantitative analysis confirms that the width of the image point spread function becomes wider and the image quality decreases, as expected, when the filling factor decreases, and the image quality remains the same for different filling positions as well. The width of the image point spread function as derived from different filling factors shows a consistent behavior to that as measured directly from the reconstructed image, especially as the filling factor becomes small. Finally, mask profiles of different shapes and intensity distributions are shown to have more complicated effects on the image point spread function, which in turn affects the quality and textures of the reconstructed image.
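The filling-factor effect can be illustrated with a much-simplified one-dimensional model (hypothetical Python sketch using an amplitude mask on a uniform aperture, not the paper's phase-only simulation): shrinking the clear fraction of the aperture widens the point spread function.

```python
import numpy as np

def psf_width(n=1024, fill=1.0):
    """Rough width (samples at or above half maximum) of the point spread
    function of a 1-D aperture with the given filling factor.
    The PSF is the squared magnitude of the aperture's Fourier transform."""
    mask = np.zeros(n)
    m = int(n * fill)
    start = (n - m) // 2
    mask[start:start + m] = 1.0          # centred clear region, rest blocked
    psf = np.abs(np.fft.fft(mask)) ** 2  # far-field intensity pattern
    return int(np.count_nonzero(psf >= psf.max() / 2))

full = psf_width(fill=1.0)    # narrow PSF for the full aperture
quarter = psf_width(fill=0.25)  # wider PSF when only 25% is clear
```

The widening of the main lobe as `fill` decreases is the same qualitative behavior the paper reports for masked phase-only holograms, where it degrades image sharpness.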
New convergence results for the scaled gradient projection method
NASA Astrophysics Data System (ADS)
Bonettini, S.; Prato, M.
2015-09-01
The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al. in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation showed that SGP equipped with a suitable choice of the scaling matrix is a very effective tool for solving large scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak convergence theorem is provided establishing that any limit point of the sequence generated by SGP is stationary. Here, under the only assumption that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, if the scaling matrices sequence satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we are also able to prove the O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of numerical experiments on some relevant image restoration problems, showing that the proposed scaling matrix selection rule performs well also from the computational point of view.
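The basic SGP iteration, project the scaled negative-gradient step back onto the feasible set, can be sketched as follows (illustrative Python with NumPy; a fixed step length and identity scaling on a toy nonnegative least-squares problem, with none of the paper's line search or scaling-matrix selection rule):

```python
import numpy as np

def scaled_gradient_projection(grad, project, x0, scale, step, iters=500):
    """Bare-bones SGP iteration: x_{k+1} = P(x_k - step * D_k * grad f(x_k)),
    with D_k a diagonal scaling (supplied as a vector) and P the projection
    onto the feasible set.  No line search; fixed step for simplicity."""
    x = x0.astype(float)
    for _ in range(iters):
        d = scale(x)                          # diagonal of the scaling matrix D_k
        x = project(x - step * d * grad(x))
    return x

# Toy problem: minimise ||Ax - b||^2 subject to x >= 0
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.abs(rng.standard_normal(5))       # feasible problem: b in range of A
grad = lambda x: 2.0 * A.T @ (A @ x - b)     # gradient of the quadratic objective
project = lambda x: np.maximum(x, 0.0)       # projection onto nonnegative orthant
scale = lambda x: np.ones_like(x)            # identity scaling for this sketch
step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # 1/L, L = Lipschitz constant of grad
x_star = scaled_gradient_projection(grad, project, np.zeros(5), scale, step)
```

With the identity scaling this reduces to ordinary projected gradient descent; the paper's contribution concerns how a variable, well-chosen `scale` accelerates convergence while preserving the convergence guarantees.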
NASA Astrophysics Data System (ADS)
Mercaldo, M. T.; Rabuffo, I.; De Cesare, L.; Caramico D'Auria, A.
2016-04-01
In this work we study the quantum phase transition, the phase diagram and the quantum criticality induced by the easy-plane single-ion anisotropy in a d-dimensional quantum spin-1 XY model in absence of an external longitudinal magnetic field. We employ the two-time Green function method by avoiding the Anderson-Callen decoupling of spin operators at the same sites which is of doubtful accuracy. Following the original Devlin procedure we treat exactly the higher order single-site anisotropy Green functions and use Tyablikov-like decouplings for the exchange higher order ones. The related self-consistent equations appear suitable for an analysis of the thermodynamic properties at and around second order phase transition points. Remarkably, the equivalence between the microscopic spin model and the continuous O(2)-vector model with transverse-Ising model (TIM)-like dynamics, characterized by a dynamic critical exponent z=1, emerges at low temperatures close to the quantum critical point with the single-ion anisotropy parameter D as the non-thermal control parameter. The zero-temperature critical anisotropy parameter Dc is obtained for dimensionalities d > 1 as a function of the microscopic exchange coupling parameter and the related numerical data for different lattices are found to be in reasonable agreement with those obtained by means of alternative analytical and numerical methods. For d > 2, and in particular for d=3, we determine the finite-temperature critical line ending in the quantum critical point and the related TIM-like shift exponent, consistently with recent renormalization group predictions. The main crossover lines between different asymptotic regimes around the quantum critical point are also estimated providing a global phase diagram and a quantum criticality very similar to the conventional ones.
NASA Astrophysics Data System (ADS)
Bañados, Máximo; Düring, Gustavo; Faraggi, Alberto; Reyes, Ignacio A.
2017-08-01
We study the thermodynamic phase diagram of three-dimensional sl(N;R) higher spin black holes. By analyzing the semiclassical partition function we uncover a rich structure that includes Hawking-Page transitions to the AdS3 vacuum, first order phase transitions among black hole states, and a second order critical point. Our analysis is explicit for N=4 but we extrapolate some of our conclusions to arbitrary N. In particular, we argue that even N is stable in the ensemble under consideration but odd N is not.
High degree interpolation polynomial in Newton form
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1988-01-01
Polynomial interpolation is an essential subject in numerical analysis. Dealing with a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this result of convergence is true only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
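The convergence half of the abstract's contrast, interpolating at the zeros of the Chebyshev polynomial rather than at equally spaced points, can be reproduced with a short Newton-form sketch (illustrative Python, not the paper's algorithm, which additionally reorders the points and rescales the interval for numerical stability):

```python
import math

def chebyshev_nodes(n, a=-1.0, b=1.0):
    """Zeros of the degree-n Chebyshev polynomial, mapped to [a, b]."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos((2 * k + 1) * math.pi / (2 * n))
            for k in range(n)]

def divided_differences(xs, ys):
    """Newton divided-difference coefficients, computed in place."""
    c = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, c, x):
    """Evaluate the Newton-form polynomial by a Horner-like scheme."""
    p = c[-1]
    for i in range(len(c) - 2, -1, -1):
        p = p * (x - xs[i]) + c[i]
    return p

# Interpolate the analytic function exp(x) at 12 Chebyshev zeros on [-1, 1]
xs = chebyshev_nodes(12)
c = divided_differences(xs, [math.exp(x) for x in xs])
err = abs(newton_eval(xs, c, 0.3) - math.exp(0.3))   # interpolation error at x = 0.3
```

At Chebyshev zeros the error decays rapidly with the number of nodes; for functions like Runge's 1/(1+25x^2), the same code with equally spaced nodes exhibits the divergence the abstract mentions.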
A low cost LST pointing control system
NASA Technical Reports Server (NTRS)
Glaese, J. R.; Kennel, H. F.; Nurre, G. S.; Seltzer, S. M.; Shelton, H. L.
1975-01-01
Vigorous efforts to reduce costs, coupled with changes in LST guidelines, took place in the Fall of 1974. These events made a new design of the LST and its Pointing and Attitude Control System possible. The major design changes are summarized as: an annular Support Systems Module; removal of image motion compensation; reaction wheels instead of CMGs; a magnetic torquer system to also perform the emergency and backup functions, eliminating the previously required mass expulsion system. Preliminary analysis indicates the Low Cost LST concept can meet the newly defined requirements and results in a significantly reduced development cost.
Propeller performance analysis and multidisciplinary optimization using a genetic algorithm
NASA Astrophysics Data System (ADS)
Burger, Christoph
A propeller performance analysis program has been developed and integrated into a Genetic Algorithm for design optimization. The design tool will produce optimal propeller geometries for a given goal, which includes performance and/or acoustic signature. A vortex lattice model is used for the propeller performance analysis and a subsonic compact source model is used for the acoustic signature determination. Compressibility effects are taken into account with the implementation of Prandtl-Glauert domain stretching. Viscous effects are considered with a simple Reynolds number based model to account for the effects of viscosity in the spanwise direction. An empirical flow separation model developed from experimental lift and drag coefficient data of a NACA 0012 airfoil is included. The propeller geometry is generated using a recently introduced Class/Shape function methodology to allow for efficient use of a wide design space. Optimizing the angle of attack, the chord, the sweep, and the local airfoil sections produced blades with favorable tradeoffs between single and multiple point optimizations of propeller performance and acoustic noise signatures. Optimizations using a binary encoded IMPROVE(c) Genetic Algorithm (GA) and a real encoded GA were obtained after optimization runs with some premature convergence. The newly developed real encoded GA was used to obtain the majority of the results, as it produced generally better convergence characteristics than the binary encoded GA. The optimization trade-offs show that single point optimized propellers have favorable performance, but circulation distributions were less smooth when compared to dual point or multiobjective optimizations. Some of the single point optimizations generated propellers with proplets, which show a loading shift to the blade tip region.
When noise is included in the objective functions, some propellers show a circulation shift to the inboard sections of the propeller as well as a reduction in propeller diameter. In addition, the propeller number was increased in some optimizations to reduce the acoustic blade signature.
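The real-encoded GA used above can be sketched in miniature. This is a minimal illustration, not the IMPROVE(c) implementation: the vortex-lattice objective is replaced by a toy function, and the tournament size, blend crossover, and mutation scale are assumed parameters.

```python
import random

def real_coded_ga(objective, bounds, pop_size=40, generations=100,
                  crossover_rate=0.9, mutation_rate=0.1, seed=0):
    """Minimal real-coded GA: tournament selection, blend (arithmetic)
    crossover, Gaussian mutation, with the best individual tracked."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # tournament selection of two parents (tournament size 3)
            p1 = min(rng.sample(pop, 3), key=objective)
            p2 = min(rng.sample(pop, 3), key=objective)
            child = list(p1)
            if rng.random() < crossover_rate:   # blend crossover
                a = rng.random()
                child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            if rng.random() < mutation_rate:    # Gaussian mutation, clipped
                i = rng.randrange(dim)
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            new_pop.append(child)
        pop = new_pop
        best = min(pop + [best], key=objective)  # elitist best tracking
    return best

# toy objective standing in for the vortex-lattice performance analysis
sphere = lambda x: sum(v * v for v in x)
sol = real_coded_ga(sphere, [(-5.0, 5.0)] * 3)
```

A real design run would replace `sphere` with the coupled performance/acoustic objective and the bounds with the geometric design-variable ranges.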
Tennant, Alan; Küçükdeveci, Ayse A; Kutlay, Sehim; Elhan, Atilla H
2006-03-23
The Middlesex Elderly Assessment of Mental State (MEAMS) was developed as a screening test to detect cognitive impairment in the elderly. It includes 12 subtests, each having a 'pass score'. A series of tasks was undertaken to adapt the measure for use in the adult population in Turkey and to determine the validity of existing cut points for passing subtests, given the wide range of educational levels in the Turkish population. This study focuses on identifying and validating the scoring system of the MEAMS for the Turkish adult population. After the translation procedure, 350 normal subjects and 158 acquired brain injury patients were assessed with the Turkish version of the MEAMS. Initially, appropriate pass scores for the normal population were determined through ANOVA post-hoc tests according to age, gender and education. Rasch analysis was then used to test the internal construct validity of the scale and the validity of the cut points for pass scores on the pooled data, using Differential Item Functioning (DIF) analysis within the framework of the Rasch model. Data with the initially modified pass scores were analyzed. DIF was found for certain subtests by age and education, but not by gender. Following this, pass scores were further adjusted and the data re-fitted to the model. All subtests were found to fit the Rasch model (mean item fit 0.184, SD 0.319; person fit -0.224, SD 0.557) and DIF was then found to be absent. Thus the final pass scores for all subtests were determined. The MEAMS offers a valid assessment of cognitive state for the adult Turkish population, and the revised cut points accommodate age and education. Further studies are required to ascertain validity in different diagnostic groups.
Merli, Marcello; Pavese, Alessandro
2018-03-01
The critical points analysis of electron density, i.e. ρ(x), from ab initio calculations is used in combination with catastrophe theory to show a correlation between the topology of ρ(x) and the appearance of instability that may lead to transformations of crystal structures, as a function of pressure/temperature. In particular, this study focuses on the evolution of coalescing non-degenerate critical points, i.e. such that ∇ρ(x_c) = 0 and λ_1, λ_2, λ_3 ≠ 0 [λ being the eigenvalues of the Hessian of ρ(x) at x_c], towards degenerate critical points, i.e. ∇ρ(x_c) = 0 and at least one λ equal to zero. The catastrophe theory formalism provides a mathematical tool to model ρ(x) in the neighbourhood of x_c and allows one to rationalize the occurrence of instability in terms of electron-density topology and Gibbs energy. The phase/state transitions that TiO2 (rutile structure), MgO (periclase structure) and Al2O3 (corundum structure) undergo because of pressure and/or temperature are discussed here. An agreement of 3-5% is observed between the theoretical model and the experimental pressure/temperature of transformation.
NASA Astrophysics Data System (ADS)
Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.
2015-12-01
Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than the conventional kinematic variables (such as position and velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods are a good solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not hold in practice. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both functional-excitatory and functional-inhibitory neurons, which are widely distributed across the rat motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown on baseline and extreme high peaks, as our method better preserves the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (the normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves, respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding of EMG from a point process improves the normalized mean squared error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution and that, therefore, spike timing methodologies and estimation of appropriate tuning curves are needed for better EMG decoding in motor BMIs.
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
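The dynamic-programming scheme outlined above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the measure function is assumed to be set intersection, and the segment difference is taken as the summed symmetric-difference size between each time point's item set and the segment's item set.

```python
def segment_diff(points, i, j):
    """Difference between a segment's item set (intersection measure
    function) and the item sets of the time points in points[i:j]."""
    seg_set = set.intersection(*points[i:j])
    return sum(len(p ^ seg_set) for p in points[i:j])

def optimal_segmentation(points, k):
    """Split the series into k contiguous segments minimizing the total
    segment difference, via dynamic programming over segment boundaries."""
    n = len(points)
    INF = float("inf")
    # cost[i][j] = segment difference of points[i:j]
    cost = [[0] * (n + 1) for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n + 1):
            cost[i][j] = segment_diff(points, i, j)
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0
    for s in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(s - 1, j):
                c = dp[s - 1][i] + cost[i][j]
                if c < dp[s][j]:
                    dp[s][j], back[s][j] = c, i
    # recover the segment boundaries by backtracking
    cuts, j = [], n
    for s in range(k, 0, -1):
        cuts.append((back[s][j], j))
        j = back[s][j]
    return dp[k][n], cuts[::-1]

# tiny item-set time series: items could be files touched per commit, etc.
series = [{1, 2}, {1, 2}, {1, 2, 3}, {7, 8}, {7, 8, 9}]
total_cost, segments = optimal_segmentation(series, 2)
```

For this toy series the optimal 2-segmentation splits between the third and fourth time points, where the item sets change abruptly.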
Einsiedel, T.; Freund, W.; Sander, S.; Trnavac, S.; Gebhard, F.
2008-01-01
The aim of this study was to investigate whether the final displacement of conservatively treated distal radius fractures can be predicted after primary reduction. We analysed the radiographic documents of 311 patients with a conservatively treated distal radius fracture at the time of injury, after reduction and after bony consolidation. We measured the dorsal angulation (DA), the radial angle (RA) and the radial shortening (RS) at each time point. The parameters were analysed separately for metaphyseally “stable” (A2, C1) and “unstable” (A3, C2, C3) fractures, according to the AO classification system. Spearman’s rank correlations and regression functions were determined for the analysis. The highest correlations were found for the DA between the time points ‘reduction’ and ‘complete healing’ (r = 0.75) and for the RA between the time points ‘reduction’ and ‘complete healing’ (r = 0.80). The DA and the RA after complete healing can be predicted from the regression functions. PMID:18504577
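The kind of prediction described, a healed angle estimated from the post-reduction angle via a least-squares regression line, can be sketched with hypothetical data; the paper's measurements and regression coefficients are not reproduced here.

```python
import numpy as np

# Hypothetical paired measurements (degrees): dorsal angulation (DA)
# after reduction vs. after complete healing -- illustrative values only.
da_reduction = np.array([2.0, 5.0, 8.0, 11.0, 15.0, 20.0])
da_healed    = np.array([5.0, 9.0, 13.0, 16.0, 21.0, 27.0])

# Least-squares regression line: predicted healed DA from post-reduction DA
slope, intercept = np.polyfit(da_reduction, da_healed, 1)
predict_healed_da = lambda da: slope * da + intercept
```

In the study itself, Spearman's rank correlation quantified the strength of the association (r = 0.75 for DA), and the fitted regression function then served as the predictor.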
Serum Micronutrient Concentrations and Decline in Physical Function Among Older Persons
Bartali, Benedetta; Frongillo, Edward A.; Guralnik, Jack M.; Stipanuk, Martha H.; Allore, Heather G.; Cherubini, Antonio; Bandinelli, Stefania; Ferrucci, Luigi; Gill, Thomas M.
2009-01-01
Context Maintaining independence of older persons is a public health priority, and identifying the factors that contribute to decline in physical function is needed to prevent or postpone the disablement process. The potential deleterious effect of poor nutrition on decline in physical function in older persons is unclear. Objective To determine whether a low serum concentration of micronutrients is associated with subsequent decline in physical function among older men and women living in the community. Design, Setting, and Participants Longitudinal study of 698 community-living persons 65 years or older who were randomly selected from a population registry in Tuscany, Italy. Participants completed the baseline examination from November 1, 1998, through May 28, 2000, and the 3-year follow-up assessments from November 1, 2001, through March 30, 2003. Main Outcome Measure Decline in physical function was defined as a loss of at least 1 point in the Short Physical Performance Battery during the 3-year follow-up. Odds ratios (ORs) were calculated for the lowest quartile of each nutrient using the other 3 quartiles combined as the reference group. Two additional and complementary analytical approaches were used to confirm the validity of the results. Results The mean decline in the Short Physical Performance Battery score was 1.1 points. In a logistic regression analysis that was adjusted for potential confounders, only a low concentration of vitamin E (<1.1 μg/mL [<24.9 μmol/L]) was significantly associated with subsequent decline in physical function (OR, 1.62; 95% confidence interval, 1.11-2.36; P=.01 for association of lowest α-tocopherol quartile with at least a 1-point decline in physical function).
In a general linear model, the concentration of vitamin E at baseline, when analyzed as a continuous measure, was significantly associated with the Short Physical Performance Battery score at follow-up after adjustment for potential confounders and Short Physical Performance Battery score at baseline (β=.023; P=.01). In a classification and regression tree analysis, age older than 81 years and vitamin E (in participants aged 70-80 years) were identified as the strongest determinants of decline in physical function (physical decline in 84% and 60%, respectively; misclassification error rate, 0.33). Conclusions These results provide empirical evidence that a low serum concentration of vitamin E is associated with subsequent decline in physical function among community-living older adults. Clinical trials may be warranted to determine whether an optimal concentration of vitamin E reduces functional decline and the onset of disability in older persons. PMID:18212315
A Short Note on the Scaling Function Constant Problem in the Two-Dimensional Ising Model
NASA Astrophysics Data System (ADS)
Bothner, Thomas
2018-02-01
We provide a simple derivation of the constant factor in the short-distance asymptotics of the tau-function associated with the 2-point function of the two-dimensional Ising model. This factor was first computed by Tracy (Commun Math Phys 142:297-311, 1991) via an exponential series expansion of the correlation function. Further simplifications in the analysis are due to Tracy and Widom (Commun Math Phys 190:697-721, 1998) using Fredholm determinant representations of the correlation function and Wiener-Hopf approximation results for the underlying resolvent operator. Our method relies on an action integral representation of the tau-function and asymptotic results for the underlying Painlevé-III transcendent from McCoy et al. (J Math Phys 18:1058-1092, 1977).
Comparative Analysis of Membership Function on Mamdani Fuzzy Inference System for Decision Making
NASA Astrophysics Data System (ADS)
harliana, Putri; Rahim, Robbi
2017-12-01
A membership function is a curve that maps input data points to a membership value (degree of membership) in the interval between 0 and 1. One way to obtain membership values is through a function approach. Several membership functions can be used in a Mamdani fuzzy inference system: triangular, trapezoid, singleton, sigmoid, Gaussian, etc. This paper discusses only three of them: triangular, trapezoid and Gaussian. These three membership functions are compared to see the differences in parameter values and in the results obtained. The case study in this paper is the admission of students at a popular school. Three variables are used: students' reports, IQ score and parents' income, from which if-then rules are then created.
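The three membership functions compared in the paper can be written down directly. This is a generic sketch; the parameter values below (e.g., for fuzzifying an IQ score into a "high" set) are illustrative, not taken from the case study.

```python
import math

def triangular(x, a, b, c):
    """Triangular membership: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: ramps up over a..b, flat over b..c,
    ramps down over c..d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def gaussian(x, mean, sigma):
    """Gaussian membership centred on `mean` with spread `sigma`."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

# hypothetical fuzzification of an IQ score of 105 into a "high IQ" set
mu_tri  = triangular(105, 90, 110, 130)      # 0.75
mu_trap = trapezoid(105, 90, 100, 120, 130)  # 1.0
mu_gau  = gaussian(105, 110, 10)             # ≈ 0.882
```

The same input thus receives a different degree of membership under each curve, which is exactly the parameter/result difference the comparison examines.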
Kan, Wei; Fang, Fengqin; Chen, Lin; Wang, Ruige; Deng, Qigang
2016-05-01
The sterile alpha motif (SAM) domain of the protein ANKS6, a protein-protein interaction domain, is implicated in autosomal dominant polycystic kidney disease. Although the disease results from the R823W point mutation in the SAM domain of ANKS6, the molecular details are still unclear. We applied molecular dynamics simulations, principal component analysis, and molecular mechanics Poisson-Boltzmann surface area binding free energy calculations to explore the structural and dynamic effects of the R823W point mutation on the ANKS6-ANKS3 complex (PDB ID: 4NL9) in comparison with the wild-type proteins. The energetic analysis shows that the wild type has a more stable structure than the mutant. The R823W point mutation not only disrupts the structure of the ANKS6 SAM domain but also negatively affects the ANKS6-ANKS3 interaction. These results further clarify the previous experiments toward a comprehensive understanding of the ANKS6-ANKS3 interaction. In summary, this study provides useful insight into the interaction of these proteins and the deleterious consequences of its disruption for kidney function.
Contour matching for a fish recognition and migration-monitoring system
NASA Astrophysics Data System (ADS)
Lee, Dah-Jye; Schoenberger, Robert B.; Shiozawa, Dennis; Xu, Xiaoqian; Zhan, Pengcheng
2004-12-01
Fish migration is being monitored year round to provide valuable information for the study of behavioral responses of fish to environmental variations. However, currently all monitoring is done by human observers. An automatic fish recognition and migration-monitoring system is more efficient and can provide more accurate data. Such a system includes automatic fish image acquisition, contour extraction, fish categorization, and data storage. Shape is a very important characteristic, and shape analysis and shape matching are studied here for fish recognition. Previous work focused on finding critical landmark points on the fish shape using curvature function analysis. Fish recognition based on landmark points has shown satisfying results; however, the main difficulty of this approach is that landmark points sometimes cannot be located very accurately. Whole-shape matching is used for fish recognition in this paper. Several shape descriptors, such as Fourier descriptors, polygon approximation and line segments, are tested. A power cepstrum technique has been developed to improve the categorization speed, using contours represented in tangent space with normalized length. Design and integration, including image acquisition, contour extraction and fish categorization, are discussed in this paper. Fish categorization results based on shape analysis and shape matching are also included.
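Whole-shape matching with Fourier descriptors, one of the descriptors tested above, can be sketched as follows. This is a generic illustration, not the authors' pipeline: the contour is treated as a complex sequence, and invariance to translation, scale, rotation, and start point is obtained by dropping the DC term, normalizing by the first harmonic, and keeping magnitudes only.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=10):
    """Similarity-invariant Fourier descriptors of a closed contour
    given as an (N, 2) array of (x, y) boundary points."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex boundary signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                          # drop DC term: translation invariance
    mags = np.abs(coeffs)                    # magnitudes: rotation/start-point invariance
    mags /= mags[1]                          # normalize by first harmonic: scale invariance
    return mags[1:n_coeffs + 1]

def shape_distance(c1, c2, n_coeffs=10):
    """Euclidean distance between the descriptor vectors of two contours."""
    return np.linalg.norm(fourier_descriptors(c1, n_coeffs)
                          - fourier_descriptors(c2, n_coeffs))

# a circle compared with a scaled and translated copy of itself
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
shifted = 3.0 * circle + np.array([5.0, -2.0])
```

Because the descriptors are similarity-invariant, the distance between the circle and its transformed copy is essentially zero, whereas contours of differently shaped fish yield distinct descriptor vectors for categorization.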
Hillier, Susan; English, Coralie; Crotty, Maria; Segal, Leonie; Bernhardt, Julie; Esterman, Adrian
2011-12-01
There is strong evidence for a dose-response relationship between physical therapy early after stroke and recovery of function. The optimal method of maximizing physical therapy within finite health care resources is unknown. To determine the effectiveness and cost-effectiveness of two alternative models of physical therapy service delivery (seven-day-per-week therapy services or group circuit class therapy five days a week) compared with usual care for people receiving inpatient rehabilitation after stroke. Multicenter, three-armed randomized controlled trial with blinded assessment of outcomes. A total of 282 people admitted to inpatient rehabilitation facilities after stroke with an admission functional independence measure (FIM) score within the moderate range (total 40-80 points or motor 38-62 points) will be randomized to receive one of three interventions:
• usual care therapy five days a week,
• standard care therapy seven days a week, or
• group circuit class therapy five days a week.
Participants will receive the allocated intervention for the length of their hospital stay. Analysis will be by intention-to-treat. The primary outcome measure is walking ability (six-minute walk test) at four weeks postintervention, with three- and six-month follow-up. Economic analysis will include a costing analysis based on length of hospital stay and staffing/resource costs, and a cost-utility analysis (incremental quality of life per incremental cost, relative to usual care). Secondary outcomes include walking speed and independence, ability to perform activities of daily living, arm function, quality of life and participant satisfaction. © 2011 The Authors. International Journal of Stroke © 2011 World Stroke Organization.
de Souto Barreto, Philipe; Cesari, Matteo; Denormandie, Philippe; Armaingaud, Didier; Vellas, Bruno; Rolland, Yves
2017-09-01
To compare the effects of exercise with those of a structured nonphysical intervention on ability to perform activities of daily living (ADLs) and physical and cognitive function of persons with dementia (PWDs) living in nursing homes (NH). Cluster-randomized pilot-controlled trial. Seven French NHs. PWDs living in NHs. NHs were randomized to an exercise group (4 NHs, n = 47) or structured social activity group (3 NHs, n = 50) for a 24-week intervention performed twice per week for 60 minutes per session. The main endpoint was ADL performance (Alzheimer's Disease Cooperative Study Activities of Daily Living Inventory for Severe Alzheimer's Disease Scale (ADCS-ADL-sev); range 0-54, higher is better); secondary endpoints were overall cognitive function (Mini-Mental State Examination (MMSE)) and performance-based tests of physical function (Short Physical Performance Battery (SPPB), usual gait speed). Ninety-one participants with at least one postbaseline ADL assessment were included in efficacy analysis. Groups differed at baseline in terms of sex, neuropsychiatric symptoms, and nutritional status. Multilevel analysis adjusted for baseline differences between groups found no significant difference between effects of exercise and social activity (group-by-time interaction), with adjusted mean differences at 6 months of 1.9 points for ADCS-ADL-sev and 0.55 points for MMSE favoring social activity and 0.6 points for SPPB and 0.05 m/s favoring exercise. Adverse events did not differ between groups, except that the social activity group had more falls than the exercise group. A larger, longer trial is required to determine whether exercise has greater health benefits than nonphysical interventions for institutionalized PWDs. © 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.
Quantification of topological changes of vorticity contours in two-dimensional Navier-Stokes flow.
Ohkitani, Koji; Al Sulti, Fayeza
2010-06-01
A characterization of reconnection of vorticity contours is made by direct numerical simulations of two-dimensional Navier-Stokes flow at a relatively low Reynolds number. We identify all the critical points of the vorticity field and classify them by solving an eigenvalue problem of its Hessian matrix on the basis of critical-point theory. The numbers of hyperbolic (saddle) and elliptic (minimum and maximum) points are confirmed numerically to satisfy Euler's index theorem. Time evolution of these indices is studied for a simple initial condition; generally speaking, the indices are found to decrease in number with time. This result is discussed in connection with related work on streamline topology, in particular the relationship between stagnation points and the dissipation. The associated elementary processes in physical space, mergers of vortices, are studied in detail for a number of snapshots. A similar analysis is also done using the stream function.
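The classification step, solving an eigenvalue problem of the vorticity Hessian at each critical point, can be sketched as follows. A Taylor-Green-like field stands in for the simulated vorticity, and the finite-difference step size is an assumption.

```python
import numpy as np

def hessian_eigs(f, x, y, h=1e-4):
    """Finite-difference Hessian eigenvalues of a scalar field f at (x, y)."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return np.linalg.eigvalsh(np.array([[fxx, fxy], [fxy, fyy]]))

def classify(f, x, y):
    """Elliptic (minimum or maximum) if the Hessian eigenvalues share a
    sign, hyperbolic (saddle) if they differ; these carry Poincare index
    +1 and -1 respectively in Euler's index theorem."""
    l1, l2 = hessian_eigs(f, x, y)
    return "elliptic" if l1 * l2 > 0 else "hyperbolic"

# Taylor-Green-like vorticity field as a stand-in for the simulated one;
# (0, 0) is a maximum, (pi/2, pi/2) is a saddle of this field.
omega = lambda x, y: np.cos(x) * np.cos(y)
kind_at_origin = classify(omega, 0.0, 0.0)
kind_at_saddle = classify(omega, np.pi / 2, np.pi / 2)
```

Counting the two classes over all critical points and forming (number of elliptic points) minus (number of hyperbolic points) gives the index whose evolution the study tracks.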
NASA Technical Reports Server (NTRS)
Frew, A. M.; Eisenhut, D. F.; Farrenkopf, R. L.; Gates, R. F.; Iwens, R. P.; Kirby, D. K.; Mann, R. J.; Spencer, D. J.; Tsou, H. S.; Zaremba, J. G.
1972-01-01
The precision pointing control system (PPCS) is an integrated system for precision attitude determination and orientation of gimbaled experiment platforms. The PPCS concept configures the system to perform orientation of up to six independent gimbaled experiment platforms to design goal accuracy of 0.001 degrees, and to operate in conjunction with a three-axis stabilized earth-oriented spacecraft in orbits ranging from low altitude (200-2500 n.m., sun synchronous) to 24 hour geosynchronous, with a design goal life of 3 to 5 years. The system comprises two complementary functions: (1) attitude determination where the attitude of a defined set of body-fixed reference axes is determined relative to a known set of reference axes fixed in inertial space; and (2) pointing control where gimbal orientation is controlled, open-loop (without use of payload error/feedback) with respect to a defined set of body-fixed reference axes to produce pointing to a desired target.
Investigation of the Parameters of Sealed Triple-Point Cells for Cryogenic Gases
NASA Astrophysics Data System (ADS)
Fellmuth, B.; Wolber, L.
2011-01-01
An overview is given of the parameters of a large number of sealed triple-point cells for the cryogenic gases hydrogen, oxygen, neon, and argon, determined within the framework of an international star intercomparison, to optimize the measurement of melting curves and to establish complete and reliable uncertainty budgets for the realization of temperature fixed points. Special emphasis is given to the question of whether the parameters are primarily influenced by the cell design or by the properties of the fixed-point samples. To explain the surprisingly long thermal-recovery periods after the heat pulses of the intermittent heating through the melting range, a simple model is developed based on a newly defined heat-capacity equivalent, which takes into account the heat of fusion and a melting-temperature inhomogeneity. The analysis of the recovery using a graded set of exponential functions containing different time constants is also explained in detail.
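The recovery analysis with a graded set of exponential functions can be sketched as a linear least-squares fit once the time constants are fixed. The model form, time constants, and synthetic data below are assumptions for illustration, not the paper's values.

```python
import numpy as np

def fit_recovery(t, T, taus):
    """Least-squares fit of a post-heat-pulse recovery curve with a graded
    set of fixed time constants: T(t) = T_inf - sum_i a_i * exp(-t/tau_i).
    With the taus fixed, the problem is linear in (T_inf, a_i)."""
    A = np.column_stack([np.ones_like(t)] + [-np.exp(-t / tau) for tau in taus])
    coef, *_ = np.linalg.lstsq(A, T, rcond=None)
    return coef[0], coef[1:]   # plateau temperature, amplitudes

# synthetic recovery: a plateau near 24.5 K approached with two time constants
t = np.linspace(0.0, 600.0, 200)                                   # seconds
T = 24.5 - 0.003 * np.exp(-t / 30.0) - 0.001 * np.exp(-t / 200.0)  # kelvin
T_inf, amps = fit_recovery(t, T, taus=[30.0, 200.0])
```

The fitted amplitudes indicate how much of the recovery is governed by each time constant, which is the kind of decomposition the analysis of the melting-curve plateaus relies on.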
Isolation of rat adrenocortical mitochondria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solinas, Paola; Department of Medicine, Center for Mitochondrial Disease, School of Medicine, Case Western Reserve University, Cleveland, OH 44106; Fujioka, Hisashi
2012-10-12
Highlights: • A method for isolation of adrenocortical mitochondria from the adrenal gland of rats is described. • The purified isolated mitochondria show excellent morphological integrity. • The properties of oxidative phosphorylation are excellent. • The method increases the opportunity of direct analysis of adrenal mitochondria from small animals. -- Abstract: This report describes a relatively simple and reliable method for isolating adrenocortical mitochondria from rats in good, reasonably pure yield. These organelles, which heretofore have been unobtainable in isolated form from small laboratory animals, are now readily accessible. A high degree of mitochondrial purity is shown by the electron micrographs, as well as the structural integrity of each mitochondrion. That these organelles have retained their functional integrity is shown by their high respiratory control ratios. In general, the biochemical performance of these adrenal cortical mitochondria closely mirrors that of typical hepatic or cardiac mitochondria.
The Musical Culture of an "Inuk" Teenager
ERIC Educational Resources Information Center
Piercey, Mary E.
2008-01-01
This article uses music as a point of entry into the understanding of Inuit culture. I demonstrate how the analysis of the song repertoire of an Inuk teenager reveals some functions and meanings that her song choices have for her in the particular Inuit culture of Arviat, Nunavut. I present four informally learned songs from my informant Gara…
Programme Costing - A Logical Step Toward Improved Management.
ERIC Educational Resources Information Center
McDougall, Ronald N.
The analysis of costs of university activities from a functional or program point of view, rather than on an organizational-unit basis, is not only an imperative for the planning and management of universities, but also a logical method of examining the costs of university operations. A task force of the Committee of Finance Officers-Universities of…
ERIC Educational Resources Information Center
Kirjavainen, Tanja
2012-01-01
Different stochastic frontier models for panel data are used to estimate education production functions and the efficiency of Finnish general upper secondary schools. Grades in the matriculation examination are used as an output and explained with the comprehensive school grade point average, parental socio-economic background, school resources,…
Night and Day: The Interaction Between an Academic Institution and Its Evening College.
ERIC Educational Resources Information Center
Jacobson, Myrtle S.
An organizational study of the dynamics of interaction between the parent college and one of its component units is presented. The analysis is not limited to formal organizational structure and function. At relevant points, the dynamics of informal groupings and relationships are introduced. The research involved examination of a vast number of…
ERIC Educational Resources Information Center
Parker, Edwin B.
SPIRES (Stanford Public Information Retrieval System) is a computerized information storage and retrieval system intended for use by students and faculty members who have little knowledge of computers but who need rapid and sophisticated retrieval and analysis. The functions and capabilities of the system from the user's point of view are…
The Concurrent Engineering Design Paradigm Is Now Fully Functional for Graphics Education
ERIC Educational Resources Information Center
Krueger, Thomas J.; Barr, Ronald E.
2007-01-01
Engineering design graphics education has come a long way in the past two decades. The emergence of solid geometric modeling technology has become the focal point for the graphical development of engineering design ideas. The main attraction of this 3-D modeling approach is the downstream application of the data base to analysis and…
Problem-Based Test: Functional Analysis of Mutant 16S rRNAs
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2010-01-01
Terms to be familiar with before you start to solve the test: ribosome, ribosomal subunits, antibiotics, point mutation, 16S, 5S, and 23S rRNA, Shine-Dalgarno sequence, mRNA, tRNA, palindrome, hairpin, restriction endonuclease, fMet-tRNA, peptidyl transferase, initiation, elongation, termination of translation, expression plasmid, transformation,…
USDA-ARS?s Scientific Manuscript database
The growing incidence of chronic wounds in the world population has prompted increased interest in chronic wound dressings with protease-modulating activity and protease point of care sensors to treat and enable monitoring of elevated protease-based wound pathology. However, the overall design featu...
Sawada, H
1995-10-01
This study aimed at a descriptive understanding of traditional methods of locating fishing points at sea and navigating to them, and at investigating the associated cognitive activities. Participant observations and interviews were conducted with more than 30 fishermen who employed hand-line or long-line fishing methods near Toyoshima Island, Hiroshima Prefecture. The main findings were: (1) Fishermen readily perceived environmental cues when locating fishing points, which enabled them to navigate to a correct point on the sea. (2) Their memory of fishing points was not verbal but visual, directly tied to the cue perception, and was constantly renewed during fishing activities. (3) They grasped configurations of various natural conditions (e.g., swiftness of the tide, surface structure of the sea bottom) through tactile information from the fishing line, and comprehended their surroundings with accumulated knowledge and inductive inferences. And (4) their cognitive processes of perception, memory, and understanding were functionally coordinated in the series of fishing work.
Clavijo, Raul I; Kohn, Taylor P; Kohn, Jaden R; Ramasamy, Ranjith
2017-01-01
Low-intensity extracorporeal shock wave therapy (Li-ESWT) has been proposed as an effective non-invasive treatment option for erectile dysfunction (ED). To use systematic review and meta-analysis to assess the efficacy of Li-ESWT by comparing change in erectile function as assessed by the erectile function domain of the International Index of Erectile Function (IIEF-EF) in men undergoing Li-ESWT vs sham therapy for the treatment of ED. Systematic search was conducted of MEDLINE, EMBASE, and ClinicalTrials.gov for randomized controlled trials that were published in peer-reviewed journals or presented in abstract form of Li-ESWT used for the treatment of ED from January 2010 through March 2016. Randomized controlled trials were eligible for inclusion if they were published in the peer-reviewed literature and assessed erectile function outcomes using the IIEF-EF score. Estimates were pooled using random-effects meta-analysis. Change in IIEF-EF score after treatment with Li-ESWT in patients treated with active treatment vs sham Li-ESWT probes. Data were extracted from seven trials involving 602 participants. The average age was 60.7 years and the average follow-up was 19.8 weeks. There was a statistically significant improvement in pooled change in IIEF-EF score from baseline to follow-up in men undergoing Li-ESWT vs those undergoing sham therapy (6.40 points; 95% CI = 1.78-11.02; I² = 98.7%; P < .0001 vs 1.65 points; 95% CI = 0.92-2.39; I² = 64.6%; P < .0001; between-group difference, P = .047). Significant between-group differences were found for total treatment shocks received by patients (P < .0001). In this meta-analysis of seven randomized controlled trials, treatment of ED with Li-ESWT resulted in a significant increase in IIEF-EF scores. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
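Random-effects pooling of this kind is commonly done with the DerSimonian-Laird estimator; a sketch follows. The trial-level numbers in the example are illustrative, not the seven trials' actual data.

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-trial effect
    estimates (e.g., mean IIEF-EF change) with their variances.
    Returns the pooled effect, a 95% CI, and the I-squared statistic."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-trial variance
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, ci, i2

# illustrative per-trial IIEF-EF mean changes and variances (not the paper's data)
pooled, ci, i2 = dersimonian_laird([6.4, 5.0, 7.1], [1.2, 0.9, 1.5])
```

The I² value quantifies between-trial heterogeneity, which is why the abstract reports it alongside each pooled change.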
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinoshita, Takashi; Nohata, Nijiro; Fuse, Miki
Highlights: • Tumor suppressive microRNA-133a regulates moesin (MSN) expression in HNSCC. • Silencing of MSN in HNSCC cells suppressed proliferation, migration and invasion. • The expression level of MSN was significantly up-regulated in cancer tissues. -- Abstract: Recently, many studies suggest that microRNAs (miRNAs) contribute to the development, invasion and metastasis of various types of human cancers. Our recent study revealed that expression of microRNA-133a (miR-133a) was significantly reduced in head and neck squamous cell carcinoma (HNSCC) and that restoration of miR-133a inhibited cell proliferation, migration and invasion in HNSCC cell lines, suggesting that miR-133a functions as a tumor suppressor. Genome-wide gene expression analysis of miR-133a transfectants and the TargetScan database showed that moesin (MSN) was a promising candidate miR-133a target gene. MSN is a member of the ERM (ezrin, radixin and moesin) protein family; ERM proteins function as cross-linkers between the plasma membrane and the actin-based cytoskeleton. The functions of MSN in cancers are controversial in previous reports. In this study, we focused on MSN and investigated whether MSN is regulated by tumor suppressive miR-133a and contributes to HNSCC oncogenesis. Restoration of miR-133a in HNSCC cell lines (FaDu, HSC3, IMC-3 and SAS) suppressed MSN expression at both the mRNA and protein levels. Silencing of MSN in HNSCC cell lines demonstrated significant inhibition of cell proliferation, migration and invasion in si-MSN transfectants. In clinical specimens of HNSCC, the expression level of MSN was significantly up-regulated in cancer tissues compared to adjacent non-cancerous tissues. These data suggest that MSN may function as an oncogene and is regulated by tumor suppressive miR-133a. Our analysis of novel tumor-suppressive miR-133a-mediated cancer pathways could provide new insights into the potential mechanisms of HNSCC oncogenesis.
Keller, Anastasia V P; Wainwright, Grace; Shum-Siu, Alice; Prince, Daniella; Hoeper, Alyssa; Martin, Emily; Magnuson, David S K
2017-02-01
After spinal cord injury (SCI), muscle contractures develop in the plegic limbs of many patients. Physical therapists commonly use stretching as an approach to avoid contractures and to maintain the extensibility of soft tissues. We found previously that a daily stretching protocol has a negative effect on locomotor recovery in rats with mild thoracic SCI. The purpose of the current study was to determine the effects of stretching on locomotor function at acute and chronic time points after moderately severe contusive SCI. Female Sprague-Dawley rats with 25 g-cm T10 contusion injuries received our standard 24-min stretching protocol starting 4 days (acutely) or 10 weeks (chronically) post-injury (5 days/week for 5 or 4 weeks, respectively). Locomotor function was assessed using the BBB (Basso, Beattie, and Bresnahan) Open Field Locomotor Scale, video-based kinematics, and gait analysis. Locomotor deficits were evident in the acute animals after only 5 days of stretching, and increasing the perceived intensity of stretching at week 4 resulted in greater impairment. Stretching initiated chronically resulted in dramatic decrements in locomotor function, with most animals having BBB scores of 0-3 for weeks 2, 3, and 4 of stretching. Locomotor function recovered to control levels for both groups within 2 weeks once daily stretching ceased. Histological analysis revealed no apparent signs of overt and persistent damage to muscles undergoing stretching. The current study extends our observations of the stretching phenomenon to a more clinically relevant moderately severe SCI animal model. The results are in agreement with our previous findings and further demonstrate that spinal cord locomotor circuitry is especially vulnerable to the negative effects of stretching at chronic time points. While the clinical relevance of this phenomenon remains unknown, we speculate that stretching may contribute to the lack of locomotor recovery in some patients.
Evolution of Cognitive Function After Transcatheter Aortic Valve Implantation.
Schoenenberger, Andreas W; Zuber, Chantal; Moser, André; Zwahlen, Marcel; Wenaweser, Peter; Windecker, Stephan; Carrel, Thierry; Stuck, Andreas E; Stortecky, Stefan
2016-10-01
This study aimed to assess the evolution of cognitive function after transcatheter aortic valve implantation (TAVI). Previous smaller studies reported conflicting results on the evolution of cognitive function after TAVI. In this prospective cohort, cognitive function was measured in 229 patients aged ≥70 years using the Mini Mental State Examination (MMSE) before and 6 months after TAVI. Cognitive deterioration or improvement was defined as a decrease or increase of ≥3 points in the MMSE score between baseline and follow-up. Cognitive deterioration was found in 29 patients (12.7%). Predictive analysis using logistic regression did not identify any statistically significant predictor of cognitive deterioration. A review of individual medical records in 8 patients with a major MMSE score decrease of ≥5 points revealed specific causes in 6 cases (postinterventional delirium in 2; postinterventional stroke, progressive renal failure, progressive heart failure, or a combination of preexisting cerebrovascular disease and mild cognitive impairment in 1 each). Among 48 patients with impaired baseline cognition (MMSE score <26 points), 18 patients (37.5%) cognitively improved. The preinterventional aortic valve area was lower in patients who cognitively improved (median aortic valve area 0.60 cm²) as compared with patients who did not improve (median aortic valve area 0.70 cm²; P=0.01). This is the first study providing evidence that TAVI results in cognitive improvement among patients who had impaired preprocedural cognitive function, possibly related to hemodynamic improvement in patients with severe aortic stenosis. Our results confirm that some patients experience cognitive deterioration after TAVI. © 2016 American Heart Association, Inc.
Chronic abdominal wall pain misdiagnosed as functional abdominal pain.
van Assen, Tijmen; de Jager-Kievit, Jenneke W A J; Scheltinga, Marc R; Roumen, Rudi M H
2013-01-01
The abdominal wall is often neglected as a cause of chronic abdominal pain. The aim of this study was to identify chronic abdominal wall pain syndromes, such as anterior cutaneous nerve entrapment syndrome (ACNES), in a patient population diagnosed with functional abdominal pain, including irritable bowel syndrome (IBS), using a validated 18-item questionnaire as an identification tool. In this cross-sectional analysis, 4 Dutch primary care practices employing physicians who were unaware of the existence of ACNES were selected. A total of 535 patients ≥18 years old who were registered with a functional abdominal pain diagnosis were approached, when symptomatic, to complete the questionnaire (maximum 18 points). Responders who scored at least the 10-point cutoff value (sensitivity, 0.94; specificity, 0.92) underwent a diagnostic evaluation to establish their final diagnosis. The main outcome was the presence and prevalence of ACNES in a group of symptomatic patients diagnosed with functional abdominal pain. Of 535 patients, 304 (57%) responded; 167 subjects (31%) recently reporting symptoms completed the questionnaire. Of 23 patients who scored above the 10-point cutoff value, 18 were available for a diagnostic evaluation. In half of these subjects (n = 9) functional abdominal pain (including IBS) was confirmed. However, the other 9 patients were suffering from an abdominal wall pain syndrome, 6 of whom were diagnosed with ACNES (3.6% prevalence rate among symptomatic subjects; 95% confidence interval, 1.7-7.6), whereas the remaining 3 harbored a painful lipoma, an abdominal herniation, and a painful scar. A clinically relevant proportion of patients previously diagnosed with functional abdominal pain syndrome in a primary care environment suffers from an abdominal wall pain syndrome such as ACNES.
Homoclinic orbits and critical points of barrier functions
NASA Astrophysics Data System (ADS)
Cannarsa, Piermarco; Cheng, Wei
2015-06-01
We study the close link between the critical points of Mather's barrier functions and minimal homoclinic orbits with respect to the Aubry sets on T^n. We also prove a critical point theorem for barrier functions and, as an application, the existence of such homoclinic orbits on T^2.
Infrared divergences for free quantum fields in cosmological spacetimes
NASA Astrophysics Data System (ADS)
Higuchi, Atsushi; Rendell, Nicola
2018-06-01
We investigate the nature of infrared divergences for the free graviton and inflaton two-point functions in flat Friedmann–Lemaître–Robertson–Walker spacetime. These divergences arise because the momentum integral for these two-point functions diverges in the infrared. It is straightforward to see that the power of the momentum in the integrand can be increased by 2 in the infrared using large gauge transformations, which is sufficient for rendering these two-point functions infrared finite for slow-roll inflation. In other words, if the integrand of the momentum integral for these two-point functions behaves like some power of the momentum p in the infrared, then that power can be raised by 2 by large gauge transformations. On the other hand, it is known that, if one smears these two-point functions in a gauge-invariant manner, the power of the momentum in the integrand is raised by 4. This fact suggests that the power of the momentum in the integrand for these two-point functions can likewise be increased by 4 using large gauge transformations. In this paper we show that this is indeed the case. Thus, the two-point functions for the graviton and inflaton fields can be made finite by large gauge transformations for a large class of potentials and states in single-field inflation.
Chen, Ruey; Chan, Pi-Tuan; Chu, Hsin; Lin, Yu-Cih; Chang, Pi-Chen; Chen, Chien-Yu; Chou, Kuei-Ru
2017-01-01
This is the first meta-analysis to compare the treatment effects and safety of administering donepezil alone versus a combination of memantine and donepezil to treat patients with moderate to severe Alzheimer Disease, particularly regarding cognitive functions, behavioral and psychological symptoms in dementia (BPSD), and global functions. PubMed, Medline, Embase, PsycINFO, and Cochrane databases were used to search for English and non-English articles for inclusion in the meta-analysis to evaluate the effect size and incidence of adverse drug reactions of different treatments. Compared with patients who received donepezil alone, those who received donepezil in combination with memantine exhibited limited improvements in cognitive functions (g = 0.378, p < .001), BPSD (g = -0.878, p < .001) and global functions (g = -0.585, p = .004). Gradual titration of memantine plus a fixed dose and gradual titration of donepezil, as well as a fixed dose and gradual titration of memantine, resulted in limited improvements in cognitive functions (g = 0.371, p = .005), BPSD (g = -0.913, p = .001), and global functions (g = -0.371, p = .001). Both in the 24th week and at the final evaluation point, the combination of donepezil and memantine led to greater improvement in cognitive functions, BPSD, and global functions than did donepezil alone in patients with moderate to severe Alzheimer Disease.
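Effect sizes like the g values quoted above (Hedges' g) can be computed from group means, standard deviations and sample sizes. A minimal sketch of the standard formula; the function name and example numbers are ours, not data from the meta-analysis:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g between two groups, with the small-sample correction J."""
    # Pooled standard deviation across both groups.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                    # Cohen's d
    J = 1 - 3 / (4 * (n1 + n2) - 9)      # small-sample bias correction
    return J * d

# Example with made-up summary statistics for two groups of 10 patients:
g = hedges_g(1.0, 1.0, 10, 0.0, 1.0, 10)
```

With equal standard deviations and a one-SD mean difference, g is simply the correction factor J times 1.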
The engine fuel system fault analysis
NASA Astrophysics Data System (ADS)
Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei
2017-05-01
To improve the reliability of the engine fuel system, typical fault factors of the engine fuel system were analyzed from the structural and functional points of view. The fault characteristics were obtained by building a fault tree for the fuel system. Using failure mode and effects analysis (FMEA), several attributes of the key component, the fuel regulator, were obtained, including its fault modes, fault causes, and fault effects. This work lays the foundation for the subsequent development of a fault diagnosis system.
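The fault-tree step described above can be illustrated in a few lines: gates combine the truth values of basic events into a top event. The gate structure and event names below are hypothetical, not the tree built in the paper:

```python
def evaluate(gate, basic_events):
    """Evaluate a fault tree given truth values of basic events.
    A gate is ('AND'|'OR', [children]); a leaf is an event name."""
    if isinstance(gate, str):
        return basic_events[gate]
    op, children = gate
    results = [evaluate(c, basic_events) for c in children]
    return all(results) if op == 'AND' else any(results)

# Hypothetical top event: fuel-system failure occurs if the regulator
# sticks OR both the main and backup pumps fail.
tree = ('OR', ['regulator_stuck',
               ('AND', ['main_pump_fail', 'backup_pump_fail'])])
```

Evaluating the tree over all combinations of basic events yields the minimal cut sets that FMEA-style analysis ranks by cause and effect.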
Yamamoto, Tokihiro; Kabus, Sven; Bal, Matthieu; Bzdusek, Karl; Keall, Paul J; Wright, Cari; Benedict, Stanley H; Daly, Megan E
2018-05-04
Lung functional image guided radiation therapy (RT) that avoids irradiating highly functional regions has potential to reduce pulmonary toxicity following RT. Tumor regression during RT is common, leading to recovery of lung function. We hypothesized that computed tomography (CT) ventilation image-guided treatment planning reduces the functional lung dose compared to standard anatomic image-guided planning in 2 different scenarios with or without plan adaptation. CT scans were acquired before RT and during RT at 2 time points (16-20 Gy and 30-34 Gy) for 14 patients with locally advanced lung cancer. Ventilation images were calculated by deformable image registration of four-dimensional CT image data sets and image analysis. We created 4 treatment plans at each time point for each patient: functional adapted, anatomic adapted, functional unadapted, and anatomic unadapted plans. Adaptation was performed at 2 time points. Deformable image registration was used for accumulating dose and calculating a composite of dose-weighted ventilation used to quantify the lung accumulated dose-function metrics. The functional plans were compared with the anatomic plans for each scenario separately to investigate the hypothesis at a significance level of 0.05. Tumor volume was significantly reduced by 20% after 16 to 20 Gy (P = .02) and by 32% after 30 to 34 Gy (P < .01) on average. In both scenarios, the lung accumulated dose-function metrics were significantly lower in the functional plans than in the anatomic plans without compromising target volume coverage and adherence to constraints to critical structures. For example, functional planning significantly reduced the functional mean lung dose by 5.0% (P < .01) compared to anatomic planning in the adapted scenario and by 3.6% (P = .03) in the unadapted scenario. 
This study demonstrated significant reductions in the accumulated dose to the functional lung with CT ventilation image-guided planning compared to anatomic image-guided planning for patients showing tumor regression and changes in regional ventilation during RT. Copyright © 2018 Elsevier Inc. All rights reserved.
Higher order correlations of IRAS galaxies
NASA Technical Reports Server (NTRS)
Meiksin, Avery; Szapudi, Istvan; Szalay, Alexander
1992-01-01
The higher order irreducible angular correlation functions are derived up to the eight-point function, for a sample of 4654 IRAS galaxies, flux-limited at 1.2 Jy in the 60 micron band. The correlations are generally found to be somewhat weaker than those for optically selected galaxies, consistent with the visual impression of looser clusters in the IRAS sample. It is found that the N-point correlation functions can be expressed as the symmetric sum of products of N - 1 two-point functions, although the correlations above the four-point function are consistent with zero. The coefficients are consistent with the hierarchical clustering scenario as modeled by Hamilton and by Schaeffer.
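For concreteness, the hierarchical form referred to above reads, at the three-point level (standard notation with amplitude Q; this equation is supplied as an illustration, not quoted from the paper):

```latex
\zeta(r_{12}, r_{23}, r_{31}) = Q \,\bigl[\, \xi(r_{12})\,\xi(r_{23})
  + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12}) \,\bigr]
```

Here ξ is the two-point function and the sum runs symmetrically over the N - 1 = 2 factor products; higher N-point functions generalize this with sums over tree-like products of N - 1 two-point functions.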
Calculating the n-point correlation function with general and efficient python code
NASA Astrophysics Data System (ADS)
Genier, Fred; Bellis, Matthew
2018-01-01
There are multiple approaches to understanding the evolution of large-scale structure in our universe and, with it, the role of baryonic matter, dark matter, and dark energy at different points in history. One approach is to calculate the n-point correlation function estimator for galaxy distributions, sometimes choosing a particular type of galaxy, such as luminous red galaxies. The standard way to calculate these estimators is with pair counts (for the 2-point correlation function) and with triplet counts (for the 3-point correlation function). These are O(n²) and O(n³) problems, respectively, and with the number of galaxies that will be characterized in future surveys, having efficient and general code will be of increasing importance. Here we show a proof-of-principle approach to the 2-point correlation function that relies on pre-calculating galaxy locations in coarse “voxels”, thereby reducing the total number of necessary calculations. The code is written in Python, making it easily accessible and extensible, and is open-sourced to the community. Basic results and performance tests using SDSS/BOSS data will be shown, and we discuss the application of this approach to the 3-point correlation function.
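The coarse-voxel idea can be sketched as follows: bin galaxies into voxels, then compare each point only against its own and neighboring voxels, since pairs farther apart than the maximum separation of interest never need to be examined. This is a hypothetical illustration of the technique (function and variable names are ours, not the released code):

```python
import numpy as np
from collections import defaultdict
from itertools import product

def pair_counts_voxelized(points, voxel_size, r_max, n_bins):
    """Histogram pair separations up to r_max, comparing each point only
    against points in its own voxel and voxels within reach of r_max."""
    points = np.asarray(points, dtype=float)
    dim = points.shape[1]
    grid = defaultdict(list)              # voxel index -> list of point indices
    for i, p in enumerate(points):
        grid[tuple(np.floor(p / voxel_size).astype(int))].append(i)
    reach = int(np.ceil(r_max / voxel_size))
    offsets = list(product(range(-reach, reach + 1), repeat=dim))
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins, dtype=int)
    for key, idx in grid.items():
        for off in offsets:
            nkey = tuple(k + o for k, o in zip(key, off))
            if nkey not in grid:
                continue
            for i in idx:
                for j in grid[nkey]:
                    if j <= i:            # count each unordered pair once
                        continue
                    d = np.linalg.norm(points[i] - points[j])
                    if d < r_max:
                        hist[np.searchsorted(edges, d, side='right') - 1] += 1
    return edges, hist
```

The inner loops remain O(n²) in the worst case, but for survey-like data most voxel pairs are skipped, which is where the practical speed-up comes from; the same bookkeeping extends to triplet counts for the 3-point function.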
Level set method for image segmentation based on moment competition
NASA Astrophysics Data System (ADS)
Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai
2015-05-01
We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.
Two-point correlation functions in inhomogeneous and anisotropic cosmologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcori, Oton H.; Pereira, Thiago S., E-mail: otonhm@hotmail.com, E-mail: tspereira@uel.br
Two-point correlation functions are ubiquitous tools of modern cosmology, appearing in disparate topics ranging from cosmological inflation to late-time astrophysics. When the background spacetime is maximally symmetric, invariance arguments can be used to fix the functional dependence of this function on the invariant distance between any two points. In this paper we introduce a novel formalism which fixes this functional dependence directly from the isometries of the background metric, thus allowing one to quickly assess the overall features of Gaussian correlators without resorting to the full machinery of perturbation theory. As an application we construct the CMB temperature correlation function in one inhomogeneous (namely, an off-center LTB model) and two spatially flat and anisotropic (Bianchi) universes, and derive their covariance matrices in the limit of almost Friedmannian symmetry. We show how the method can be extended to arbitrary N-point correlation functions and illustrate its use by constructing three-point correlation functions in some simple geometries.
NASA Astrophysics Data System (ADS)
Shi, Yan-ting; Liu, Jie; Wang, Peng; Zhang, Xu-nuo; Wang, Jun-qiang; Guo, Liang
2017-05-01
With the implementation of water environment management in key basins in China, basin monitoring and evaluation systems are in urgent need of innovation and upgrading. In view of the heavy workload of existing evaluation methods and the cumbersome calculation of multi-factor weighting methods, the idea of using the entropy weight method to assess river health based on aquatic ecological function regionalization was put forward. Based on monitoring data for the Songhua River from 2011 to 2015, the entropy weight method was used to calculate the weights of 9 evaluation factors over 29 monitoring sections, and a river health assessment was carried out. In the study area, the river health status of the biodiversity conservation function area (4.111 points) was good, while the water conservation function area (3.371 points), the habitat maintenance function area (3.262 points), the agricultural production maintenance function area (3.695 points) and the urban supporting function area (3.399 points) showed light pollution.
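The entropy weight step admits a compact sketch: a factor whose values vary little across sections carries high entropy and hence low weight, while a discriminating factor gets a high weight. The normalization conventions below are common choices, assumed rather than taken from the paper:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: rows are monitoring sections, columns are
    evaluation factors (positive values). Returns one weight per factor."""
    X = np.asarray(X, dtype=float)
    # Normalize each factor column into a probability distribution.
    P = X / X.sum(axis=0)
    n = X.shape[0]
    with np.errstate(divide='ignore', invalid='ignore'):
        logP = np.where(P > 0, np.log(P), 0.0)   # treat 0*log(0) as 0
    E = -(P * logP).sum(axis=0) / np.log(n)      # entropy of each factor
    d = 1.0 - E                                   # degree of diversification
    return d / d.sum()                            # weights sum to 1
```

A composite health score per section is then the weighted sum of the (normalized) factor values, which is how section-level point scores like those above can be aggregated.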
A Culturally-Specific Dance Intervention to Increase Functional Capacity in African American Women
Murrock, Carolyn J.; Gary, Faye A.
2013-01-01
This study examined a culturally-specific dance intervention on functional capacity in African American women at three time points. The intervention was two times per week for 8 weeks using two African American churches randomly assigned to either the experimental or comparison group, had 126 participants, ages 36–82 years. Analysis of covariance revealed that both groups improved over time and the only significant difference between groups was at 18 weeks. The increase at 18 weeks in the experimental group remained when controlling for baseline covariates. This study supported culturally-specific dance as an intervention to improve functional capacity in African American women. PMID:19202718
A culturally-specific dance intervention to increase functional capacity in African American women.
Murrock, Carolyn J; Gary, Faye A
2008-01-01
This study examined a culturally-specific dance intervention on functional capacity in African American women at three time points. The intervention was two times per week for 8 weeks using two African American churches randomly assigned to either the experimental or comparison group, had 126 participants, ages 36-82 years. Analysis of covariance revealed that both groups improved over time and the only significant difference between groups was at 18 weeks. The increase at 18 weeks in the experimental group remained when controlling for baseline covariates. This study supported culturally-specific dance as an intervention to improve functional capacity in African American women.
Tan, Tao; Yan, Jie
2016-01-01
This paper briefly introduces the academic thought of Professor YAN Jie, a famous contemporary TCM doctor, on treating functional dyspepsia with acupuncture and moxibustion. Professor YAN's treatment of functional dyspepsia is based on "three-regional acupoint selection", in which acupuncture is applied at Sibai (ST 2), Liangmen (ST 21) and Zusanli (ST 36), with supplementary points added as appropriate. His academic thought is characterized by the combination of acupuncture and moxibustion based on strengthening healthy qi, supplemented by soothing the liver and psychological counseling. A case example is also provided.
Ahmed, I; Salmon, L J; Waller, A; Watanabe, H; Roe, J P; Pinczewski, L A
2016-01-01
Oxidised zirconium was introduced as a material for femoral components in total knee arthroplasty (TKA) in an attempt to reduce polyethylene wear. However, the long-term survival of this component is not known. We performed a retrospective review of a prospectively collected database to assess the ten-year survival and clinical and radiological outcomes of an oxidised zirconium total knee arthroplasty with the Genesis II prosthesis. The Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), Knee Injury and Osteoarthritis Outcome Score (KOOS) and a patient satisfaction scale were used to assess outcome. A total of 303 consecutive TKAs were performed in 278 patients with a mean age of 68 years (45 to 89). The rate of survival ten years post-operatively, as assessed using Kaplan-Meier analysis, was 97% (95% confidence interval 94 to 99) with revision for any reason as the endpoint. There were no revisions for loosening, osteolysis or failure of the implant. There was a significant improvement in all components of the WOMAC score at final follow-up (p < 0.001). The mean individual components of the KOOS score for symptoms (82.4 points; 36 to 100), pain (87.5 points; 6 to 100), activities of daily life (84.9 points; 15 to 100) and quality of life (71.4 points; 6 to 100) were all at the higher end of the scale. This study provides further supportive evidence that the oxidised zirconium TKA gives rates of survival comparable with other implants and excellent functional outcomes ten years post-operatively. Total knee arthroplasty with an oxidised zirconium femoral component gives long-term rates of survival and functional outcomes comparable with conventional implants. ©2016 The British Editorial Society of Bone & Joint Surgery.
Functional data analysis on ground reaction force of military load carriage increment
NASA Astrophysics Data System (ADS)
Din, Wan Rozita Wan; Rambely, Azmin Sham
2014-06-01
Analysis of the ground reaction force (GRF) in military load carriage was performed using the functional data analysis (FDA) statistical technique. The main objective of the research was to investigate the effect of a 10% load increment and to find the maximum suitable load for the Malaysian military. Ten soldiers aged 31 ± 6.2 years, weighing 71.6 ± 10.4 kg and with a height of 166.3 ± 5.9 cm, carrying military loads ranging from 0% body weight (BW) up to 40% BW, participated in an experiment to gather GRF and kinematic data using a Vicon Motion Analysis System, Kistler force plates and thirty-nine body markers. The analysis was conducted in the sagittal, medial-lateral and anterior-posterior planes. The results show that a 10% BW load increment has an effect at heel strike and toe-off in all three planes analyzed, with P-values less than 0.001 at the 0.05 significance level. FDA proves to be one of the best statistical techniques for analyzing functional data: it can handle filtering, smoothing and curve alignment according to curve features and points of interest.
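Curve alignment of the kind FDA performs can be illustrated with a minimal landmark-registration sketch: shift each trial's GRF curve in time so a chosen landmark (here, the curve's peak) coincides across trials before averaging. This is an illustrative simplification under our own assumptions, not the study's pipeline:

```python
import numpy as np

def align_to_peak(curves, t):
    """Time-shift each curve so its peak lands at the mean peak time,
    then return the aligned curves and their pointwise mean."""
    curves = [np.asarray(c, dtype=float) for c in curves]
    peak_times = [t[np.argmax(c)] for c in curves]
    target = np.mean(peak_times)
    aligned = []
    for c, pt in zip(curves, peak_times):
        # Re-sample the curve on a shifted time axis so peaks coincide.
        aligned.append(np.interp(t, t - pt + target, c))
    mean_curve = np.mean(aligned, axis=0)
    return aligned, mean_curve
```

Averaging unaligned curves smears sharp features such as the heel-strike peak; aligning to the landmark first preserves them, which is the point of FDA-style registration.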
Structure-functional prediction and analysis of cancer mutation effects in protein kinases.
Dixit, Anshuman; Verkhivker, Gennady M
2014-01-01
A central goal of cancer research is to discover and characterize the functional effects of mutated genes that contribute to tumorigenesis. In this study, we provide a detailed structural classification and analysis of functional dynamics for members of protein kinase families that are known to harbor cancer mutations. We also present a systematic computational analysis that combines sequence and structure-based prediction models to characterize the effect of cancer mutations in protein kinases. We focus on the differential effects of activating point mutations that increase protein kinase activity and kinase-inactivating mutations that decrease activity. Mapping of cancer mutations onto the conformational mobility profiles of known crystal structures demonstrated that activating mutations could reduce a steric barrier for the movement from the basal "low" activity state to the "active" state. According to our analysis, the mechanism of activating mutations reflects a combined effect of partial destabilization of the kinase in its inactive state and a concomitant stabilization of its active-like form, which is likely to drive tumorigenesis at some level. Ultimately, the analysis of the evolutionary and structural features of the major cancer-causing mutational hotspot in kinases can also aid in the correlation of kinase mutation effects with clinical outcomes.
Electronic zero-point fluctuation forces inside circuit components
Leonhardt, Ulf
2018-01-01
Among the most intriguing manifestations of quantum zero-point fluctuations are the van der Waals and Casimir forces, often associated with vacuum fluctuations of the electromagnetic field. We study generalized fluctuation potentials acting on internal degrees of freedom of components in electrical circuits. These electronic Casimir-like potentials are induced by the zero-point current fluctuations of any general conductive circuit. For realistic examples of an electromechanical capacitor and a superconducting qubit, our results reveal the possibility of tunable forces between the capacitor plates, or level shifts of the qubit, respectively. Our analysis suggests an alternative route toward the exploration of Casimir-like fluctuation potentials, namely, by characterizing and measuring them as a function of parameters of the environment. These tunable potentials may be useful for future nanoelectromechanical and quantum technologies. PMID:29719863
Optimum design point for a closed-cycle OTEC system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ikegami, Yasuyuki; Uehara, Haruo
1994-12-31
Performance analysis is performed to find the optimum design point of a closed-cycle Ocean Thermal Energy Conversion (OTEC) system. Calculations are made for an OTEC model plant with a gross power of 100 MW, designed by the optimization method proposed by Uehara and Ikegami for design conditions of 21-29 °C warm sea water temperature and 4 °C cold sea water temperature. Ammonia is used as the working fluid. Plate-type evaporators and condensers are used as heat exchangers. The length of the cold sea water pipe is 1,000 m. The model plant is a floating-type OTEC plant. The objective function for the optimum design point is defined as the total heat transfer area of the heat exchangers divided by the annual net power.
Shape dependence of holographic Rényi entropy in general dimensions
Bianchi, Lorenzo; Chapman, Shira; Dong, Xi; ...
2016-11-29
We present a holographic method for computing the response of Rényi entropies in conformal field theories to small shape deformations around a flat (or spherical) entangling surface. Our strategy employs the stress tensor one-point function in a deformed hyperboloid background and relates it to the coefficient in the two-point function of the displacement operator. We obtain explicit numerical results for d = 3, ..., 6 spacetime dimensions, and also evaluate analytically the limits where the Rényi index approaches 1 and 0 in general dimensions. We use our results to extend the work of 1602.08493 and disprove a set of conjectures in the literature regarding the relation between the Rényi shape dependence and the conformal weight of the twist operator. We also extend our analysis beyond leading order in derivatives in the bulk theory by studying Gauss-Bonnet gravity.
Enhanced t^{-3/2} long-time tail for the stress-stress time correlation function
NASA Astrophysics Data System (ADS)
Evans, Denis J.
1980-01-01
Nonequilibrium molecular dynamics is used to calculate the spectrum of shear viscosity for a Lennard-Jones fluid. The calculated zero-frequency shear viscosity agrees well with experimental argon results for the two state points considered. The low-frequency behavior of the shear viscosity is dominated by an ω^{1/2} cusp. Analysis of the form of this cusp reveals that the stress-stress time correlation function exhibits a t^{-3/2} "long-time tail." It is shown that for the state points studied, the amplitude of this long-time tail is between 12 and 150 times larger than what has been predicted theoretically. If the low-frequency results are truly asymptotic, they imply that the cross and potential contributions to the Kubo-Green integrand for shear viscosity exhibit a t^{-3/2} long-time tail. This result contradicts the established theory of such processes.
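The connection exploited above is that a t^{-3/2} tail in the time correlation function appears as an ω^{1/2} cusp in its spectrum, so the tail amplitude can be estimated by fitting the low-frequency viscosity. A minimal least-squares sketch (our own illustration, not the paper's analysis):

```python
import numpy as np

def fit_sqrt_cusp(omega, eta):
    """Least-squares fit eta(omega) ~ c0 + c1*sqrt(omega) at low frequency.
    A nonzero c1 signals the omega^{1/2} cusp associated with a t^{-3/2}
    long-time tail in the stress-stress correlation function."""
    omega = np.asarray(omega, dtype=float)
    A = np.column_stack([np.ones_like(omega), np.sqrt(omega)])
    (c0, c1), *_ = np.linalg.lstsq(A, eta, rcond=None)
    return c0, c1
```

Whether the fitted cusp coefficient is truly asymptotic (rather than a finite-frequency artifact) is exactly the caveat the abstract raises.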
From spinning conformal blocks to matrix Calogero-Sutherland models
NASA Astrophysics Data System (ADS)
Schomerus, Volker; Sobko, Evgeny
2018-04-01
In this paper we develop further the relation between conformal four-point blocks involving external spinning fields and Calogero-Sutherland quantum mechanics with matrix-valued potentials. To this end, the analysis of [1] is extended to arbitrary dimensions and to the case of boundary two-point functions. In particular, we construct the potential for any set of external tensor fields. Some of the resulting Schrödinger equations are mapped explicitly to the known Casimir equations for 4-dimensional seed conformal blocks. Our approach furnishes solutions of Casimir equations for external fields of arbitrary spin and dimension in terms of functions on the conformal group. This allows us to reinterpret standard operations on conformal blocks in terms of group-theoretic objects. In particular, we shall discuss the relation between the construction of spinning blocks in any dimension through differential operators acting on seed blocks and the action of left/right invariant vector fields on the conformal group.
From random microstructures to representative volume elements
NASA Astrophysics Data System (ADS)
Zeman, J.; Šejnoha, M.
2007-06-01
A unified treatment of random microstructures proposed in this contribution opens the way to efficient solutions of large-scale real-world problems. The paper introduces the notion of a statistically equivalent periodic unit cell (SEPUC) that replaces, in a computational step, the actual complex geometries on an arbitrary scale. A SEPUC is constructed such that its morphology conforms with images of real microstructures. Here, the well-established two-point probability function and the lineal path function are employed to classify, from the statistical point of view, the geometrical arrangement of various material systems. Examples of statistically equivalent unit cells constructed for a unidirectional fibre tow, a plain weave textile composite and an irregular-coursed masonry wall are given. A specific result promoting the applicability of the SEPUC as a tool for the derivation of homogenized effective properties that are subsequently used in an independent macroscopic analysis is also presented.
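The two-point probability function used above has a direct estimator on a binary (two-phase) image: the probability that both ends of a randomly placed segment of length r fall in the phase of interest. A minimal sketch along one image axis, with periodic boundaries assumed for brevity (our illustration, not the authors' code):

```python
import numpy as np

def two_point_probability(img, max_r):
    """S2(r): probability that two pixels a horizontal distance r apart
    both belong to the phase marked 1 in a binary image."""
    img = np.asarray(img, dtype=float)
    s2 = []
    for r in range(max_r + 1):
        shifted = np.roll(img, -r, axis=1)   # translate along x (periodic)
        s2.append(float(np.mean(img * shifted)))
    return np.array(s2)
```

Note that S2(0) equals the phase volume fraction, a standard sanity check; matching such descriptors between the SEPUC and real micrographs is what makes the unit cell "statistically equivalent."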
Czakó, Gábor; Kaledin, Alexey L; Bowman, Joel M
2010-04-28
We report the implementation of a previously suggested method to constrain a molecular system to have mode-specific vibrational energy greater than or equal to the zero-point energy in quasiclassical trajectory calculations [J. M. Bowman et al., J. Chem. Phys. 91, 2859 (1989); W. H. Miller et al., J. Chem. Phys. 91, 2863 (1989)]. The implementation is made practical by using a technique described recently [G. Czako and J. M. Bowman, J. Chem. Phys. 131, 244302 (2009)], where a normal-mode analysis is performed during the course of a trajectory and which gives only real-valued frequencies. The method is applied to the water dimer, where its effectiveness is shown by computing mode energies as a function of integration time. Radial distribution functions are also calculated using constrained quasiclassical and standard classical molecular dynamics at low temperature and at 300 K and compared to rigorous quantum path integral calculations.
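Radial distribution functions of the kind compared against the path-integral results can be estimated from particle coordinates by histogramming pair distances. A generic classical-MD sketch under the minimum-image convention (illustrative only; the paper's constrained quasiclassical machinery is not reproduced):

```python
import numpy as np

def radial_distribution(pos, box, n_bins=40, r_max=None):
    """Pair radial distribution function g(r) for an (N, 3) coordinate
    array `pos` in a cubic periodic box of side `box`."""
    n = len(pos)
    r_max = r_max or box / 2                      # shells valid up to half the box
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                  # minimum-image convention
    r = np.sqrt((d**2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = shell * n * (n - 1) / 2 / box**3      # ideal-gas pair counts per shell
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal

rng = np.random.default_rng(0)
pos = rng.random((400, 3)) * 10.0                 # uncorrelated "ideal gas"
r, g = radial_distribution(pos, box=10.0)
```

For an ideal gas g(r) fluctuates around 1, while structured liquids such as water show peaks at the preferred neighbor distances.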
Modification of the G-phonon mode of graphene by nitrogen doping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lukashev, Pavel V., E-mail: pavel.lukashev@uni.edu; Hurley, Noah; Zhao, Liuyan
2016-01-25
The effect of nitrogen doping on the phonon spectra of graphene is analyzed. In particular, we employ first-principles calculations and scanning Raman analysis to investigate the dependence of phonon frequencies in graphene on the concentration of nitrogen dopants. We demonstrate that the G phonon frequency shows oscillatory behavior as a function of nitrogen concentration. We analyze different mechanisms which could potentially be responsible for this behavior, such as Friedel charge oscillations around the localized nitrogen impurity atom, the bond-length change between the nitrogen impurity and its nearest-neighbor carbon atoms, and the long-range interactions of the nitrogen point defects. We show that the bond-length change and the long-range interaction of point defects are possible mechanisms responsible for the oscillatory behavior of the G frequency as a function of nitrogen concentration. At the same time, Friedel charge oscillations are unlikely to contribute to this behavior.
Performance analysis for mixed FSO/RF Nakagami-m and Exponentiated Weibull dual-hop airborne systems
NASA Astrophysics Data System (ADS)
Jing, Zhao; Shang-hong, Zhao; Wei-hu, Zhao; Ke-fan, Chen
2017-06-01
In this paper, the performance of mixed free-space optical (FSO)/radio frequency (RF) systems based on decode-and-forward relaying is presented. The Exponentiated Weibull fading channel with pointing-error effects is adopted for the atmospheric fluctuation of the FSO channel, and the RF link undergoes Nakagami-m fading. We derive the analytical expression for the cumulative distribution function (CDF) of the equivalent signal-to-noise ratio (SNR). Novel mathematical expressions for the outage probability and average bit-error rate (BER) are developed based on Meijer's G function. The analytical results accurately match Monte-Carlo simulation results. The outage and BER performance of the mixed decode-and-forward system is investigated under atmospheric turbulence and pointing-error conditions. The effect of aperture averaging is evaluated in all atmospheric turbulence conditions as well.
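The outage probability obtained in closed form via Meijer's G function can be cross-checked by Monte-Carlo sampling of the two hops. A hedged sketch with illustrative channel parameters (the Exponentiated Weibull variate is drawn by inverse-CDF sampling; all numerical values below are assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# FSO hop: irradiance ~ Exponentiated Weibull(alpha, beta, eta), sampled by
# inverting F(x) = (1 - exp(-(x/eta)**beta))**alpha
alpha, beta, eta = 2.0, 1.5, 1.0           # illustrative shape/scale parameters
u = rng.random(N)
irr = eta * (-np.log1p(-u**(1.0 / alpha)))**(1.0 / beta)
snr_fso = 10.0 * irr**2                    # IM/DD: SNR proportional to irradiance^2

# RF hop: Nakagami-m fading, so the instantaneous SNR is Gamma(m, avg/m)
m, avg_rf = 2.0, 10.0
snr_rf = rng.gamma(m, avg_rf / m, N)

# Decode-and-forward: end-to-end SNR is limited by the weaker hop
snr_eq = np.minimum(snr_fso, snr_rf)
gamma_th = 2.0                             # outage threshold SNR
p_out = np.mean(snr_eq < gamma_th)
```

Sweeping `gamma_th` reproduces the outage curve that the closed-form CDF expression describes analytically.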
Patwary, Nurmohammed; Preza, Chrysanthe
2015-01-01
A depth-variant (DV) image restoration algorithm for wide-field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves accuracy by 50% and simultaneously reduces processing time by 64% using comparable computational resources. PMID:26504634
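The orthonormal-basis decomposition of a depth-variant PSF stack is, at heart, a PCA, which can be sketched with an SVD. Synthetic Gaussian PSFs whose width grows with depth stand in here for measured wide-field PSFs (an illustration of the representation, not the authors' algorithm):

```python
import numpy as np

# Build a stack of synthetic depth-variant PSFs (assumed Gaussian profiles).
ny = nx = 32
depths = np.linspace(1.0, 3.0, 40)           # 40 depth planes (assumed widths)
y, x = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
psfs = np.stack([np.exp(-(x**2 + y**2) / (2 * s**2)).ravel() for s in depths])
psfs /= psfs.sum(axis=1, keepdims=True)      # each PSF integrates to 1

# PCA via SVD of the mean-subtracted stack: rows of Vt are the orthonormal
# basis PSF components.
mean_psf = psfs.mean(axis=0)
U, S, Vt = np.linalg.svd(psfs - mean_psf, full_matrices=False)

k = 3                                        # keep a few principal components
approx = mean_psf + (U[:, :k] * S[:k]) @ Vt[:k]
err = np.linalg.norm(approx - psfs) / np.linalg.norm(psfs)
```

Because the PSF varies smoothly with depth, a handful of components reconstructs the whole stack almost exactly, which is what makes the PCA-based restoration tractable.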
NASA Technical Reports Server (NTRS)
Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.
1995-01-01
We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a(sub lm) are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.
van der Waals criticality in AdS black holes: A phenomenological study
NASA Astrophysics Data System (ADS)
Bhattacharya, Krishnakanta; Majhi, Bibhas Ranjan; Samanta, Saurav
2017-10-01
Anti-de Sitter black holes exhibit a van der Waals-type phase transition. In the extended phase-space formalism, the critical exponents for any spacetime metric are identical to the standard ones. Motivated by this fact, we give a general expression for the Helmholtz free energy near the critical point which correctly reproduces these exponents. The idea is similar to the Landau model, which gives a phenomenological description of the usual second-order phase transition. Two main inputs are taken into account in the analysis: (a) black holes should have van der Waals-like isotherms, and (b) the free energy can be expressed solely as a function of thermodynamic volume and horizon temperature. The resulting analysis shows that the form of the Helmholtz free energy correctly encapsulates the features of the Landau function. We also discuss the isolated critical point accompanied by nonstandard values of the critical exponents. The whole formalism is then extended to two other criticalities, namely Y-X and T-S (based on the standard, i.e., nonextended, phase space), where X and Y are generalized force and displacement, whereas T and S are the horizon temperature and entropy. We observe that in the former case the Gibbs free energy plays the role of the Landau function, whereas in the latter case that role is played by the internal energy (here, the black hole mass). Our analysis shows that, although the existence of a van der Waals phase transition depends on the explicit form of the black hole metric, the values of the critical exponents are universal in nature.
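For reference, the textbook Landau expansion that such a free energy is matched to (a standard form, not the paper's specific expression) reads:

```latex
F(\psi, t) = F_0 + a\,t\,\psi^2 + b\,\psi^4,
\qquad t = \frac{T - T_c}{T_c}, \quad a, b > 0 .
```

Minimizing over the order parameter, $\partial F/\partial \psi = 0$ gives $\psi = 0$ for $t > 0$ and $\psi = \pm\sqrt{-a t / 2b}$ for $t < 0$, so $\psi \sim |t|^{1/2}$ and the mean-field exponents $\beta = 1/2$, $\alpha = 0$, $\gamma = 1$, $\delta = 3$ follow, the same values as at the van der Waals critical point.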
Wu, Frederick; Zitzmann, Michael; Heiselman, Darell; Donatucci, Craig; Knorr, Jack; Patel, Ankur B; Kinchen, Kraig
2016-08-01
Evidence from well-designed studies documenting the benefit of testosterone replacement therapy as a function of patient demographic and clinical characteristics is lacking. To determine demographic and clinical predictors of treatment outcomes in hypogonadal men with low sex drive, low energy, and/or erectile dysfunction. Post hoc analysis of a randomized, multicenter, double-blinded, placebo-controlled, 16-week study of 715 hypogonadal men (mean age = 55.3 years, age range = 19-92 years) presenting with low sex drive and/or low energy who received placebo or testosterone solution 2% for 12 weeks. Two levels defined patient-reported improvement (PRI) in sex drive or energy: level 1 was at least "a little better" and level 2 was at least "much better" in energy or sex drive on the Patient Global Impression of Improvement at study end point. PRI in erectile function was stratified by erectile dysfunction severity at baseline as measured by the erectile function domain of the International Index for Erectile Function: mild at baseline (change of 2), moderate at baseline (change of 5), and severe at baseline (change of 7). Associations of demographic and clinical characteristics with PRI were calculated with stepwise forward multiple logistic regression analysis. Odds ratios represented the likelihood of PRI in symptoms among variable categories. Higher levels of end-point testosterone were associated with higher rates of PRI (at levels 1 and 2) in sex drive and energy (P < .001 for the two comparisons). Lower baseline testosterone levels were associated with higher rates of level 1 PRI in sex drive (P = .028); and classic hypogonadism (vs non-classic hypogonadism) was associated with higher rates of level 2 PRI in sex drive (P = .005) and energy (P = .006). 
When assessing the potential for improvements in men with testosterone deficiency using patient-reported outcome questionnaires, possible predictors of treatment outcomes to consider include the etiology of hypogonadism and testosterone levels (baseline and end point). Copyright © 2016. Published by Elsevier Inc.
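For illustration, the logistic-regression machinery behind such odds ratios can be sketched on synthetic data. Every variable and coefficient below is an assumption, not a value from the study; the code fits a (non-stepwise) logistic model for patient-reported improvement by Newton-Raphson and reads off odds ratios:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
t_end = rng.normal(0.0, 1.0, n)              # standardized end-point testosterone (synthetic)
classic = rng.integers(0, 2, n)              # 1 = classic hypogonadism (synthetic)
logit = -0.5 + 0.8 * t_end + 0.6 * classic   # assumed "true" effects for the simulation
y = rng.random(n) < 1 / (1 + np.exp(-logit)) # simulated improvement indicator

X = np.column_stack([np.ones(n), t_end, classic])
beta = np.zeros(3)
for _ in range(50):                          # Newton-Raphson for the logistic MLE
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)                          # IRLS weights
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))

odds_ratios = np.exp(beta[1:])               # OR > 1: higher odds of reported improvement
```

With enough patients the fitted odds ratios recover the assumed effects, mirroring how the reported associations (e.g. higher end-point testosterone with higher improvement rates) are quantified.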
Analysis of short single rest/activation epoch fMRI by self-organizing map neural network
NASA Astrophysics Data System (ADS)
Erberich, Stephan G.; Dietrich, Thomas; Kemeny, Stefan; Krings, Timo; Willmes, Klaus; Thron, Armin; Oberschelp, Walter
2000-04-01
Functional magnetic resonance imaging (fMRI) has become a standard noninvasive brain imaging technique delivering high spatial resolution. Brain activation is determined via the magnetic susceptibility of the blood oxygen level (BOLD effect) during an activation task, e.g. motor, auditory and visual tasks. Box-car paradigms usually have 2-4 rest/activation epochs, with at least 50 volumes per scan in the time domain overall. Statistical-test-based analysis methods, such as Student's t-test, need a large number of repetitively acquired brain volumes to gain statistical power. The technique introduced here, based on a self-organizing neural network (self-organizing map, SOM), exploits the intrinsic features of the condition change between rest and activation epochs and is demonstrated to differentiate between the conditions with fewer time points, using only one rest and one activation epoch. The method reduces scan and analysis time and the probability of motion artifacts from the relaxation of the patient's head. fMRI data from patients undergoing pre-surgical evaluation and from volunteers were acquired with motor (hand clenching and finger tapping), sensory (ice application), auditory (phonological and semantic word recognition) and visual (mental rotation) paradigms. For imaging we used different BOLD-contrast-sensitive gradient-echo echo-planar imaging (GE-EPI) single-shot pulse sequences (TR 2000 and 4000 ms, 64 x 64 and 128 x 128 matrices, 15-40 slices) on a Philips Gyroscan NT 1.5 Tesla MR imager. All paradigms were RARARA (R = rest, A = activation) with an epoch width of 11 time points each. We used the self-organizing neural network implementation described by T. Kohonen with a 4 x 2 2D neuron map. The presented time-course vectors were clustered by similar features in the 2D neuron map. Three neural networks were trained and used for labeling with the time-course vectors of one, two and all three on/off epochs.
The results were also compared using a Kolmogorov-Smirnov statistical test over all 66 time points. To remove non-periodical time courses from training, an autocorrelation function and bandwidth-limiting Fourier filtering combined with Gaussian temporal smoothing were used. None of the trained maps, with one, two or three epochs, were significantly different, which indicates that the feature space of only one on/off epoch is sufficient to differentiate between the rest and task conditions. We found that without pre-processing of the data no meaningful results can be achieved, because the non-activated and background voxels represent the majority of the features and are therefore what the SOM learns. Thus it is crucial to remove unnecessary capacity load on the neural network by selecting the training input using the autocorrelation function and/or Fourier spectrum analysis. However, by reducing the time points to one rest and one activation epoch, either strong autocorrelation or a precise periodic frequency vanishes. Self-organizing maps can thus separate rest and activation epochs with only one third of the usually acquired time points. Because the SOM technique relies on pattern or feature separation, only the presence of a state change between the conditions is necessary for differentiation. Moreover, the variance of the individual hemodynamic response function (HRF) and the variance of the spatially differing regional cerebral blood flow (rCBF) are learned from the subject rather than compared with a fixed model, as in statistical evaluation. We found that reducing the information to only a few time points around the BOLD effect was not successful, due to delays in rCBF and the insufficient extension of the BOLD feature in the time domain. Especially for routine patient observation and pre-surgical planning, a reduced scan time is of interest.
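The epoch-separation idea can be illustrated with a toy self-organizing map. The sketch below uses a winner-take-all variant without a neighborhood function, synthetic box-car time courses, and a flattened 4 x 2 map; it illustrates the principle rather than reproducing the authors' Kohonen implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 22                                              # one rest + one activation epoch
boxcar = np.r_[np.zeros(11), np.ones(11)]           # idealized BOLD response
active = boxcar + 0.3 * rng.normal(size=(40, T))    # "activated" voxel time courses
quiet = 0.3 * rng.normal(size=(40, T))              # non-activated voxels
data = np.vstack([active, quiet])

n_units = 8                                         # 4 x 2 map, flattened
w = rng.normal(scale=0.1, size=(n_units, T))
for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)                     # decaying learning rate
    for i in rng.permutation(len(data)):
        bmu = np.argmin(((w - data[i])**2).sum(axis=1))  # best-matching unit
        w[bmu] += lr * (data[i] - w[bmu])                # winner-take-all update

# Label each voxel by its best-matching unit; activated and non-activated
# voxels should land on disjoint sets of units.
lab_active = {int(np.argmin(((w - v)**2).sum(axis=1))) for v in active}
lab_quiet = {int(np.argmin(((w - v)**2).sum(axis=1))) for v in quiet}
```

The state change between the two conditions is what the map latches onto: one epoch of each condition already gives the weight vectors two well-separated clusters to learn.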
NASA Astrophysics Data System (ADS)
Wang, X.; Tu, C. Y.; He, J.; Wang, L.
2017-12-01
The nature of the Elsässer variable z- observed in the Alfvénic solar wind has long been debated. It is widely believed that z- represents inward-propagating Alfvén waves that interact non-linearly with z+ to produce the energy cascade. However, z- variations sometimes show the character of convective structures. Here we present a new data analysis of z- autocorrelation functions to obtain definite information on its nature. We find that there is usually a break point in the z- autocorrelation function when the fluctuations show nearly pure Alfvénicity. The break point observed by the Helios-2 spacecraft near 0.3 AU is at the first time lag (about 81 s), where the autocorrelation coefficient falls below the zero-lag value by more than 0.4. The autocorrelation-function breaks also appear in WIND observations near 1 AU. The break separates the z- autocorrelation function into two parts, a fast-decreasing part and a slowly decreasing part, which cannot be described as a whole by an exponential formula. The breaks in the z- autocorrelation function may indicate that the z- time series is composed of high-frequency white noise and low-frequency apparent structures, which correspond to the flat and steep parts of the function, respectively. This explanation is supported by a simple test with a superposition of an artificial random data series and a smoothed random data series. Since in many cases z- autocorrelation functions do not decrease very quickly at large time lags and cannot be considered of Lanczos type, no reliable correlation time can be derived. Our results show that in these cases of high Alfvénicity, z- should not be interpreted as an inward-propagating wave. The power-law spectrum of z+ should instead be produced by the Kolmogorov fluid-turbulence cascade process.
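The superposition test mentioned above (a random series plus a smoothed random series) is easy to reproduce: the autocorrelation function of the sum drops sharply at the first lag and then decays slowly. A sketch with assumed amplitudes and smoothing window:

```python
import numpy as np

def autocorr(x, lag):
    """Simple normalized autocorrelation estimator at a given lag."""
    x = x - x.mean()
    if lag == 0:
        return 1.0
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(4)
N = 20_000
white = rng.normal(size=N)                        # high-frequency "noise" component
kernel = np.ones(200) / 200                       # assumed smoothing window
structure = np.convolve(rng.normal(size=N + 199), kernel, mode='valid')
structure *= white.std() / structure.std()        # equal variances (assumed mix)
z = white + structure                             # superposed series

ac = np.array([autocorr(z, k) for k in range(400)])
# ac drops sharply from lag 0 to lag 1 (white-noise part decorrelates
# immediately), then decays slowly (smoothed-structure part persists).
```

With equal variances the first-lag coefficient sits near 0.5, i.e. a drop of about 0.5 from the zero-lag value, qualitatively matching the observed break.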
NASA Astrophysics Data System (ADS)
Barkeshli, Sina
A relatively simple and efficient closed form asymptotic representation of the microstrip dyadic surface Green's function is developed. The large parameter in this asymptotic development is proportional to the lateral separation between the source and field points along the planar microstrip configuration. Surprisingly, this asymptotic solution remains accurate even for very small (almost two tenths of a wavelength) lateral separation of the source and field points. The present asymptotic Green's function will thus allow a very efficient calculation of the currents excited on microstrip antenna patches/feed lines and monolithic millimeter and microwave integrated circuit (MIMIC) elements based on a moment method (MM) solution of an integral equation for these currents. The kernel of the latter integral equation is the present asymptotic form of the microstrip Green's function. It is noted that the conventional Sommerfeld integral representation of the microstrip surface Green's function is very poorly convergent when used in this MM formulation. In addition, an efficient exact steepest descent path integral form employing a radially propagating representation of the microstrip dyadic Green's function is also derived which exhibits a relatively faster convergence when compared to the conventional Sommerfeld integral representation. The same steepest descent form could also be obtained by deforming the integration contour of the conventional Sommerfeld representation; however, the radially propagating integral representation exhibits better convergence properties for laterally separated source and field points even before the steepest descent path of integration is used. Numerical results based on the efficient closed form asymptotic solution for the microstrip surface Green's function developed in this work are presented for the mutual coupling between a pair of dipoles on a single layer grounded dielectric slab. 
The accuracy of the latter calculations is confirmed by comparison with results based on an exact integral representation for that Green's function.
Theoretical analysis for the specific heat and thermal parameters of solid C60
NASA Astrophysics Data System (ADS)
Soto, J. R.; Calles, A.; Castro, J. J.
1997-08-01
We present the results of a theoretical analysis for the thermal parameters and phonon contribution to the specific heat in solid C60. The phonon contribution to the specific heat is calculated through the solution of the corresponding dynamical matrix, for different points in the Brillouin zone, and the construction of the partial and generalized phonon density of states. The force constants are obtained from a first-principles calculation, using an SCF Hartree-Fock wave function from the Gaussian 92 program. The thermal parameters reported are the effective temperatures and vibrational amplitudes as a function of temperature. Using this model we present a parametrization scheme in order to reproduce the general behaviour of the experimental specific heat for these materials.
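Once mode frequencies are available from the dynamical matrix, the phonon contribution to the specific heat follows from the standard harmonic (Bose-Einstein) expression. A generic sketch with illustrative frequencies (it does not reproduce the paper's dynamical-matrix or Hartree-Fock steps):

```python
import numpy as np

kB = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J*s

def phonon_cv(freqs_THz, T):
    """Harmonic phonon specific heat (J/K) summed over modes whose
    frequencies are given in THz, using the standard Bose-Einstein
    result C_v = kB * sum_i x_i^2 e^{x_i} / (e^{x_i} - 1)^2, x = hw/kT."""
    w = np.asarray(freqs_THz) * 1e12 * 2 * np.pi   # angular frequencies, rad/s
    x = hbar * w / (kB * T)
    return kB * np.sum(x**2 * np.exp(x) / np.expm1(x)**2)

cv_300 = phonon_cv([1.0, 2.0, 3.0], 300.0)  # three illustrative modes at 300 K
```

At high temperature each mode contributes kB (the Dulong-Petit limit), while at low temperature the contribution is exponentially suppressed, which is the behaviour a phonon-DOS-weighted integral of this expression reproduces.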
NASA Astrophysics Data System (ADS)
Ciancio, P. M.; Rossit, C. A.; Laura, P. A. A.
2007-05-01
This study is concerned with the vibration analysis of a cantilevered rectangular anisotropic plate when a concentrated mass is rigidly attached to its center point. Based on the classical theory of anisotropic plates, the Ritz method is employed to perform the analysis. The deflection of the plate is approximated by a set of beam functions in each principal coordinate direction. The influence of the mass magnitude on the natural frequencies and modal shapes of vibration is studied for a boron-epoxy plate and also in the case of a generic anisotropic material. The classical Ritz method with beam functions as the spatial approximation proved to be a suitable procedure to solve a problem of this analytical complexity.
Ultrasound beam transmission using a discretely orthogonal Gaussian aperture basis
NASA Astrophysics Data System (ADS)
Roberts, R. A.
2018-04-01
Work is reported on development of a computational model for ultrasound beam transmission at an arbitrary geometry transmission interface for generally anisotropic materials. The work addresses problems encountered when the fundamental assumptions of ray theory do not hold, thereby introducing errors into ray-theory-based transmission models. Specifically, problems occur when the asymptotic integral analysis underlying ray theory encounters multiple stationary phase points in close proximity, due to focusing caused by concavity on either the entry surface or a material slowness surface. The approach presented here projects integrands over both the transducer aperture and the entry surface beam footprint onto a Gaussian-derived basis set, thereby distributing the integral over a summation of second-order phase integrals which are amenable to single stationary phase point analysis. Significantly, convergence is assured provided a sufficiently fine distribution of basis functions is used.