Sample records for projection matrix model

  1. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

Because the traditional entropy value method has low evaluation accuracy when assessing the performance of mining projects, a performance evaluation model for mining projects founded on an improved entropy value method is proposed. First, a new weight assignment model is established, based on compatibility matrix analysis of the analytic hierarchy process (AHP) and the entropy value method: once the compatibility matrix analysis meets the consistency requirements, if differences remain between the subjective and objective weights, the proportions of both are moderately adjusted; the fuzzy evaluation matrix for performance evaluation is then constructed on this basis. Simulation experiments show that, compared with the traditional entropy value method and compatibility matrix analysis, the proposed performance evaluation model of mining projects based on the improved entropy value method achieves higher assessment accuracy.

  2. The genealogical decomposition of a matrix population model with applications to the aggregation of stages.

    PubMed

    Bienvenu, François; Akçay, Erol; Legendre, Stéphane; McCandlish, David M

    2017-06-01

    Matrix projection models are a central tool in many areas of population biology. In most applications, one starts from the projection matrix to quantify the asymptotic growth rate of the population (the dominant eigenvalue), the stable stage distribution, and the reproductive values (the dominant right and left eigenvectors, respectively). Any primitive projection matrix also has an associated ergodic Markov chain that contains information about the genealogy of the population. In this paper, we show that these facts can be used to specify any matrix population model as a triple consisting of the ergodic Markov matrix, the dominant eigenvalue and one of the corresponding eigenvectors. This decomposition of the projection matrix separates properties associated with lineages from those associated with individuals. It also clarifies the relationships between many quantities commonly used to describe such models, including the relationship between eigenvalue sensitivities and elasticities. We illustrate the utility of such a decomposition by introducing a new method for aggregating classes in a matrix population model to produce a simpler model with a smaller number of classes. Unlike the standard method, our method has the advantage of preserving reproductive values and elasticities. It also has conceptually satisfying properties such as commuting with changes of units. Copyright © 2017 Elsevier Inc. All rights reserved.
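The triple described above can be computed directly. The sketch below builds the ergodic Markov matrix, dominant eigenvalue, and eigenvectors for a small hypothetical stage-classified projection matrix (the matrix entries are invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical 3-stage projection matrix A (entries invented for illustration).
A = np.array([[0.0, 0.5, 2.0],
              [0.3, 0.4, 0.0],
              [0.0, 0.4, 0.9]])

# Dominant eigenvalue (asymptotic growth rate) and right eigenvector
# (stable stage distribution, normalized to sum to 1).
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
lam = vals[k].real
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Left eigenvector (reproductive values), normalized so that v @ w = 1.
vals_l, vecs_l = np.linalg.eig(A.T)
v = np.abs(vecs_l[:, np.argmax(vals_l.real)].real)
v /= v @ w

# Ergodic Markov matrix of the genealogical chain: P_ij = v_i A_ij / (lam v_j).
# Because v^T A = lam v^T, every column of P sums to 1 (column-stochastic).
P = np.diag(v) @ A @ np.diag(1.0 / v) / lam

# The triple (P, lam, v) specifies the model: A can be rebuilt from it.
A_rebuilt = lam * np.diag(1.0 / v) @ P @ np.diag(v)
```

The reconstruction step illustrates the paper's point that the projection matrix and the (Markov matrix, eigenvalue, eigenvector) triple carry the same information.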

  3. The Impact of Goal Setting and Empowerment on Governmental Matrix Organizations

    DTIC Science & Technology

    1993-09-01

shared. In a study of matrix management, Eduardo Vasconcellos further describes various matrix structures in the Galbraith model. In a functional...Technology/LAR, Wright-Patterson AFB OH, 1992. Vasconcellos, Eduardo. "A Model For a Better Understanding of the Matrix Structure," IEEE Transactions on...project matrix, the project manager maintains more influence and the structure lies to the right of center (Vasconcellos, 1979:58). Different Types of

  4. The accuracy of matrix population model projections for coniferous trees in the Sierra Nevada, California

    USGS Publications Warehouse

    van Mantgem, P.J.; Stephenson, N.L.

    2005-01-01

1. We assess the use of simple, size-based matrix population models for projecting population trends for six coniferous tree species in the Sierra Nevada, California. We used demographic data from 16 673 trees in 15 permanent plots to create 17 separate time-invariant, density-independent population projection models, and determined differences between trends projected from initial surveys with a 5-year interval and observed data during two subsequent 5-year time steps. 2. We detected departures from the assumptions of the matrix modelling approach in terms of strong growth autocorrelations. We also found evidence of observation errors for measurements of tree growth and, to a more limited degree, recruitment. Loglinear analysis provided evidence of significant temporal variation in demographic rates for only two of the 17 populations. 3. Total population sizes were strongly predicted by model projections, although population dynamics were dominated by carryover from the previous 5-year time step (i.e. there were few cases of recruitment or death). Fractional changes to overall population sizes were less well predicted. Compared with a null model and a simple demographic model lacking size structure, matrix model projections were better able to predict total population sizes, although the differences were not statistically significant. Matrix model projections were also able to predict short-term rates of survival, growth and recruitment. Mortality frequencies were not well predicted. 4. Our results suggest that simple size-structured models can accurately project future short-term changes for some tree populations. However, not all populations were well predicted and these simple models would probably become more inaccurate over longer projection intervals. The predictive ability of these models would also be limited by disturbance or other events that destabilize demographic rates. © 2005 British Ecological Society.
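The projection procedure evaluated in studies like this one amounts to repeated matrix-vector multiplication over census intervals. A minimal sketch, with an invented size-classified matrix and invented census counts (not the study's data):

```python
import numpy as np

# Hypothetical size-classified matrix for one 5-year time step (invented
# values; class order: small, medium, large trees). Diagonal entries are
# survival with stasis, sub-diagonal entries are growth into the next
# class, and the top-right entry folds in recruitment from large trees.
A = np.array([[0.85, 0.00, 0.10],
              [0.10, 0.90, 0.00],
              [0.00, 0.05, 0.95]])

n0 = np.array([500.0, 300.0, 200.0])  # counts per size class at first survey

# Project two subsequent 5-year time steps from the initial survey,
# mirroring the comparison against two observed re-censuses.
n5 = A @ n0
n10 = A @ n5

total_change = n10.sum() / n0.sum()   # fractional change in total population
```

Comparing `n5` and `n10` against observed re-census counts is the kind of validation the study performs; a time-invariant model like this ignores the growth autocorrelation and temporal variation the authors tested for.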

  5. Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-03-01

Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
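A factored forward projector of this kind can be sketched with SciPy sparse matrices. The sizes, densities, and blurring values below are invented toy parameters, not the paper's detector model:

```python
import numpy as np
import scipy.sparse as sp

n_pix, n_bins = 64, 80   # toy image and sinogram sizes (illustrative only)

# Three sparse factors: sinogram blurring (detector response), geometric
# projection (line-integral model), and image blurring (LOR compensation).
B_sino = sp.eye(n_bins) + 0.1 * sp.random(n_bins, n_bins, density=0.02, random_state=0)
G      = sp.random(n_bins, n_pix, density=0.05, random_state=1)
B_img  = sp.eye(n_pix) + 0.1 * sp.random(n_pix, n_pix, density=0.02, random_state=2)

x = np.random.default_rng(3).random(n_pix)

# Forward projection applied factor by factor; the full system matrix is
# never materialized, which is what saves storage on the GPU.
y = B_sino @ (G @ (B_img @ x))

# Forming the explicit product gives the same mapping in a single matrix.
A_full = (B_sino @ G @ B_img).tocsr()
```

At realistic problem sizes the explicit product `A_full` would carry far more nonzeros than the three factors combined, which is the storage argument made in the abstract.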

  6. EFFECTS OF CHRONIC STRESS ON WILDLIFE POPULATIONS: A POPULATION MODELING APPROACH AND CASE STUDY

    EPA Science Inventory

    This chapter describes a matrix modeling approach to characterize and project risks to wildlife populations subject to chronic stress. Population matrix modeling was used to estimate effects of one class of environmental contaminants, dioxin-like compounds (DLCs), to populations ...

  7. Interactions between core and matrix thalamocortical projections in human sleep spindle synchronization

    PubMed Central

    Bonjean, Maxime; Baker, Tanya; Bazhenov, Maxim; Cash, Sydney; Halgren, Eric; Sejnowski, Terrence

    2012-01-01

Sleep spindles, which are bursts of 11–15 Hz activity that occur during non-REM sleep, are highly synchronous across the scalp when measured with EEG, but have low spatial coherence and exhibit low correlation with EEG signals when spindles are simultaneously measured with MEG in humans. We developed a computational model to explore the hypothesis that the spatial coherence of the EEG spindle is a consequence of the diffuse matrix projections of the thalamus to layer 1, compared with the focal projections of the core pathway to layer 4 recorded by the MEG. Increasing the fanout of thalamocortical connectivity in the matrix pathway while keeping the core pathway fixed led to increased synchrony of spindle activity in the superficial cortical layers of the model. In agreement with cortical recordings, the latency for spindles to spread from the core to the matrix was independent of the thalamocortical fanout but highly dependent on the probability of connections between cortical areas. PMID:22496571

  8. PROJECTED POPULATION-LEVEL EFFECTS OF THIOBENCARB EXPOSURE ON THE MYSID, AMERICAMYSIS BAHIA, AND EXTINCTION PROBABILITY IN A CONCENTRATION-DECAY EXPOSURE SYSTEM

    EPA Science Inventory



Population-level effects of the mysid, Americamysis bahia, exposed to varying thiobencarb concentrations were estimated using stage-structured matrix models. A deterministic density-independent matrix model estimated the decrease in population growth rate, λ, with increas...

  9. Forward problem solution as the operator of filtered and back projection matrix to reconstruct the various method of data collection and the object element model in electrical impedance tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Ain, Khusnul (Physics Department, Airlangga University, Surabaya, Indonesia); Kurniadi, Deddy

    2015-04-16

Back projection reconstruction has been implemented to obtain dynamical images in electrical impedance tomography. However, the implementation is still limited to the adjacent method of data collection and a circular object element model. This study aims to develop back projection into a reconstruction method with high speed, accuracy, and flexibility that can be used for various methods of data collection and models of the object element. The proposed method uses the forward problem solution as the operator of the filtered and back projection matrix. This is done through a simulation study on several methods of data collection and various models of the object element. The results indicate that the developed method is capable of producing images quickly and accurately for reconstruction with the various methods of data collection and models of the object element.
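The general idea of deriving a linear reconstruction operator from the forward solution can be sketched for a generic difference-imaging setup. The sensitivity matrix below is random stand-in data, and the regularized one-step operator is a common generic choice for this kind of problem, not necessarily the exact operator of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a forward problem solution: a sensitivity (Jacobian) matrix J
# mapping conductivity changes over n mesh elements to changes in m boundary
# measurements. In practice J comes from the forward solver and depends on
# the chosen electrode/data-collection scheme and object element model.
m, n = 40, 100
J = rng.standard_normal((m, n))

# Regularized one-step reconstruction operator built from the forward model:
# R = J^T (J J^T + alpha I)^(-1), applied to a measurement difference.
alpha = 1e-2
R = J.T @ np.linalg.inv(J @ J.T + alpha * np.eye(m))

# Reconstruct a dynamic image from a simulated difference measurement.
x_true = np.zeros(n)
x_true[30:35] = 1.0            # a localized conductivity change
dv = J @ x_true                # simulated boundary voltage difference
x_rec = R @ dv                 # one-step (back projection style) estimate
```

Because `R` is precomputed once from the forward solution, each dynamic frame costs only one matrix-vector product, which is where the speed claim comes from.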

  10. Continuum modeling of large lattice structures: Status and projections

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Mikulas, Martin M., Jr.

    1988-01-01

    The status and some recent developments of continuum modeling for large repetitive lattice structures are summarized. Discussion focuses on a number of aspects including definition of an effective substitute continuum; characterization of the continuum model; and the different approaches for generating the properties of the continuum, namely, the constitutive matrix, the matrix of mass densities, and the matrix of thermal coefficients. Also, a simple approach is presented for generating the continuum properties. The approach can be used to generate analytic and/or numerical values of the continuum properties.

  11. Direct Retrieval of Exterior Orientation Parameters Using A 2-D Projective Transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seedahmed, Gamal H.

    2006-09-01

Direct solutions are very attractive because they obviate the need for initial approximations associated with non-linear solutions. The Direct Linear Transformation (DLT) establishes itself as a method of choice for direct solutions in photogrammetry and other fields. The use of the DLT with coplanar object space points leads to a rank deficient model. This rank deficient model leaves the DLT defined up to a 2-D projective transformation, which makes the direct retrieval of the exterior orientation parameters (EOPs) a non-trivial task. This paper presents a novel direct algorithm to retrieve the EOPs from the 2-D projective transformation. It is based on a direct relationship between the 2-D projective transformation and the collinearity model using homogeneous coordinates representation. This representation offers a direct matrix correspondence between the 2-D projective transformation parameters and the collinearity model parameters. This correspondence lends itself to a direct matrix factorization to retrieve the EOPs. An important step in the proposed algorithm is a normalization process that provides the actual link between the 2-D projective transformation and the collinearity model. This paper explains the theoretical basis of the proposed algorithm as well as the necessary steps for its practical implementation. In addition, numerical examples are provided to demonstrate its validity.
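The 2-D projective transformation at the heart of this record can be estimated with a normalized DLT. The sketch below covers only that estimation step, with invented point data and a made-up known homography as a self-check; the subsequent factorization into EOPs described in the paper is not reproduced here:

```python
import numpy as np

def normalize(pts):
    """Similarity transform: centroid to origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    return np.array([[s, 0, -s * c[0]],
                     [0, s, -s * c[1]],
                     [0, 0, 1.0]])

def dlt_homography(src, dst):
    """Estimate H (dst ~ H src) from >= 4 point pairs via normalized DLT."""
    Ts, Td = normalize(src), normalize(dst)
    sh = (Ts @ np.c_[src, np.ones(len(src))].T).T
    dh = (Td @ np.c_[dst, np.ones(len(dst))].T).T
    rows = []
    for (x, y, _), (u, v, _) in zip(sh, dh):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows))
    Hn = Vt[-1].reshape(3, 3)          # null vector of the design matrix
    H = np.linalg.inv(Td) @ Hn @ Ts    # undo the normalization
    return H / H[2, 2]

# Self-check with an invented homography and exact correspondences.
H_true = np.array([[1.2, 0.1, 0.3],
                   [0.0, 0.9, -0.2],
                   [0.01, 0.02, 1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.2]], dtype=float)
ph = (H_true @ np.c_[src, np.ones(len(src))].T).T
dst = ph[:, :2] / ph[:, 2:3]
H_est = dlt_homography(src, dst)
```

The `normalize` step plays the same numerical-conditioning role as the normalization process the abstract emphasizes, though the paper's normalization additionally links the transformation to the collinearity model.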

  12. An efficient variable projection formulation for separable nonlinear least squares problems.

    PubMed

    Gan, Min; Li, Han-Xiong

    2014-05-01

We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem that involves only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than those of previous formulations. The Levenberg-Marquardt algorithm with a finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
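The variable projection idea itself is compact: for each trial value of the nonlinear parameters, solve for the linear coefficients by linear least squares and return the residual. The sketch below applies the classic (Golub-Pereyra style) reduction to an invented two-exponential model, using Levenberg-Marquardt with a finite-difference Jacobian; it illustrates the general technique, not the paper's new functional:

```python
import numpy as np
from scipy.optimize import least_squares

# Separable model (invented example): y = c1*exp(-a1 t) + c2*exp(-a2 t).
t = np.linspace(0.0, 4.0, 50)
a_true = np.array([0.5, 2.0])       # nonlinear parameters
c_true = np.array([1.0, 3.0])       # linear parameters
y = np.exp(-np.outer(t, a_true)) @ c_true

def phi(a):
    return np.exp(-np.outer(t, a))  # basis matrix Phi(a)

def varpro_residual(a):
    # Project the linear parameters out: for this a, solve the linear
    # subproblem exactly, then return the residual of the reduced problem.
    P = phi(a)
    c, *_ = np.linalg.lstsq(P, y, rcond=None)
    return y - P @ c

# Levenberg-Marquardt on the reduced (nonlinear-only) functional;
# the Jacobian is approximated by finite differences.
sol = least_squares(varpro_residual, x0=[0.3, 1.0], method='lm')
a_est = np.sort(sol.x)
c_est, *_ = np.linalg.lstsq(phi(sol.x), y, rcond=None)
```

The optimizer searches only over the two nonlinear parameters; the linear coefficients are recovered for free at the end, which is the computational advantage variable projection trades on.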

  13. The Feasibility of Analytic Models for Academic Planning: A Preliminary Analysis of Seven Quarters of Observations on the "Induced Course Load Matrix."

    ERIC Educational Resources Information Center

    Jewett, Frank I.; And Others

    This paper reports on a project undertaken at Humboldt State College, California, to estimate the coefficients of the so-called "induced course load matrix," perhaps the single most vital component of some models that are being developed to aid administrative planning and decisionmaking in institutions of higher education. Chapter I, the…

  14. Components of a Model for Forecasting Future Status of Selected Social Indicators. Department of Education Project on Social Indicators. Technical Report No. 3.

    ERIC Educational Resources Information Center

    Collazo, Andres; And Others

    Since a great number of variables influence future educational outcomes, forecasting possible trends is a complex task. One such model, the cross-impact matrix, has been developed. The use of this matrix in forecasting future values of social indicators of educational outcomes is described. Variables associated with educational outcomes are used…

  15. Project Effectiveness and the Balance of Power in Matrix Organizations: An Exploratory Study.

    DTIC Science & Technology

    1986-09-01

Vasconcellos, Eduardo. "A Model for a Better Understanding of the Matrix Structure." IEEE Transactions on Engineering Management, EM-26: 56-65 (August...coercive power correlated negatively with degree of support (39:219-220). Vasconcellos recognized the five common power sources referenced above and...effect. The second variable identified by Vasconcellos was used to differentiate matrix structures. He felt that it was necessary to differentiate

  16. Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach

    NASA Astrophysics Data System (ADS)

    Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun

    2015-02-01

The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and when the dynamic images vary rapidly. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on low rank matrix factorization of the unknown images and enforced sparsity constraints for representing both coefficients and bases. The proposed model is solved via an alternating iterative scheme in which each subproblem is convex and is handled with the efficient alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for the nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally, our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method compared to conventional methods.
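The flavor of such an alternating scheme can be sketched on a toy problem. The version below factors a low-rank data matrix with an l1 penalty on the bases, using plain least squares and a proximal-gradient (soft-thresholding) step rather than the paper's ADMM subproblem solvers; all sizes and values are invented:

```python
import numpy as np

def soft(z, tau):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

rng = np.random.default_rng(1)

# Toy dynamic data: T frames of an n-pixel image, truly rank 2,
# generated from sparse bases and nonnegative coefficients.
n, T, r = 60, 30, 2
U_true = soft(rng.standard_normal((n, r)), 1.0)
V_true = np.abs(rng.standard_normal((r, T)))
M = U_true @ V_true

# Alternating minimization of ||M - U V||_F^2 + tau*||U||_1 (schematic;
# SEMF also penalizes the coefficients and solves subproblems with ADMM).
U = rng.standard_normal((n, r))
tau = 0.01
for _ in range(200):
    V, *_ = np.linalg.lstsq(U, M, rcond=None)        # coefficient update
    L = np.linalg.norm(V @ V.T, 2)                   # Lipschitz constant
    U = soft(U - (U @ V - M) @ V.T / L, tau / L)     # proximal step on U

rel_err = np.linalg.norm(U @ V - M) / np.linalg.norm(M)
```

Each subproblem here is convex, as in the paper's scheme, while the overall factorization problem is nonconvex; the Kurdyka-Łojasiewicz machinery cited in the abstract is what certifies convergence of that kind of alternation.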

  17. Use of an Inverse Method for Time Series to Estimate the Dynamics of and Management Strategies for the Box Jellyfish Carybdea marsupialis.

    PubMed

    Bordehore, Cesar; Fuentes, Verónica L; Segarra, Jose G; Acevedo, Melisa; Canepa, Antonio; Raventós, Josep

    2015-01-01

Frequently, the population ecology of marine organisms uses a descriptive approach in which sizes and densities are plotted over time. This approach has limited usefulness for designing management strategies or modelling different scenarios. Population projection matrix models are among the most widely used tools in ecology. Unfortunately, for the majority of pelagic marine organisms, it is difficult to mark individuals and follow them over time to determine their vital rates and build a population projection matrix model. Nevertheless, it is possible to get time-series data to calculate the size structure and densities of each size class, in order to determine the matrix parameters. This approach is known as a "demographic inverse problem" and is based on quadratic programming methods, but it has rarely been used on aquatic organisms. We used unpublished field data of a population of the cubomedusa Carybdea marsupialis to construct a population projection matrix model and compare two different management strategies to lower the population to values from before 2008, when there was no significant interaction with bathers: direct removal of medusae, and reduction of prey. Our results showed that removal of jellyfish from all size classes was more effective than removing only juveniles or adults. When reducing prey, the highest efficiency in lowering the C. marsupialis population occurred when prey depletion affected medusae of all sizes. Our model fit the field data well and may serve to design an efficient management strategy or to build hypothetical scenarios such as removal of individuals or reduction of prey. This method is applicable to other marine or terrestrial species for which density and population structure over time are available.

18. Computational Everyday Life Human Behavior Model as Serviceable Knowledge

    NASA Astrophysics Data System (ADS)

    Motomura, Yoichi; Nishida, Yoshifumi

A project called `Open life matrix' is not only a research activity but also real-world problem solving in the form of action research. This concept is realized by large-scale data collection, probabilistic causal structure model construction, and information services provided using the model. One concrete outcome of this project is a childhood injury prevention activity carried out by a new team consisting of a hospital, government, and researchers from many fields. The main result of the project is a general methodology for applying probabilistic causal structure models as serviceable knowledge for action research. In this paper, a summary of this project and a future direction that emphasizes action research driven by artificial intelligence technology are discussed.

  19. Using multiple climate projections for assessing hydrological response to climate change in the Thukela River Basin, South Africa

    NASA Astrophysics Data System (ADS)

    Graham, L. Phil; Andersson, Lotta; Horan, Mark; Kunz, Richard; Lumsden, Trevor; Schulze, Roland; Warburton, Michele; Wilk, Julie; Yang, Wei

    This study used climate change projections from different regional approaches to assess hydrological effects on the Thukela River Basin in KwaZulu-Natal, South Africa. Projecting impacts of future climate change onto hydrological systems can be undertaken in different ways and a variety of effects can be expected. Although simulation results from global climate models (GCMs) are typically used to project future climate, different outcomes from these projections may be obtained depending on the GCMs themselves and how they are applied, including different ways of downscaling from global to regional scales. Projections of climate change from different downscaling methods, different global climate models and different future emissions scenarios were used as input to simulations in a hydrological model to assess climate change impacts on hydrology. A total of 10 hydrological change simulations were made, resulting in a matrix of hydrological response results. This matrix included results from dynamically downscaled climate change projections from the same regional climate model (RCM) using an ensemble of three GCMs and three global emissions scenarios, and from statistically downscaled projections using results from five GCMs with the same emissions scenario. Although the matrix of results does not provide complete and consistent coverage of potential uncertainties from the different methods, some robust results were identified. In some regards, the results were in agreement and consistent for the different simulations. For others, particularly rainfall, the simulations showed divergence. For example, all of the statistically downscaled simulations showed an annual increase in precipitation and corresponding increase in river runoff, while the RCM downscaled simulations showed both increases and decreases in runoff. 
According to the two projections that best represent runoff for the observed climate, increased runoff would generally be expected for this basin in the future. Dealing with such variability in results is not atypical for assessing climate change impacts in Africa and practitioners are faced with how to interpret them. This work highlights the need for additional, well-coordinated regional climate downscaling for the region to further define the range of uncertainties involved.

  20. System matrix computation vs storage on GPU: A comparative study in cone beam CT.

    PubMed

    Matenine, Dmitri; Côté, Geoffroi; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe

    2018-02-01

Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersection distances between the trajectories of photons and the object, also called ray tracing or system matrix computation. This work, focused on the thin-ray model, is aimed at comparing different system matrix handling strategies using graphical processing units (GPUs). In this work, the system matrix is modeled by thin rays intersecting a regular grid of box-shaped voxels, known to be an accurate representation of the forward projection operator in CT. However, an uncompressed system matrix exceeds the random access memory (RAM) capacities of typical computers by one order of magnitude or more. Considering the RAM limitations of GPU hardware, several system matrix handling methods were compared: full storage of a compressed system matrix, on-the-fly computation of its coefficients, and partial storage of the system matrix with partial on-the-fly computation. These methods were tested on geometries mimicking a cone beam CT (CBCT) acquisition of a human head. Execution times of three routines of interest were compared: forward projection, backprojection, and ordered-subsets convex (OSC) iteration. A fully stored system matrix yielded the shortest backprojection and OSC iteration times, with a 1.52× acceleration for OSC when compared to the on-the-fly approach. Nevertheless, the maximum problem size was bound by the available GPU RAM and geometrical symmetries. On-the-fly coefficient computation did not require symmetries and was shown to be the fastest for forward projection. It also offered reasonable execution times of about 176.4 ms per view per OSC iteration for a detector of 512 × 448 pixels and a volume of 384³ voxels, using commodity GPU hardware. Partial system matrix storage showed a performance similar to the on-the-fly approach, while still relying on symmetries. 
Partial system matrix storage was shown to yield the lowest relative performance. On-the-fly ray tracing was shown to be the most flexible method, yielding reasonable execution times. A fully stored system matrix allowed for the lowest backprojection and OSC iteration times and may be of interest for certain performance-oriented applications. © 2017 American Association of Physicists in Medicine.

  1. Deformation Response and Life of Metallic Composites

    NASA Technical Reports Server (NTRS)

    Lissenden, Cliff J.

    2005-01-01

    The project was initially funded for one year (for $100,764) to investigate the potential of particulate reinforced metals for aeropropulsion applications and to generate fatigue results that quantify the mean stress effect for a titanium alloy matrix material (TIMETAL 21S). The project was continued for a second year (for $85,000) to more closely investigate cyclic deformation, especially ratcheting, of the titanium alloy matrix at elevated temperature. Equipment was purchased (for $19,000) to make the experimental program feasible; this equipment included an extensometer calibrator and a multi-channel signal conditioning amplifier. The project was continued for a third year ($50,000) to conduct cyclic relaxation experiments aimed at validating the elastic-viscoelastic-viscoplastic model that NASA GRC had developed for the titanium alloy. Finally, a one-year no cost extension was granted to enable continued analysis of the experimental results and model comparisons.

  2. Projection of postgraduate students flow with a smoothing matrix transition diagram of Markov chain

    NASA Astrophysics Data System (ADS)

    Rahim, Rahela; Ibrahim, Haslinda; Adnan, Farah Adibah

    2013-04-01

This paper presents a case study modeling postgraduate student flow at the College of Arts and Sciences, Universiti Utara Malaysia. First, full-time postgraduate students and the semesters they were in were identified. Then administrative data were used to estimate the transitions between these semesters for the 2001-2005 period. A Markov chain model is developed to calculate 5- and 10-year projections of postgraduate student flow at the college. The optimization question addressed in this study is: which transitions would sustain the desired structure in a dynamic situation, such as a trend towards graduation? Smoothed transition probabilities are proposed to estimate the 16 × 16 transition probability matrix. The results show that, using smoothed transition probabilities, the projected numbers of postgraduate students enrolled in the respective semesters are closer to the actual figures than those obtained using the conventional steady-state transition probabilities.
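The mechanics of such a projection reduce to repeatedly applying the transition matrix to the current enrolment vector. A minimal sketch with an invented 4-state chain (three semesters plus an absorbing "graduated" state) standing in for the paper's 16 × 16 matrix:

```python
import numpy as np

# Invented per-semester transition probabilities (column j -> row i).
# Columns may sum to less than 1; the shortfall is attrition/dropout.
P = np.array([[0.20, 0.00, 0.00, 0.0],   # repeat semester 1
              [0.75, 0.15, 0.00, 0.0],   # advance to semester 2, repeat 2
              [0.00, 0.80, 0.25, 0.0],   # advance to semester 3, repeat 3
              [0.00, 0.00, 0.70, 1.0]])  # graduate (absorbing state)

n0 = np.array([100.0, 80.0, 60.0, 0.0])  # current enrolment by semester

def project(n, steps):
    """Project the enrolment vector forward a given number of semesters."""
    for _ in range(steps):
        n = P @ n
    return n

n_5 = project(n0, 5)      # 5-step projection
n_10 = project(n0, 10)    # 10-step projection
```

In the study, the entries of the transition matrix are estimated from administrative records and then smoothed; the smoothing changes `P`, not the projection recursion itself.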

  3. Spin-Projected Matrix Product States: Versatile Tool for Strongly Correlated Systems.

    PubMed

    Li, Zhendong; Chan, Garnet Kin-Lic

    2017-06-13

We present a new wave function ansatz that combines the strengths of spin projection with the language of matrix product states (MPS) and matrix product operators (MPO) as used in the density matrix renormalization group (DMRG). Specifically, spin-projected matrix product states (SP-MPS) are constructed as [Formula: see text], where [Formula: see text] is the spin projector for total spin S and |Ψ_MPS^(N,M)⟩ is an MPS wave function with a given particle number N and spin projection M. This new ansatz possesses several attractive features: (1) It provides a much simpler route to achieve spin adaptation (i.e., to create eigenfunctions of Ŝ²) compared to explicitly incorporating the non-Abelian SU(2) symmetry into the MPS. In particular, since the underlying state |Ψ_MPS^(N,M)⟩ in the SP-MPS uses only Abelian symmetries, one does not need the singlet embedding scheme for nonsinglet states, as normally employed in spin-adapted DMRG, to achieve a single consistent variationally optimized state. (2) Due to the use of |Ψ_MPS^(N,M)⟩ as its underlying state, the SP-MPS can be closely connected to broken-symmetry mean-field states. This allows one to straightforwardly generate the large number of broken-symmetry guesses needed to explore complex electronic landscapes in magnetic systems. Further, this connection can be exploited in the future development of quantum embedding theories for open-shell systems. (3) The sum-of-MPOs representation for the Hamiltonian and spin projector [Formula: see text] naturally leads to an embarrassingly parallel algorithm for computing expectation values and optimizing SP-MPS. (4) Optimizing SP-MPS belongs to the variation-after-projection (VAP) class of spin-projected theories. Unlike usual spin-projected theories based on determinants, the SP-MPS ansatz can be made essentially exact simply by increasing the bond dimensions in |Ψ_MPS^(N,M)⟩.
Excited states can also be computed simply by imposing orthogonality constraints, which are easy to implement with MPS. To illustrate the versatility of SP-MPS, we formulate algorithms for the optimization of ground and excited states, develop perturbation theory based on SP-MPS, and describe how to evaluate spin-independent and spin-dependent properties such as the reduced density matrices. We demonstrate the numerical performance of SP-MPS with applications to several models typical of strong correlation, including the Hubbard model, and [2Fe-2S] and [4Fe-4S] model complexes.

4. Research and Development Project Prioritization. An Annotated Bibliography.

    DTIC Science & Technology

    1980-04-01

matrix) theory provides the answer in any particular problem. The matrix used is a table to express the number of votes cast for each motion...the majority-rule model and the game model. In 1964, Aumann's chapter in Shelly and Bryan's book [187] briefly described ordinal utility ranking...propositions to cast doubt on the existence of Bergson-Samuelson SWFs. They demonstrated that it was impossible to find a "reasonable" Bergson

  5. Tensor discriminant color space for face recognition.

    PubMed

    Wang, Su-Jing; Yang, Jian; Zhang, Na; Zhou, Chun-Guang

    2011-09-01

Recent research efforts reveal that color may provide useful information for face recognition. For different visual tasks, the choice of a color space is generally different. How can a color space be sought for a specific face recognition problem? To address this problem, this paper represents a color image as a third-order tensor and presents the tensor discriminant color space (TDCS) model. The model can keep the underlying spatial structure of color images. With the definition of n-mode between-class scatter matrices and within-class scatter matrices, TDCS constructs an iterative procedure to obtain one color space transformation matrix and two discriminant projection matrices by maximizing the ratio of these two scatter matrices. The experiments are conducted on two color face databases, the AR and Georgia Tech face databases, and the results show that both the performance and the efficiency of the proposed method are better than those of the state-of-the-art color image discriminant model, which involves one color space transformation matrix and one discriminant projection matrix, specifically in a complicated face database with various pose variations.

  6. YES. The Young-adult Employment Supports Project. School-to-Work Outreach Project 1998 Exemplary Model/Practice/Strategy.

    ERIC Educational Resources Information Center

    Minnesota Univ., Minneapolis. Inst. on Community Integration.

    The Young Adults Employment Supports Project (YES) of Matrix Research Institute (MRI) has been identified as an exemplary school-to-work program that includes students with disabilities. The program serves young persons with serious emotional disorders between the ages of 17 and 22 throughout Philadelphia who are preparing to exit special education…

  7. CMC Research at NASA Glenn in 2016: Recent Progress and Plans

    NASA Technical Reports Server (NTRS)

    Grady, Joseph E.

    2016-01-01

    As part of NASA's Aeronautical Sciences project, Glenn Research Center has developed advanced fiber and matrix constituents for a 2700 degrees Fahrenheit CMC (Ceramic Matrix Composite) for turbine engine applications. Fiber and matrix development and characterization will be reviewed. Resulting improvements in CMC mechanical properties and durability will be summarized. Plans for 2015 will be described, including development and validation of models predicting effects of the engine environment on durability of SiC/SiC composites with Environmental Barrier Coatings (EBCs).

  8. Exploring multicollinearity using a random matrix theory approach.

    PubMed

    Feher, Kristen; Whelan, James; Müller, Samuel

    2012-01-01

    Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model that is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but rather characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure for clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a 'low' correlation between a pair of genes may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
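    A minimal simulation of the idea, one latent signal projected into many dimensions plus independent noise, reproduces the characteristic eigenspectrum: a single large eigenvalue carrying the cluster and a bulk of small ones. All parameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 40                           # n samples, p "genes"
signal = rng.standard_normal(n)          # one-dimensional latent signal
noise_sd = 0.5
# each variable = the same signal + independent noise (a multicollinear cluster)
X = signal[:, None] + noise_sd * rng.standard_normal((n, p))
R = np.corrcoef(X, rowvar=False)         # p x p correlation matrix
eigs = np.sort(np.linalg.eigvalsh(R))[::-1]
print(eigs[0], eigs[1])                  # one spike, then a small bulk
```

    With pairwise correlations of roughly 1/(1 + noise_sd**2) = 0.8, the leading eigenvalue sits near 1 + (p - 1) * 0.8 while the rest cluster near 0.2, which is the collective signature the eigenspectrum-based procedure exploits.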

  9. Optimized Projection Matrix for Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Xu, Jianping; Pi, Yiming; Cao, Zongjie

    2010-12-01

    Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. Until now, papers on CS have typically assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. This method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. Because of the problem's complexity, an exact solution is intractable; therefore, an alternating-minimization method is used to find a feasible solution. The optimally designed projection matrix can further reduce the number of samples necessary for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
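    A rough sketch of the approach, in the spirit of ETF-based alternating minimization though not the authors' exact algorithm, shrinks large off-diagonal Gram entries and refits the projection matrix; the threshold t and all dimensions are arbitrary choices for illustration:

```python
import numpy as np

def mutual_coherence(D):
    """Largest |inner product| between distinct normalized columns."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(0)
m, n, k = 20, 60, 80
Psi = rng.standard_normal((n, k))        # sparsifying dictionary (stand-in)
Phi = rng.standard_normal((m, n))        # random projection matrix (baseline)
Psi_pinv = np.linalg.pinv(Psi)
D = Phi @ Psi                            # effective dictionary
mu0 = mutual_coherence(D)
best_mu, best_Phi = mu0, Phi.copy()

t = 0.3                                  # shrinkage threshold (tuning choice)
for _ in range(50):
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.clip(Dn.T @ Dn, -t, t)        # push Gram towards equiangular
    np.fill_diagonal(G, 1.0)
    w, V = np.linalg.eigh(G)             # nearest rank-m factorization
    Dn = (V[:, -m:] * np.sqrt(np.clip(w[-m:], 0, None))).T
    Phi = Dn @ Psi_pinv                  # back out a projection matrix
    D = Phi @ Psi
    mu = mutual_coherence(D)
    if mu < best_mu:
        best_mu, best_Phi = mu, Phi.copy()
print(mu0, best_mu)
```

    Tracking the best iterate guarantees the returned projection is never worse than the random baseline, even though the alternating iteration itself is not monotone.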

  10. Development and application of a density dependent matrix ...

    EPA Pesticide Factsheets

    Ranging along the Atlantic coast from US Florida to the Maritime Provinces of Canada, the Atlantic killifish (Fundulus heteroclitus) is an important and well-studied model organism for understanding the effects of pollutants and other stressors in estuarine and marine ecosystems. Matrix population models are useful tools for ecological risk assessment because they integrate effects across the life cycle, provide a linkage between endpoints observed in the individual and ecological risk to the population as a whole, and project outcomes for many generations in the future. We developed a density dependent matrix population model for Atlantic killifish by modifying a model developed for fathead minnow (Pimephales promelas) that has proved to be extremely useful, e.g. to incorporate data from laboratory studies and project effects of endocrine disrupting chemicals. We developed a size-structured model (as opposed to one that is based upon developmental stages or age class structure) so that we could readily incorporate output from a Dynamic Energy Budget (DEB) model, currently under development. Due to a lack of sufficient data to accurately define killifish responses to density dependence, we tested a number of scenarios realistic for other fish species in order to demonstrate the outcome of including this ecologically important factor. We applied the model using published data for killifish exposed to dioxin-like compounds, and compared our results to those using

  11. On the matrix Fourier filtering problem for a class of models of nonlinear optical systems with a feedback

    NASA Astrophysics Data System (ADS)

    Razgulin, A. V.; Sazonova, S. V.

    2017-09-01

    A novel statement of the Fourier filtering problem based on the use of matrix Fourier filters instead of conventional multiplier filters is considered. The basic properties of matrix Fourier filtering for filters in the Hilbert-Schmidt class are established. It is proved that finite-energy solutions of the periodic initial boundary value problem for the quasi-linear functional differential diffusion equation with matrix Fourier filtering depend Lipschitz-continuously on the filter. The problem of optimal matrix Fourier filtering is formulated, and its solvability for various classes of matrix Fourier filters is proved. It is proved that the objective functional is differentiable with respect to the matrix Fourier filter, and the convergence of a version of the gradient projection method is also proved.

  12. Supervised orthogonal discriminant subspace projects learning for face recognition.

    PubMed

    Chen, Yu; Xu, Xiao-Hong

    2014-02-01

    In this paper, a new linear dimension reduction method called supervised orthogonal discriminant subspace projection (SODSP) is proposed, which addresses the high dimensionality of data and the small sample size problem. More specifically, given a set of data points in the ambient space, a novel weight matrix that describes the relationship between the data points is first built. To model the manifold structure, the class information is incorporated into the weight matrix. Based on the novel weight matrix, the local scatter matrix as well as the non-local scatter matrix is defined such that the neighborhood structure can be preserved. In order to enhance the recognition ability, we impose an orthogonal constraint on a graph-based maximum margin analysis, seeking a projection that maximizes the difference, rather than the ratio, between the non-local scatter and the local scatter. In this way, SODSP naturally avoids the singularity problem. Further, we develop an efficient and stable algorithm for implementing SODSP, especially on high-dimensional data sets. Moreover, the theoretical analysis shows that LPP is a special case of SODSP obtained by imposing certain constraints. Experiments on the ORL, Yale, Extended Yale face database B and FERET face databases are performed to test and evaluate the proposed algorithm. The results demonstrate the effectiveness of SODSP. Copyright © 2013 Elsevier Ltd. All rights reserved.
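    The difference criterion is what lets SODSP sidestep singularity: maximizing tr(W^T (S_N - S_L) W) under W^T W = I needs only a symmetric eigendecomposition, never a matrix inverse. A simplified sketch, using plain same-class/different-class pair weights rather than the paper's neighborhood graph:

```python
import numpy as np

def difference_projection(X, y, k):
    """Orthogonal projection maximizing tr(W^T (S_nonlocal - S_local) W):
    the top-k eigenvectors of a symmetric difference matrix, so no
    matrix inversion (hence no singularity problem) is involved."""
    n, d = X.shape
    same = (y[:, None] == y[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)
    diff = 1.0 - same
    np.fill_diagonal(diff, 0.0)
    def scatter(weight):
        S = np.zeros((d, d))
        for i in range(n):
            for j in range(n):
                if weight[i, j]:
                    delta = X[i] - X[j]
                    S += np.outer(delta, delta)
        return S / 2.0
    M = scatter(diff) - scatter(same)      # non-local minus local scatter
    evals, evecs = np.linalg.eigh(M)
    return evecs[:, ::-1][:, :k]           # orthonormal columns by construction

rng = np.random.default_rng(2)
Xa = rng.standard_normal((20, 4)) + np.array([2.5, 0.0, 0.0, 0.0])
Xb = rng.standard_normal((20, 4))
X = np.vstack([Xa, Xb]); y = np.array([0] * 20 + [1] * 20)
W = difference_projection(X, y, 2)
Z = X @ W
sep = abs(Z[:20, 0].mean() - Z[20:, 0].mean())
print(np.round(W.T @ W, 6), sep)
```

    Contrast this with ratio-based criteria such as LDA, which require inverting a within-class scatter matrix that is singular whenever the sample size is below the dimension.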

  13. Using Dynamic Multi-Task Non-Negative Matrix Factorization to Detect the Evolution of User Preferences in Collaborative Filtering

    PubMed Central

    Ju, Bin; Qian, Yuntao; Ye, Minchao; Ni, Rong; Zhu, Chenxi

    2015-01-01

    Predicting what items will be selected by a target user in the future is an important function for recommendation systems. Matrix factorization techniques have been shown to achieve good performance on temporal rating-type data, but little is known about temporal item selection data. In this paper, we developed a unified model that combines Multi-task Non-negative Matrix Factorization and Linear Dynamical Systems to capture the evolution of user preferences. Specifically, user and item features are projected into latent factor space by factoring co-occurrence matrices into a common basis item-factor matrix and multiple factor-user matrices. Moreover, we represented both within and between relationships of multiple factor-user matrices using a state transition matrix to capture the changes in user preferences over time. The experiments show that our proposed algorithm outperforms the other algorithms on two real datasets, which were extracted from Netflix movies and Last.fm music. Furthermore, our model provides a novel dynamic topic model for tracking the evolution of the behavior of a user over time. PMID:26270539
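    The non-negative factorization at the heart of the model can be illustrated with standard Lee-Seung multiplicative updates; the full model's multi-task coupling and state transition matrix are omitted, and all sizes here are illustrative:

```python
import numpy as np

def nmf(V, r, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H (Frobenius loss)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-9                             # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
# synthetic non-negative "co-occurrence" matrix with exact rank-3 structure
V = rng.random((30, 3)) @ rng.random((3, 20))
W, H = nmf(V, 3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)
```

    The multiplicative form keeps W and H non-negative at every step, which is why it is a common building block for interpretable latent-factor models of user preference.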

  15. Transformation Abilities: A Reanalysis and Confirmation of SOI Theory.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; And Others

    1987-01-01

    Confirmatory factor analysis was used to reanalyze correlational data from selected variables in Guilford's Aptitudes Research Project. Results indicated that Guilford's model reproduced the original correlation matrix more closely than other models. Most of Guilford's tests indicated high loadings on their hypothesized factors. (GDC)

  16. Error due to unresolved scales in estimation problems for atmospheric data assimilation

    NASA Astrophysics Data System (ADS)

    Janjic, Tijana

    The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution, the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use.
Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only modeling of the covariance matrix obtained by evaluating the covariance function at the observation points. We first assumed that this covariance matrix is stationary and that the unresolved scales are not correlated between the observation points, i.e., the matrix is diagonal, and that the values along the diagonal are constant. Tests with these assumptions were unsuccessful, indicating that a more sophisticated model of the covariance is needed for assimilation of data with nonstationary spectrum. A new method for modeling the covariance matrix based on an extended set of modeling assumptions is proposed. First, it is assumed that the covariance matrix is diagonal, that is, that the unresolved scales are not correlated between the observation points. It is postulated that the values on the diagonal depend on a wavenumber that is characteristic for the unresolved part of the spectrum. It is further postulated that this characteristic wavenumber can be diagnosed from the observations and from the estimate of the projection of the state that is being estimated. It is demonstrated that the new method successfully overcomes previously encountered difficulties.

  17. The paradox of managing a project-oriented matrix: establishing coherence within chaos.

    PubMed

    Greiner, L E; Schein, V E

    1981-01-01

    Projects that require the flexible coordination of multidisciplinary teams have tended to adopt a matrix structure to accomplish complex tasks. Yet these project-oriented matrix structures themselves require careful coordination if they are to realize the objectives set for them. The authors identify the basic organizational questions that project-oriented matrix organizations must face. They examine the relationship between responsibility and authority; the tradeoffs between economic efficiency and the technical quality of the work produced; and the sensitive issues of managing individualistic, highly trained professionals while also maintaining group cohesiveness.

  18. Force Project Technology Presentation to the NRCC

    DTIC Science & Technology

    2014-02-04

    Functional Bridge components Smart Odometer Adv Pretreatment Smart Bridge Multi-functional Gap Crossing Fuel Automated Tracking System Adv...comprehensive matrix of candidate composite material systems and textile reinforcement architectures via modeling/analyses and testing. Product(s...Validated Dynamic Modeling tool based on parametric study using material models to reliably predict the textile mechanics of the hose

  19. EVALUATION OF THE EFFICACY OF EXTRAPOLATION POPULATION MODELING TO PREDICT THE DYNAMICS OF AMERICAMYSIS BAHIA POPULATIONS IN THE LABORATORY

    EPA Science Inventory

    An age-classified projection matrix model has been developed to extrapolate the chronic (28-35d) demographic responses of Americamysis bahia (formerly Mysidopsis bahia) to population-level response. This study was conducted to evaluate the efficacy of this model for predicting t...

  20. Stage-Structured Population Dynamics of AEDES AEGYPTI

    NASA Astrophysics Data System (ADS)

    Yusoff, Nuraini; Budin, Harun; Ismail, Salemah

    Aedes aegypti is the main vector in the transmission of dengue fever, a vector-borne disease affecting populations living in tropical and sub-tropical countries. Better understanding of the dynamics of its population growth will help in the efforts of controlling the spread of this disease. In looking at the population dynamics of Aedes aegypti, this paper explored stage-structured modeling of the population growth of the mosquito using the matrix population model. The life cycle of the mosquito was divided into five stages: eggs, larvae, pupae, adult1 and adult2. Developmental rates were obtained for the average Malaysian temperature, and these were used in constructing the transition matrix for the matrix model. The model, which was based only on temperature, projected that the population of Aedes aegypti will grow without bound over time, which is not realistic. For further work, other factors need to be taken into account to obtain a more realistic result.

  1. Free Fermions and the Classical Compact Groups

    NASA Astrophysics Data System (ADS)

    Cunden, Fabio Deelan; Mezzadri, Francesco; O'Connell, Neil

    2018-06-01

    There is a close connection between the ground state of non-interacting fermions in a box with classical (absorbing, reflecting, and periodic) boundary conditions and the eigenvalue statistics of the classical compact groups. The associated determinantal point processes can be extended in two natural directions: (i) we consider the full family of admissible quantum boundary conditions (i.e., self-adjoint extensions) for the Laplacian on a bounded interval, and the corresponding projection correlation kernels; (ii) we construct the grand canonical extensions at finite temperature of the projection kernels, interpolating from Poisson to random matrix eigenvalue statistics. The scaling limits in the bulk and at the edges are studied in a unified framework, and the question of universality is addressed. Whether the finite temperature determinantal processes correspond to the eigenvalue statistics of some matrix models is, a priori, not obvious. We complete the picture by constructing a finite temperature extension of the Haar measure on the classical compact groups. The eigenvalue statistics of the resulting grand canonical matrix models (of random size) corresponds exactly to the grand canonical measure of free fermions with classical boundary conditions.

  2. Modeling stiffness loss in boron/aluminum below the fatigue limit

    NASA Technical Reports Server (NTRS)

    Johnson, W. S.

    1982-01-01

    Boron/aluminum can develop significant internal matrix cracking when fatigued. These matrix cracks can result in a 40 percent secant modulus loss in some laminates, even when fatigued below the fatigue limit. It is shown that the same amount of fatigue damage will develop during stress or strain-controlled tests. Stacking sequence has little influence on secant modulus loss. The secant modulus loss in unidirectional composites is small, whereas the losses are substantial in laminates containing off-axis plies. A simple analysis is presented that predicts unnotched laminate secant modulus loss due to fatigue. The analysis is based upon the elastic modulus and Poisson's ratio of the fiber and matrix, fiber volume fraction, fiber orientations, and the cyclic-hardened yield stress of the matrix material. Excellent agreement was achieved between model predictions and experimental results. With this model, designers can project the material stiffness loss for design load or strain levels and assess the feasibility of its use in stiffness-critical parts.

  3. Tensor-GMRES method for large sparse systems of nonlinear equations

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1994-01-01

    This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.

  4. Organism and population-level ecological models for ...

    EPA Pesticide Factsheets

    Ecological risk assessment typically focuses on animal populations as endpoints for regulatory ecotoxicology. Scientists at USEPA are developing models for animal populations exposed to a wide range of chemicals from pesticides to emerging contaminants. Modeled taxa include aquatic and terrestrial invertebrates, fish, amphibians, and birds, and the models employ a wide range of methods, from matrix-based projection models to mechanistic bioenergetics models and spatially explicit population models.

  5. Project-Line Interaction: Implementing Projects in JPL's Matrix

    NASA Technical Reports Server (NTRS)

    Baroff, Lynn E.

    2006-01-01

    Can programmatic and line organizations really work interdependently, to accomplish their work as a community? Does the matrix produce a culture in which individuals take personal responsibility for both immediate mission success and long-term growth? What is the secret to making a matrix enterprise actually work? This paper will consider those questions, and propose that developing an effective project-line partnership demands primary attention to personal interactions among people. Many potential problems can be addressed by careful definition of roles, responsibilities, and work processes for both parts of the matrix -- and by deliberate and clear communication between project and line organizations and individuals.

  6. Discriminative Projection Selection Based Face Image Hashing

    NASA Astrophysics Data System (ADS)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
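    The row-selection step can be sketched as a per-row Fisher score computed from one user's features against everyone else's; the feature dimensions and the separation along the first axis are assumptions for illustration, and the paper's bimodal GMM quantization step is omitted:

```python
import numpy as np

def select_rows(P, user_feats, other_feats, k):
    """Rank rows of a random projection matrix by a per-row Fisher score:
    (gap between group means)^2 / (sum of within-group variances)."""
    pu = user_feats @ P.T             # projections of the enrolled user's samples
    po = other_feats @ P.T            # projections of everyone else's samples
    score = (pu.mean(0) - po.mean(0)) ** 2 / (pu.var(0) + po.var(0) + 1e-12)
    return np.argsort(-score)[:k]     # indices of the k most discriminative rows

rng = np.random.default_rng(3)
d, m = 32, 16
P = rng.standard_normal((m, d))       # user-independent random projection
# hypothetical features: this user differs from the population along axis 0
user = rng.standard_normal((25, d)); user[:, 0] += 4.0
others = rng.standard_normal((200, d))
rows = select_rows(P, user, others, 4)
print(rows)
```

    Because the selected subset depends on the enrolled user's samples, the resulting hash is user-dependent even though the underlying projection matrix is shared.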

  7. Visualization of x-ray computer tomography using computer-generated holography

    NASA Astrophysics Data System (ADS)

    Daibo, Masahiro; Tayama, Norio

    1998-09-01

    A theory for converting x-ray projection data directly into a hologram, obtained by combining computed tomography (CT) with the computer-generated hologram (CGH), is proposed. The purpose of this study is to offer a theory for realizing an all-electronic, high-speed, see-through 3D visualization system for application to medical diagnosis and non-destructive testing. First, the CT is expressed using the pseudo-inverse matrix, which is obtained by the singular value decomposition, and the CGH is expressed in matrix form. Next, the 'projection to hologram conversion' (PTHC) matrix is calculated by multiplying the phase matrix of the CGH with the pseudo-inverse matrix of the CT. Finally, the projection vector is converted to the hologram vector directly, by multiplying the PTHC matrix with the projection vector. By incorporating holographic analog computation into CT reconstruction, the amount of calculation is drastically reduced. We demonstrate a CT cross section reconstructed with a He-Ne laser in 3D space from real x-ray projection data acquired by x-ray television equipment, using our direct conversion technique.
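    The direct conversion reduces to two matrix products. A toy 1-D version, with a random stand-in for the CT system matrix and an assumed form for the phase matrix, shows that applying the PTHC matrix to projection data reproduces the hologram of the object without an explicit reconstruction step:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                       # pixels in a toy 1-D cross-section
M = 48                       # number of ray measurements (M >= N)
C = rng.random((M, N))       # stand-in CT system matrix: rays x pixels
Hm = np.exp(2j * np.pi * rng.random((64, N)))  # assumed CGH phase matrix

# projection-to-hologram conversion matrix: hologram = Hm @ pinv(C) @ p
PTHC = Hm @ np.linalg.pinv(C)

f = rng.random(N)            # object cross-section
p = C @ f                    # simulated x-ray projection data
holo = PTHC @ p              # direct conversion, no explicit CT reconstruction
print(np.allclose(holo, Hm @ f))
```

    Precomputing PTHC once is what moves the cost out of the per-frame loop: at run time only a single matrix-vector product is needed.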

  8. PROJECTING POPULATION-LEVEL RESPONSE OF PURPLE SEA URCHINS TO LEAD CONTAMINATION FOR AN ESTUARINE ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    As part of an ecological risk assessment case study at the Portsmouth naval Shipyard (PNS), Kittery, Maine, USA, the population level effects of lead exposure to purple sea urchin, Arbacia punctulata, were investigated using a stage-classified matrix population model. The model d...

  9. The Effects of Measurement Error on Statistical Models for Analyzing Change. Final Report.

    ERIC Educational Resources Information Center

    Dunivant, Noel

    The results of six major projects are discussed including a comprehensive mathematical and statistical analysis of the problems caused by errors of measurement in linear models for assessing change. In a general matrix representation of the problem, several new analytic results are proved concerning the parameters which affect bias in…

  10. CMC Research at NASA Glenn in 2015: Recent Progress and Plans

    NASA Technical Reports Server (NTRS)

    Grady, Joseph E.

    2015-01-01

    As part of NASA's Aeronautical Sciences project, Glenn Research Center has developed advanced fiber and matrix constituents for a 2700F CMC for turbine engine applications. Fiber and matrix development and characterization will be reviewed. Resulting improvements in CMC mechanical properties and durability will be summarized. Plans for 2015 will be described, including development and validation of models predicting effects of the engine environment on durability of SiC/SiC composites with Environmental Barrier Coatings.

  11. 25 CFR Appendix A to Subpart C - IRR High Priority Project Scoring Matrix

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false IRR High Priority Project Scoring Matrix A Appendix A to...—IRR High Priority Project Scoring Matrix Score 10 5 3 1 0 Accident and fatality rate for candidate...,000 or less 250,001-500,000 500,001-750,000 Over 750,000. Geographic isolation No external access to...

  12. 25 CFR Appendix A to Subpart C - IRR High Priority Project Scoring Matrix

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false IRR High Priority Project Scoring Matrix A Appendix A to...—IRR High Priority Project Scoring Matrix Score 10 5 3 1 0 Accident and fatality rate for candidate...,000 or less 250,001-500,000 500,001-750,000 Over 750,000. Geographic isolation No external access to...

  13. 25 CFR Appendix A to Subpart C - IRR High Priority Project Scoring Matrix

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false IRR High Priority Project Scoring Matrix A Appendix A to...—IRR High Priority Project Scoring Matrix Score 10 5 3 1 0 Accident and fatality rate for candidate...,000 or less 250,001-500,000 500,001-750,000 Over 750,000. Geographic isolation No external access to...

  14. 25 CFR Appendix A to Subpart C - IRR High Priority Project Scoring Matrix

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true IRR High Priority Project Scoring Matrix A Appendix A to...—IRR High Priority Project Scoring Matrix Score 10 5 3 1 0 Accident and fatality rate for candidate...,000 or less 250,001-500,000 500,001-750,000 Over 750,000. Geographic isolation No external access to...

  15. A sparse matrix-vector multiplication based algorithm for accurate density matrix computations on systems of millions of atoms

    NASA Astrophysics Data System (ADS)

    Ghale, Purnima; Johnson, Harley T.

    2018-06-01

    We present an efficient sparse matrix-vector (SpMV) based method to compute the density matrix P from a given Hamiltonian in electronic structure computations. Our method is a hybrid approach based on Chebyshev-Jackson approximation theory and matrix purification methods like the second order spectral projection purification (SP2). Recent methods to compute the density matrix scale as O(N) in the number of floating point operations but are accompanied by large memory and communication overhead, and they are based on iterative use of the sparse matrix-matrix multiplication kernel (SpGEMM), which is known to be computationally irregular. In addition to irregularity in the sparse Hamiltonian H, the nonzero structure of intermediate estimates of P depends on products of H and evolves over the course of computation. On the other hand, an expansion of the density matrix P in terms of Chebyshev polynomials is straightforward and SpMV based; however, the resulting density matrix may not satisfy the required constraints exactly. In this paper, we analyze the strengths and weaknesses of the Chebyshev-Jackson polynomials and the second order spectral projection purification (SP2) method, and propose to combine them so that the accurate density matrix can be computed using the SpMV computational kernel only, and without having to store the density matrix P. Our method accomplishes these objectives by using the Chebyshev polynomial estimate as the initial guess for SP2, which is followed by using sparse matrix-vector multiplications (SpMVs) to replicate the behavior of the SP2 algorithm for purification. We demonstrate the method on a tight-binding model system of an oxide material containing more than 3 million atoms. In addition, we also present the predicted behavior of our method when applied to near-metallic Hamiltonians with a wide energy spectrum.
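    The SP2 branch rule itself is compact: map the spectrum into [0, 1], then repeatedly apply X² or 2X − X² depending on which trace is closer to the electron count. A dense-matrix sketch on a toy symmetric "Hamiltonian" (the paper's point is to realize this with SpMV kernels only, which is not attempted here):

```python
import numpy as np

def sp2_density(H, n_electrons, n_iter=100):
    """Second-order spectral projection (SP2): purify a mapped Hamiltonian
    into an idempotent density matrix P with trace(P) = n_electrons."""
    e = np.linalg.eigvalsh(H)
    emin, emax = e[0], e[-1]          # cheap Gershgorin bounds suffice in practice
    # map spectrum into [0, 1], reversed so low energies -> occupations near 1
    X = (emax * np.eye(H.shape[0]) - H) / (emax - emin)
    for _ in range(n_iter):
        X2 = X @ X
        # pick the branch whose trace moves closer to the electron count
        if abs(np.trace(X2) - n_electrons) < abs(np.trace(2 * X - X2) - n_electrons):
            X = X2                    # pushes small occupations toward 0
        else:
            X = 2 * X - X2            # pushes large occupations toward 1
    return X

rng = np.random.default_rng(4)
A = rng.standard_normal((12, 12))
H = (A + A.T) / 2                     # toy symmetric "Hamiltonian"
P = sp2_density(H, n_electrons=5)
print(np.trace(P), np.linalg.norm(P @ P - P))   # ~5 and ~0 at convergence
```

    Each SP2 step needs only X @ X; the paper's contribution is to replace even that matrix-matrix product with matrix-vector products seeded by a Chebyshev-polynomial initial guess.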

  16. Modeling of Damage Initiation and Progression in a SiC/SiC Woven Ceramic Matrix Composite

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Goldberg, Robert K.; Bonacuse, Peter J.

    2012-01-01

    The goal of an ongoing project at NASA Glenn is to investigate the effects of the complex microstructure of a woven ceramic matrix composite and its variability on the effective properties and the durability of the material. Detailed analysis of these complex microstructures may provide clues regarding damage initiation and damage propagation for the material scientists who 'design the material' or for the structural analysts and designers who 'design with the material'. A model material system, specifically a five-harness satin weave architecture CVI SiC/SiC composite composed of Sylramic-iBN fibers and a SiC matrix, has been analyzed. Specimens of the material were serially sectioned and polished to capture detailed images of fiber tows, matrix and porosity. Open source analysis tools were used to isolate the various constituents, and finite element models were then generated from simplified versions of those images. Detailed finite element analyses were performed to examine how variability in the local microstructure affected the macroscopic behavior as well as local damage initiation and progression. Results indicate that the locations where damage initiated and propagated are linked to specific microstructural features.

  17. A framework for studying transient dynamics of population projection matrix models.

    PubMed

    Stott, Iain; Townley, Stuart; Hodgson, David James

    2011-09-01

    Empirical models are central to effective conservation and population management, and should be predictive of real-world dynamics. Available modelling methods are diverse, but analysis usually focuses on long-term dynamics that are unable to describe the complicated short-term time series that can arise even from simple models following ecological disturbances or perturbations. Recent interest in such transient dynamics has led to diverse methodologies for their quantification in density-independent, time-invariant population projection matrix (PPM) models, but the fragmented nature of this literature has stifled the widespread analysis of transients. We review the literature on transient analyses of linear PPM models and synthesise a coherent framework. We promote the use of standardised indices, and categorise indices according to their focus on either convergence times or transient population density, and on either transient bounds or case-specific transient dynamics. We use a large database of empirical PPM models to explore relationships between indices of transient dynamics. This analysis promotes the use of population inertia as a simple, versatile and informative predictor of transient population density, but criticises the utility of established indices of convergence times. Our findings should guide further development of analyses of transient population dynamics using PPMs or other empirical modelling techniques. © 2011 Blackwell Publishing Ltd/CNRS.
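    To make the index categories concrete, here is a hedged sketch (our own notation and scalings, not the review's standardised indices verbatim) computing the asymptotic growth rate of a density-independent PPM together with two transient indices: reactivity (first-step amplification relative to stable growth) and population inertia (long-term density relative to an equivalent population at the stable stage distribution).

```python
import numpy as np

def transient_indices(A, n0):
    # Dominant eigen-triple of the projection matrix A
    eigvals, W = np.linalg.eig(A)
    i = np.argmax(eigvals.real)
    lam = eigvals[i].real
    w = np.abs(W[:, i].real); w /= w.sum()           # stable stage distribution
    eigvals_l, V = np.linalg.eig(A.T)
    j = np.argmax(eigvals_l.real)
    v = np.abs(V[:, j].real); v /= v @ w             # reproductive values, v.w = 1
    n0 = np.asarray(n0, float); n0 = n0 / n0.sum()   # normalised initial structure
    reactivity = (A @ n0).sum() / lam                # first-step amplification
    inertia = v @ n0                                 # long-term relative density
    return lam, reactivity, inertia
```

    Both indices equal 1 when the population starts at the stable stage distribution, matching the convention that transient indices measure departure from stable growth.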

  18. Metapopulation dynamics of a Burrowing Owl (Speotyto cunicularia) population in Colorado

    Treesearch

    R. Scott Lutz; David L. Plumpton

    1997-01-01

    We banded 555 Burrowing Owls (Speotyto cunicularia) either as adults (after hatch year; AHY) or as young of the year (hatch year; HY) and used capture-recapture models to estimate survival and recapture rates and Leslie matrix models to project population growth over time at the 6,900-ha Rocky Mountain Arsenal National Wildlife Refuge (RMANWR),...

  19. Growth model for uneven-aged loblolly pine stands : simulations and management implications

    Treesearch

    C.-R. Lin; J. Buongiorno; Jeffrey P. Prestemon; K. E. Skog

    1998-01-01

    A density-dependent matrix growth model of uneven-aged loblolly pine stands was developed with data from 991 permanent plots in the southern United States. The model predicts the number of pine, soft hardwood, and hard hardwood trees in 13 diameter classes, based on equations for ingrowth, upgrowth, and mortality. Projections of 6 to 10 years agreed with the growth...

  20. Final Report for "Implementation and Evaluation of Multigrid Linear Solvers into Extended Magnetohydrodynamic Codes for Petascale Computing"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srinath Vadlamani; Scott Kruger; Travis Austin

    Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel solvers to date are multigrid solvers. Applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising solution to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL, via PETSc from the DOE SciDAC TOPS project, for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC. We successfully implemented the multigrid solvers on the fusion test problem that allows for real matrix systems, and in the process learned about the details of NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow for efficient usage of future petascale computers at the National Leadership Facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.
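    As background on why multigrid scales so well, here is a minimal geometric V-cycle for a 1-D Poisson model problem, a toy stand-in for the ill-conditioned implicit systems discussed above (none of this code comes from NIMROD, HYPRE, or PETSc): weighted-Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation.

```python
import numpy as np

def apply_A(u, h):
    # 1-D Poisson operator (-u'') with homogeneous Dirichlet boundaries
    Au = 2.0 * u
    Au[:-1] -= u[1:]
    Au[1:] -= u[:-1]
    return Au / h**2

def v_cycle(u, f, h, nu=3, omega=2.0 / 3.0):
    n = len(u)
    if n <= 3:
        # coarsest grid: direct solve
        A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        return np.linalg.solve(A, f)
    for _ in range(nu):                          # pre-smoothing: weighted Jacobi
        u = u + omega * (h**2 / 2.0) * (f - apply_A(u, h))
    r = f - apply_A(u, h)
    rc = 0.25 * r[0:-2:2] + 0.5 * r[1::2] + 0.25 * r[2::2]   # full weighting
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, nu, omega)    # coarse correction
    e = np.zeros(n)                              # prolongation: linear interp.
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0] = 0.5 * ec[0]
    e[-1] = 0.5 * ec[-1]
    u = u + e
    for _ in range(nu):                          # post-smoothing
        u = u + omega * (h**2 / 2.0) * (f - apply_A(u, h))
    return u
```

    Each V-cycle reduces the error by a grid-independent factor, which is the property that makes multigrid attractive as a scalable preconditioner; the extended-MHD case is of course far harder because the fundamental modes are dispersive waves rather than diffusion.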

  1. Extracellular Matrix Biomarkers for Diagnosis, Prognosis, Imaging, and Targeting

    DTIC Science & Technology

    2015-09-01

    collaboration with the Lindquist lab. Funding Support: Please see previously provided other support and changes noted below. Name: Doris Tabassum ...Project: Doris Tabassum has generated cell line models of heterogeneity with different metastatic capability. Additional funds from Dr. Polyak’s grants

  2. Thalamocortical and intracortical laminar connectivity determines sleep spindle properties.

    PubMed

    Krishnan, Giri P; Rosen, Burke Q; Chen, Jen-Yung; Muller, Lyle; Sejnowski, Terrence J; Cash, Sydney S; Halgren, Eric; Bazhenov, Maxim

    2018-06-27

    Sleep spindles are brief oscillatory events during non-rapid eye movement (NREM) sleep. Spindle density and synchronization properties are different in MEG versus EEG recordings in humans and also vary with learning performance, suggesting spindle involvement in memory consolidation. Here, using computational models, we identified network mechanisms that may explain differences in spindle properties across cortical structures. First, we report that differences in spindle occurrence between MEG and EEG data may arise from the contrasting properties of the core and matrix thalamocortical systems. The matrix system, projecting superficially, has wider thalamocortical fanout compared to the core system, which projects to middle layers, and requires the recruitment of a larger population of neurons to initiate a spindle. This property was sufficient to explain lower spindle density and higher spatial synchrony of spindles in the superficial cortical layers, as observed in the EEG signal. In contrast, spindles in the core system occurred more frequently but less synchronously, as observed in the MEG recordings. Furthermore, consistent with human recordings, in the model, spindles occurred independently in the core system but the matrix system spindles commonly co-occurred with core spindles. We also found that the intracortical excitatory connections from layer III/IV to layer V promote spindle propagation from the core to the matrix system, leading to widespread spindle activity. Our study predicts that plasticity of intra- and inter-cortical connectivity can potentially be a mechanism for increased spindle density as has been observed during learning.

  3. Two Approaches of Studying Singularity of Projective Conics

    ERIC Educational Resources Information Center

    Broyles, Chris; Muller, Lars; Tikoo, Mohan; Wang, Haohao

    2010-01-01

    The singularity of a projective conic can be determined via the associated matrix of the implicit equation of the projective conic. In this expository article, we will first derive a known result for determining the singularity of a projective conic via the associated matrix. Then we will introduce the concepts of μ-basis of the parametric…

  4. Scale-Dependent Fracture-Matrix Interactions And Their Impact on Radionuclide Transport - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detwiler, Russell

    Matrix diffusion and adsorption within a rock matrix are widely regarded as important mechanisms for retarding the transport of radionuclides and other solutes in fractured rock (e.g., Neretnieks, 1980; Tang et al., 1981; Maloszewski and Zuber, 1985; Novakowski and Lapcevic, 1994; Jardine et al., 1999; Zhou and Xie, 2003; Reimus et al., 2003a,b). When remediation options are being evaluated for old sources of contamination, where a large fraction of contaminants reside within the rock matrix, slow diffusion out of the matrix greatly increases the difficulty and timeframe of remediation. Estimating the rates of solute exchange between fractures and the adjacent rock matrix is a critical factor in quantifying immobilization and/or remobilization of DOE-relevant contaminants within the subsurface. In principle, the most rigorous approach to modeling solute transport with fracture-matrix interaction would be based on local-scale coupled advection-diffusion/dispersion equations for the rock matrix and in discrete fractures that comprise the fracture network (Discrete Fracture Network and Matrix approach, hereinafter referred to as DFNM approach), fully resolving aperture variability in fractures and matrix property heterogeneity. However, such approaches are computationally demanding, and thus, many predictive models rely upon simplified models. These models typically idealize fractured rock masses as a single fracture or system of parallel fractures interacting with slabs of porous matrix, or as a mobile-immobile or multi-rate mass transfer system. These idealizations provide tractable approaches for interpreting tracer tests and predicting contaminant mobility, but rely upon a fitted effective matrix diffusivity or mass-transfer coefficients. However, because these fitted parameters are based upon simplified conceptual models, their effectiveness at predicting long-term transport processes remains uncertain. 
Evidence of scale dependence of effective matrix diffusion coefficients obtained from tracer tests highlights this point and suggests that the underlying mechanisms and the relationship between rock and fracture properties are not fully understood in large complex fracture networks. In this project, we developed a high-resolution DFN model of solute transport in fracture networks to explore and quantify the mechanisms that control transport in complex fracture networks and how these may give rise to observed scale-dependent matrix diffusion coefficients. Results demonstrate that small-scale heterogeneity in the flow field caused by local aperture variability within individual fractures can lead to long-tailed breakthrough curves indicative of matrix diffusion, even in the absence of interactions with the rock matrix. Furthermore, the temporal and spatial scale dependence of these processes highlights the inability of short-term tracer tests to estimate transport parameters that will control long-term fate and transport of contaminants in fractured aquifers.

  5. Risk assessment for two bird species in northern Wisconsin

    Treesearch

    Megan M. Friggens; Stephen N. Matthews

    2012-01-01

    Species distribution models for 147 bird species have been derived using climate, elevation, and distribution of current tree species as potential predictors (Matthews et al. 2011). In this case study, a risk matrix was developed for two bird species (fig. A2-5), with projected change in bird habitat (the x axis) based on models of changing suitable habitat resulting...

  6. Demographic projection of high-elevation white pines infected with white pine blister rust: a nonlinear disease model

    Treesearch

    S. G. Field; A. W. Schoettle; J. G. Klutsch; S. J. Tavener; M. F. Antolin

    2012-01-01

    Matrix population models have long been used to examine and predict the fate of threatened populations. However, the majority of these efforts concentrate on long-term equilibrium dynamics of linear systems and their underlying assumptions and, therefore, omit the analysis of transience. Since management decisions are typically concerned with the short term (

  7. The Reflective Teacher Leader: An Action Research Model

    ERIC Educational Resources Information Center

    Furtado, Leena; Anderson, Dawnette

    2012-01-01

    This study presents four teacher reflections from action research projects ranging from kindergarten to adult school improvements. A teacher leadership matrix guided participants to connect teaching and learning theory to best practices by exploring uncharted territory within an iterative cycle of research and action. Teachers developed the…

  8. A Weighted and Directed Interareal Connectivity Matrix for Macaque Cerebral Cortex

    PubMed Central

    Markov, N. T.; Ercsey-Ravasz, M. M.; Ribeiro Gomes, A. R.; Lamy, C.; Magrou, L.; Vezoli, J.; Misery, P.; Falchier, A.; Quilodran, R.; Gariel, M. A.; Sallet, J.; Gamanut, R.; Huissoud, C.; Clavagnier, S.; Giroud, P.; Sappey-Marinier, D.; Barone, P.; Dehay, C.; Toroczkai, Z.; Knoblauch, K.; Van Essen, D. C.; Kennedy, H.

    2014-01-01

    Retrograde tracer injections in 29 of the 91 areas of the macaque cerebral cortex revealed 1,615 interareal pathways, a third of which have not previously been reported. A weight index (extrinsic fraction of labeled neurons [FLNe]) was determined for each area-to-area pathway. Newly found projections were weaker on average compared with the known projections; nevertheless, the 2 sets of pathways had extensively overlapping weight distributions. Repeat injections across individuals revealed modest FLNe variability given the range of FLNe values (standard deviation <1 log unit, range 5 log units). The connectivity profile for each area conformed to a lognormal distribution, where a majority of projections are moderate or weak in strength. In the G29 × 29 interareal subgraph, two-thirds of the connections that can exist do exist. Analysis of the smallest set of areas that collects links from all 91 nodes of the G29 × 91 subgraph (dominating set analysis) confirms the dense (66%) structure of the cortical matrix. The G29 × 29 subgraph suggests an unexpectedly high incidence of unidirectional links. The directed and weighted G29 × 91 connectivity matrix for the macaque will be valuable for comparison with connectivity analyses in other species, including humans. It will also inform future modeling studies that explore the regularities of cortical networks. PMID:23010748

  9. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    PubMed

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material abundance analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication together with matrix inversion or determinant computation; these are difficult to program and especially hard to realize in hardware, and their computational cost increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes, via the Gram-Schmidt process, an orthogonal vector for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature: the unconstrained abundance is obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared with the Orthogonal Subspace Projection and Least Squares Error algorithms, this method needs no matrix inversion, which is computationally costly and hard to implement in hardware; it completes the orthogonalization through repeated vector operations, making it well suited to both parallel computation and hardware implementation. The soundness of the algorithm is established through its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity, the lowest of the three, is compared with theirs. Finally, experimental results on synthetic and real images provide further evidence of the method's effectiveness.
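    The projection-ratio idea can be sketched directly from the description above (our own illustrative code, not the authors' implementation): for each endmember, Gram-Schmidt removes the components lying in the span of the other endmembers, and the abundance follows from a ratio of two inner products.

```python
import numpy as np

def ovp_unmix(E, x):
    # E: (bands, p) endmember signatures as columns; x: pixel spectrum.
    # For each endmember e_i, build q orthogonal to all other endmembers;
    # then q.x = a_i * (q.e_i), so the unconstrained abundance is a ratio.
    p = E.shape[1]
    abund = np.zeros(p)
    for i in range(p):
        others = np.delete(E, i, axis=1)
        Q = []                                  # orthonormal basis of span(others)
        for k in range(others.shape[1]):
            u = others[:, k].copy()
            for qq in Q:
                u -= (qq @ others[:, k]) * qq   # Gram-Schmidt step
            nu = np.linalg.norm(u)
            if nu > 1e-12:
                Q.append(u / nu)
        q = E[:, i].copy()
        for qq in Q:
            q -= (qq @ q) * qq                  # q is now orthogonal to the others
        abund[i] = (q @ x) / (q @ E[:, i])      # ratio of projections
    return abund
```

    Only vector operations (dot products and axpy updates) appear, which is what makes the approach attractive for parallel and hardware implementations.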

  10. High-dimensional statistical inference: From vector to matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary while leading to sharp results. It is shown that, in compressed sensing, δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, or δ_{tk}^A < √((t-1)/t) for any given constant t ≥ 4/3 guarantees the exact recovery of all k-sparse signals in the noiseless case through the constrained ℓ1 minimization, and similarly in affine rank minimization δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, or δ_{tr}^M < √((t-1)/t) ensures the exact reconstruction of all matrices with rank at most r in the noiseless case via the constrained nuclear norm minimization. Moreover, for any ε > 0, δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t-1)/t) + ε is not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t-1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t-1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. 
For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in the chapter also have implications for other related statistical problems. An application to estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections. For the third part of the thesis, we consider another setting of low-rank matrix completion. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed. We provide theoretical justification for the proposed SMC method and derive lower bounds for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extents of genomic measurements, which enables us to construct more accurate prediction rules for ovarian cancer survival.
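    The constrained ℓ1 minimization underlying the first part of the thesis can be illustrated as a linear program (a generic sketch, not the thesis's code; the restricted isometry conditions are not checked explicitly here, but a Gaussian sensing matrix of this size satisfies them with high probability for 3-sparse signals).

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, b):
    # min ||x||_1 subject to A x = b, via the standard split x = u - v, u, v >= 0
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    z = res.x
    return z[:n] - z[n:]

# Exact recovery of a sparse signal from underdetermined linear measurements
rng = np.random.default_rng(1)
m, n, k = 30, 60, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_rec = l1_minimize(A, A @ x_true)
```

    Despite having only 30 measurements for 60 unknowns, the ℓ1 program recovers the 3-sparse signal exactly (up to solver tolerance), which is the phenomenon the sharp restricted isometry conditions quantify.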

  11. 25 CFR Appendix A to Subpart C - IRR High Priority Project Scoring Matrix

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...—IRR High Priority Project Scoring Matrix Score 10 5 3 1 0 Accident and fatality rate for candidate route 1 Severe X Moderate Minimal No accidents. Years since last IRR construction project completed... elements Addresses 1 element. 1 National Highway Traffic Safety Board standards. 2 Total funds requested...

  12. An Overview of Materials Structures for Extreme Environments Efforts for 2015 SBIR Phases I and II

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung D.; Steele, Gynelle C.

    2017-01-01

    Technological innovation is the overall focus of NASA's Small Business Innovation Research (SBIR) program. The program invests in the development of innovative concepts and technologies to help NASA's mission directorates address critical research and development needs for Agency projects. This report highlights innovative SBIR 2015 Phase I and II projects that specifically address areas in Materials and Structures for Extreme Environments, one of six core competencies at NASA Glenn Research Center. Each article describes an innovation, defines its technical objective, and highlights NASA applications as well as commercial and industrial applications. Ten technologies are featured: metamaterials-inspired aerospace structures, metallic joining to advanced ceramic composites, multifunctional polyolefin matrix composite structures, integrated reacting fluid dynamics and predictive materials degradation models for propulsion system conditions, lightweight inflatable structural airlock (LISA), copolymer materials for fused deposition modeling 3-D printing of nonstandard plastics, Type II strained layer superlattice materials development for space-based focal plane array applications, hydrogenous polymer-regolith composites for radiation-shielding materials, a ceramic matrix composite environmental barrier coating durability model, and advanced composite truss printing for large solar array structures. This report serves as an opportunity for NASA engineers, researchers, program managers, and other personnel to learn about innovations in this technology area as well as possibilities for collaboration with innovative small businesses that could benefit NASA programs and projects.

  13. ARO/ARL Site Visit Project Review Presentation

    DTIC Science & Technology

    2012-07-01

    [Presentation slide fragments (7/11/2012):] hemodynamic activity. Challenges: complicated equipment, gradient artifact, BCG artifact, bias field, auditory noise, and how to combine the data. Our solutions: separating EEG+BCG from BCG (Goldman et al., Neuroimage 2009). Observing latent... Discrimination results: model versus human subjects; the neurometric curve is NOT optimized to match the psychometric curve.

  14. An efficient hydro-mechanical model for coupled multi-porosity and discrete fracture porous media

    NASA Astrophysics Data System (ADS)

    Yan, Xia; Huang, Zhaoqin; Yao, Jun; Li, Yang; Fan, Dongyan; Zhang, Kai

    2018-02-01

    In this paper, a numerical model is developed for coupled analysis of deforming fractured porous media with multiscale fractures. In this model, the macro-fractures are modeled explicitly by the embedded discrete fracture model, and the supporting effects of fluid and fillings in these fractures are represented explicitly in the geomechanics model. On the other hand, matrix and micro-fractures are modeled by a multi-porosity model, which aims to accurately describe the transient matrix-fracture fluid exchange process. A stabilized extended finite element method scheme is developed based on the polynomial pressure projection technique to address the displacement oscillation along macro-fracture boundaries. After that, the mixed space discretization and modified fixed stress sequential implicit methods based on non-matching grids are applied to solve the coupling model. Finally, we demonstrate the accuracy and application of the proposed method to capture the coupled hydro-mechanical impacts of multiscale fractures on fractured porous media.

  15. Image Matrix Processor for Volumetric Computations Final Report CRADA No. TSB-1148-95

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberson, G. Patrick; Browne, Jolyon

    The development of an Image Matrix Processor (IMP) was proposed that would provide an economical means to perform rapid ray-tracing processes on volume "Giga Voxel" data sets. This was a multi-phased project. The objective of the first phase of the IMP project was to evaluate the practicality of implementing a workstation-based Image Matrix Processor for use in volumetric reconstruction and rendering using hardware simulation techniques. Additionally, ARACOR and LLNL worked together to identify and pursue further funding sources to complete a second phase of this project.

  16. Short Course on Implementation of Zone Technology in the Repair and Overhaul Environment

    DTIC Science & Technology

    1996-04-01

    [Excerpted slide and text fragments:] Fig. 9-3, nature of zone management: the management approach varies from function-oriented to project, project/matrix, and project organization across the pier zone, systems, pier/dry dock, and staging zones. ...intractable problems that currently exist. Nature can give us many clues, if only we could harness the material that makes the dolphin's outer shell so smooth... the natural effect of requiring peak manning and confined outfitting schedules, through the application of system-oriented logic to actual work accom...

  17. Projection matrix acquisition for cone-beam computed tomography iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Shi, Wenlong; Zhang, Caixin; Gao, Zongzhao

    2017-02-01

    The projection matrix is an essential and time-consuming part of computed tomography (CT) iterative reconstruction. In this article a novel calculation algorithm is proposed to quickly acquire the three-dimensional (3D) projection matrix for cone-beam CT (CBCT). The CT volume to be reconstructed is treated as three orthogonal sets of equally spaced, parallel planes rather than as individual voxels. After the intersections of each ray with the voxel surfaces are obtained, the intersection coordinates are compared with the voxel vertices to obtain the index values of the voxels the ray traverses. Without considering the ray's slope relative to each voxel, only the positions of two points need to be compared. Finally, computer simulations are used to verify the effectiveness of the algorithm.
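    The plane-intersection idea can be sketched as follows (our own illustrative reconstruction, not the authors' code): the ray's parametric crossings with the three orthogonal plane sets are merged and sorted, and each segment's midpoint identifies the traversed voxel by a simple coordinate comparison, yielding one row of the projection matrix as (voxel index, intersection length) pairs.

```python
import numpy as np

def ray_voxel_weights(p0, p1, n_vox, spacing):
    # Intersection lengths of the segment p0 -> p1 with a regular voxel grid.
    # The grid is viewed as three orthogonal sets of equally spaced planes;
    # no per-voxel ray-slope test is needed, only position comparisons.
    p0 = np.asarray(p0, float)
    d = np.asarray(p1, float) - p0
    ts = [np.array([0.0, 1.0])]
    for ax in range(3):
        if abs(d[ax]) > 1e-12:
            planes = np.arange(n_vox[ax] + 1) * spacing[ax]
            t = (planes - p0[ax]) / d[ax]       # parametric plane crossings
            ts.append(t[(t > 0.0) & (t < 1.0)])
    t_all = np.unique(np.concatenate(ts))       # merged, sorted crossing points
    length = np.linalg.norm(d)
    weights = {}
    for ta, tb in zip(t_all[:-1], t_all[1:]):
        mid = p0 + 0.5 * (ta + tb) * d          # midpoint identifies the voxel
        idx = tuple(int(np.floor(mid[k] / spacing[k])) for k in range(3))
        if all(0 <= idx[k] < n_vox[k] for k in range(3)):
            weights[idx] = weights.get(idx, 0.0) + (tb - ta) * length
    return weights
```

    Each returned weight is the chord length of the ray through that voxel, i.e. one nonzero entry of the system matrix used by iterative reconstruction.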

  18. Evaluation of the Matrix Project. Interchange 77.

    ERIC Educational Resources Information Center

    McIvor, Gill; Moodie, Kristina

    The Matrix Project is a program that has been established in central Scotland with the aim of reducing the risk of offending and anti-social behavior among vulnerable children. The project provides a range of services to children between eight and 11 years of age who are at risk in the local authority areas of Clackmannanshire, Falkirk and…

  19. Merging Multi-model CMIP5/PMIP3 Past-1000 Ensemble Simulations with Tree Ring Proxy Data by Optimal Interpolation Approach

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Luo, Yong; Xing, Pei; Nie, Suping; Tian, Qinhua

    2015-04-01

    Two sets of gridded annual-mean surface air temperatures over the Northern Hemisphere for the past millennium were constructed by employing the optimal interpolation (OI) method to merge tree-ring proxy records with simulations from CMIP5 (the fifth phase of the Coupled Model Intercomparison Project). The OI algorithm accounts for uncertainties in both the proxy reconstructions and the model simulations. To better preserve physically coordinated features and the spatial-temporal completeness of climate variability in the 7 model results, we performed an Empirical Orthogonal Function (EOF) analysis to truncate the ensemble mean field, which serves as the first guess (background field) for OI. 681 temperature-sensitive tree-ring chronologies were collected and screened from the International Tree Ring Data Bank (ITRDB) and the Past Global Changes (PAGES-2k) project. First, two methods (variance matching and linear regression) were employed to calibrate the tree-ring chronologies individually against instrumental data (CRUTEM4v); we also removed the bias of both the background field and the proxy records relative to the instrumental dataset. Second, the time-varying background error covariance matrix (B) and a static "observation" error covariance matrix (R) were calculated for the OI framework. In our scheme, the matrix B is calculated locally, and "observation" error covariances are partially represented in R (covariances between pairs of tree-ring sites very close to each other are counted), departing from the traditional assumption that R is diagonal. Comparison of our results shows that the regionally averaged series are not sensitive to the choice of calibration method. Quantile-quantile plots indicate that the regional climatologies based on both methods agree better with the PAGES-2k regional reconstructions during the 20th-century warming period than during the Little Ice Age (LIA). 
Larger volcanic cooling responses over Asia and Europe during the recent millennium are detected in our datasets than are revealed in the regional reconstructions from the PAGES-2k network. Verification experiments show that the merging approach reconciles the proxy data and the model ensemble simulations in an optimal way (with smaller errors than either alone). Further research is needed to improve the error estimation for both.
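    The OI analysis step itself is the standard best-linear-unbiased-estimate update; a generic sketch (not the authors' localized, EOF-truncated implementation) is:

```python
import numpy as np

def oi_update(xb, B, y, R, H):
    # Optimal interpolation / BLUE analysis:
    #   xa = xb + K (y - H xb),  with gain  K = B H^T (H B H^T + R)^(-1)
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    xa = xb + K @ (y - H @ xb)
    Pa = (np.eye(len(xb)) - K @ H) @ B   # analysis-error covariance
    return xa, Pa
```

    With equal background and observation error variances the analysis falls halfway between the background and the observation, which is the intuition behind weighting proxies and model ensemble by their respective uncertainties.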

  20. A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary

    NASA Astrophysics Data System (ADS)

    Gillis, Nicolas; Luce, Robert

    2018-01-01

    A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression and self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection on the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.
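    The fast gradient method of the paper is more involved; as a point of contrast, here is a minimal sketch of the classical successive projection algorithm (SPA), another separability-based way to identify the conic basis columns (our illustration, not the paper's algorithm).

```python
import numpy as np

def spa(X, r):
    # Successive Projection Algorithm: greedily pick the column with the
    # largest residual norm, then project all columns onto the orthogonal
    # complement of the pick; under separability (and mild conditions on X)
    # this recovers the r conic basis columns.
    R = X.astype(float).copy()
    cols = []
    for _ in range(r):
        j = int(np.argmax((R * R).sum(axis=0)))
        cols.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)       # deflate the selected direction
    return cols
```

    SPA is fast but less robust to noise than the self-dictionary regression formulations the paper studies, which is precisely the trade-off motivating their smooth optimization model.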

  1. Appendix 2: Risk-based framework and risk case studies. Risk Assessment for two bird species in northern Wisconsin.

    Treesearch

    Megan M. Friggens; Stephen N. Matthews

    2012-01-01

    Species distribution models for 147 bird species have been derived using climate, elevation, and distribution of current tree species as potential predictors (Matthews et al. 2011). In this case study, a risk matrix was developed for two bird species (fig. A2-5), with projected change in bird habitat (the x axis) based on models of changing suitable habitat resulting...

  2. Application of the Backward-Smoothing Extended Kalman Filter to Attitude Estimation and Prediction using Radar Observations

    DTIC Science & Technology

    2009-06-01

    projection, the measurement matrix is of rank 3. This is known as the rank theorem and enables the matrix Y to be factored into the product of two ... Interface ... 6.5.5. Bistatic Radar Measurements ... 6.5.6. Computer Aided Image-Model Matching ... 6.5.7. Tomasi-Kanade Factorization Method ... the vectors of an orthonormal basis satisfy the scalar-product relations (2, p. 239): i·i = j·j = k·k = 1, i·j = i·k = j·k = 0 (2.1)

  3. Climate change threatens polar bear populations: a stochastic demographic analysis.

    PubMed

    Hunter, Christine M; Caswell, Hal; Runge, Michael C; Regehr, Eric V; Amstrup, Steve C; Stirling, Ian

    2010-10-01

    The polar bear (Ursus maritimus) depends on sea ice for feeding, breeding, and movement. Significant reductions in Arctic sea ice are forecast to continue because of climate warming. We evaluated the impacts of climate change on polar bears in the southern Beaufort Sea by means of a demographic analysis, combining deterministic, stochastic, environment-dependent matrix population models with forecasts of future sea ice conditions from IPCC general circulation models (GCMs). The matrix population models classified individuals by age and breeding status; mothers and dependent cubs were treated as units. Parameter estimates were obtained from a capture-recapture study conducted from 2001 to 2006. Candidate statistical models allowed vital rates to vary with time and as functions of a sea ice covariate. Model averaging was used to produce the vital rate estimates, and a parametric bootstrap procedure was used to quantify model selection and parameter estimation uncertainty. Deterministic models projected population growth in years with more extensive ice coverage (2001-2003) and population decline in years with less ice coverage (2004-2005). LTRE (life table response experiment) analysis showed that the reduction in lambda in years with low sea ice was due primarily to reduced adult female survival, and secondarily to reduced breeding. A stochastic model with two environmental states, good and poor sea ice conditions, projected a declining stochastic growth rate, log lambdas, as the frequency of poor ice years increased. The observed frequency of poor ice years since 1979 would imply log lambdas approximately - 0.01, which agrees with available (albeit crude) observations of population size. The stochastic model was linked to a set of 10 GCMs compiled by the IPCC; the models were chosen for their ability to reproduce historical observations of sea ice and were forced with "business as usual" (A1B) greenhouse gas emissions. 
The resulting stochastic population projections showed drastic declines in the polar bear population by the end of the 21st century. These projections were instrumental in the decision to list the polar bear as a threatened species under the U.S. Endangered Species Act.
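The stochastic projection machinery behind this kind of analysis can be illustrated with a toy two-environment model. The matrices and the probability of a poor-ice year below are invented for illustration, not the paper's estimated vital rates:

```python
import numpy as np

# Hypothetical 3-stage projection matrices (illustrative numbers, NOT the
# paper's estimates): A_good for good-ice years, A_poor for poor-ice years
# with lower fecundity and adult survival.
A_good = np.array([[0.0, 0.0, 0.80],
                   [0.60, 0.0, 0.0],
                   [0.0, 0.80, 0.95]])
A_poor = np.array([[0.0, 0.0, 0.30],
                   [0.50, 0.0, 0.0],
                   [0.0, 0.60, 0.80]])

def stochastic_log_growth(p_poor, n_years=20000, seed=1):
    """Estimate the stochastic growth rate log lambda_s by simulating an
    i.i.d. random sequence of good/poor years and averaging the one-step
    log growth of the population vector."""
    rng = np.random.default_rng(seed)
    n = np.ones(3) / 3.0
    total = 0.0
    for _ in range(n_years):
        A = A_poor if rng.random() < p_poor else A_good
        n = A @ n
        s = n.sum()
        total += np.log(s)
        n /= s              # renormalize so the population vector stays O(1)
    return total / n_years
```

As in the abstract, log lambda_s declines as the frequency of poor-ice years increases, crossing from growth to decline.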

  4. Climate change threatens polar bear populations: A stochastic demographic analysis

    USGS Publications Warehouse

    Hunter, C.M.; Caswell, H.; Runge, M.C.; Regehr, E.V.; Amstrup, Steven C.; Stirling, I.

    2010-01-01

    The polar bear (Ursus maritimus) depends on sea ice for feeding, breeding, and movement. Significant reductions in Arctic sea ice are forecast to continue because of climate warming. We evaluated the impacts of climate change on polar bears in the southern Beaufort Sea by means of a demographic analysis, combining deterministic, stochastic, environment-dependent matrix population models with forecasts of future sea ice conditions from IPCC general circulation models (GCMs). The matrix population models classified individuals by age and breeding status; mothers and dependent cubs were treated as units. Parameter estimates were obtained from a capture-recapture study conducted from 2001 to 2006. Candidate statistical models allowed vital rates to vary with time and as functions of a sea ice covariate. Model averaging was used to produce the vital rate estimates, and a parametric bootstrap procedure was used to quantify model selection and parameter estimation uncertainty. Deterministic models projected population growth in years with more extensive ice coverage (2001-2003) and population decline in years with less ice coverage (2004-2005). LTRE (life table response experiment) analysis showed that the reduction in λ in years with low sea ice was due primarily to reduced adult female survival, and secondarily to reduced breeding. A stochastic model with two environmental states, good and poor sea ice conditions, projected a declining stochastic growth rate, log λs, as the frequency of poor ice years increased. The observed frequency of poor ice years since 1979 would imply log λs ≈ -0.01, which agrees with available (albeit crude) observations of population size. The stochastic model was linked to a set of 10 GCMs compiled by the IPCC; the models were chosen for their ability to reproduce historical observations of sea ice and were forced with "business as usual" (A1B) greenhouse gas emissions. 
The resulting stochastic population projections showed drastic declines in the polar bear population by the end of the 21st century. These projections were instrumental in the decision to list the polar bear as a threatened species under the U.S. Endangered Species Act. © 2010 by the Ecological Society of America.

  5. Projection methods for the numerical solution of Markov chain models

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span(v, Av, ..., A^(m-1)v). These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
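The fixed point these methods target, the stationary distribution π = πP of a row-stochastic chain, can be illustrated with the simplest projection-free baseline, power iteration; Krylov methods such as Arnoldi accelerate exactly this computation by working in the subspace span(v, Av, ..., A^(m-1)v). A minimal sketch:

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=100000):
    """Power iteration for pi = pi P, with P row-stochastic.

    Plain iterative baseline; the Krylov projection methods in the record
    approximate the same fixed point from a small subspace and converge
    much faster on large chains."""
    n = P.shape[0]
    pi = np.ones(n) / n
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt
    return pi

# Example: a 3-state birth-death chain.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
```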

  6. Design of a projection display screen with vanishing color shift for rear-projection HDTV

    NASA Astrophysics Data System (ADS)

    Liu, Xiu; Zhu, Jin-lin

    1996-09-01

    Using bi-convex cylindrical lenses arranged in a matrix structure, transmissive projection display screens with high contrast and wide viewing angle have been widely used in large rear-projection TVs and video projectors; however, such screens exhibit an inherent color shift that complicates the in-line adjustment of the RGB projection tubes. Based on a light-beam tracing method, general software for designing projection display screens has been developed, and a computer model of vanishing color shift for rear-projection HDTV has been completed. This paper discusses a practical design method for eliminating the color-shift defect and describes the relations between the primary optical parameters of the display screen and the relative geometry of the lens surfaces. The distribution of optical gain over viewing angle and its influence on engineering design are briefly analyzed.

  7. MOVES-Matrix and distributed computing for microscale line source dispersion analysis.

    PubMed

    Liu, Haobing; Xu, Xiaodan; Rodgers, Michael O; Xu, Yanzhi Ann; Guensler, Randall L

    2017-07-01

    MOVES and AERMOD are the U.S. Environmental Protection Agency's recommended models for use in project-level transportation conformity and hot-spot analysis. However, the structure and algorithms involved in running MOVES make analyses cumbersome and time-consuming. Likewise, the modeling setup process, including extensive data requirements and required input formats, in AERMOD lead to a high potential for analysis error in dispersion modeling. This study presents a distributed computing method for line source dispersion modeling that integrates MOVES-Matrix, a high-performance emission modeling tool, with the microscale dispersion models CALINE4 and AERMOD. MOVES-Matrix was prepared by iteratively running MOVES across all possible iterations of vehicle source-type, fuel, operating conditions, and environmental parameters to create a huge multi-dimensional emission rate lookup matrix. AERMOD and CALINE4 are connected with MOVES-Matrix in a distributed computing cluster using a series of Python scripts. This streamlined system built on MOVES-Matrix generates exactly the same emission rates and concentration results as using MOVES with AERMOD and CALINE4, but the approach is more than 200 times faster than using the MOVES graphical user interface. Because AERMOD requires detailed meteorological input, which is difficult to obtain, this study also recommends using CALINE4 as a screening tool for identifying the potential area that may exceed air quality standards before using AERMOD (and identifying areas that are exceedingly unlikely to exceed air quality standards). CALINE4 worst case method yields consistently higher concentration results than AERMOD for all comparisons in this paper, as expected given the nature of the meteorological data employed. The paper demonstrates a distributed computing method for line source dispersion modeling that integrates MOVES-Matrix with the CALINE4 and AERMOD. 
This streamlined system generates exactly the same emission rates and concentration results as the traditional approach of using MOVES with AERMOD and CALINE4, which are regulatory models approved by the U.S. EPA for conformity analysis, but it is more than 200 times faster than running the MOVES model directly. We highlight the potentially significant benefit of using CALINE4 as a screening tool to identify areas that may exceed air quality standards before applying AERMOD, which requires much more meteorological input than CALINE4.
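The idea behind MOVES-Matrix, precompute emission rates over a grid of operating conditions once, then replace each model run by an array lookup, can be sketched with a toy two-axis matrix (bin values and rates below are invented, not MOVES outputs):

```python
import numpy as np

# Axes of a tiny hypothetical lookup matrix; the real MOVES-Matrix spans
# vehicle source types, fuels, operating modes, temperatures, etc.
speeds = np.array([10, 30, 50, 70])      # mph bins
temps = np.array([0, 20, 40])            # deg C bins

# Precomputed emission rates, one per (speed, temp) bin -- made-up numbers.
rates = np.array([[3.1, 2.9, 3.4],
                  [2.0, 1.8, 2.2],
                  [1.6, 1.5, 1.9],
                  [1.9, 1.7, 2.3]])      # g/mile

def lookup_rate(speed, temp):
    """Nearest-bin lookup: a constant-time array read instead of a model run."""
    i = np.abs(speeds - speed).argmin()
    j = np.abs(temps - temp).argmin()
    return rates[i, j]
```

The speedup reported in the record comes from amortizing the expensive model runs into the one-time construction of this multi-dimensional matrix.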

  8. Kraus operator solutions to a fermionic master equation describing a thermal bath and their matrix representation

    NASA Astrophysics Data System (ADS)

    Xiang-Guo, Meng; Ji-Suo, Wang; Hong-Yi, Fan; Cheng-Wei, Xia

    2016-04-01

    We solve the fermionic master equation for a thermal bath to obtain its explicit Kraus operator solutions via the fermionic state approach. The normalization condition of the Kraus operators is proved. The matrix representation for these solutions is obtained, which disagrees with the result given in the book by Nielsen and Chuang [Quantum Computation and Quantum Information, Cambridge University Press, 2000]. As special cases, we also present the Kraus operator solutions to master equations describing the amplitude-decay model and the diffusion process at finite temperature. Project supported by the National Natural Science Foundation of China (Grant No. 11347026), the Natural Science Foundation of Shandong Province, China (Grant Nos. ZR2013AM012 and ZR2012AM004), and the Research Fund for the Doctoral Program and Scientific Research Project of Liaocheng University, Shandong Province, China.

  9. Display technologies; Proceedings of the Meeting, National Chiao Tung Univ., Hsinchu, Taiwan, Dec. 17, 18, 1992

    NASA Astrophysics Data System (ADS)

    Chen, Shu-Hsia; Wu, Shin-Tson

    1992-10-01

    A broad range of interdisciplinary subjects related to display technologies is addressed, with emphasis on high-definition displays, CRTs, projection displays, materials for display application, flat-panel displays, display modeling, and polymer-dispersed liquid crystals. Particular attention is given to a CRT approach to high-definition television display, a superhigh-resolution electron gun for color display CRT, a review of active-matrix liquid-crystal displays, color design for LCD parameters in projection and direct-view applications, annealing effects on ZnS:TbF3 electroluminescent devices prepared by RF sputtering, polycrystalline silicon thin film transistors with low-temperature gate dielectrics, refractive index dispersions of liquid crystals, a new rapid-response polymer-dispersed liquid-crystal material, and improved liquid crystals for active-matrix displays using high-tilt-orientation layers. (No individual items are abstracted in this volume)

  10. Project Solo; Newsletter Number Twenty.

    ERIC Educational Resources Information Center

    Pittsburgh Univ., PA. Project Solo.

    Three Project Solo modules are presented. They are designed to teach the concepts of elementary matrix operation, matrix multiplication, and finite-state automata. Together with the module on communication matrices from Newsletter #17 they form a well motivated but structured path to expertise in this area. (JY)

  11. Stochastic model of the NASA/MSFC ground facility for large space structures with uncertain parameters: The maximum entropy approach, part 2

    NASA Technical Reports Server (NTRS)

    Hsia, Wei Shen

    1989-01-01

    A validated technology data base is being developed in the areas of control/structures interaction, deployment dynamics, and system performance for Large Space Structures (LSS). A Ground Facility (GF), in which the dynamics and control systems being considered for LSS applications can be verified, was designed and built. One of the important aspects of the GF is to verify the analytical model for the control system design. The procedure is to describe the control system mathematically as well as possible, then to perform tests on the control system, and finally to factor those results into the mathematical model. The reduction of the order of a higher-order control plant was addressed. The computer program for the maximum entropy principle adopted in Hyland's MEOP method was improved and tested against a benchmark problem, yielding a very close match. Two methods of model reduction were examined: Wilson's model reduction method and Hyland's optimal projection (OP) method. Design of a computer program for Hyland's OP method was attempted. Because of the difficulty encountered at the stage where a special matrix factorization technique is needed to obtain the required projection matrix, the program succeeded in finding the Linear Quadratic Gaussian solution but not beyond. Numerical results are presented, along with computer programs that employed ORACLS.

  12. Fast mean and variance computation of the diffuse sound transmission through finite-sized thick and layered wall and floor systems

    NASA Astrophysics Data System (ADS)

    Decraene, Carolina; Dijckmans, Arne; Reynders, Edwin P. B.

    2018-05-01

    A method is developed for computing the mean and variance of the diffuse field sound transmission loss of finite-sized layered wall and floor systems that consist of solid, fluid and/or poroelastic layers. This is achieved by coupling a transfer matrix model of the wall or floor to statistical energy analysis subsystem models of the adjacent room volumes. The modal behavior of the wall is approximately accounted for by projecting the wall displacement onto a set of sinusoidal lateral basis functions. This hybrid modal transfer matrix-statistical energy analysis method is validated on multiple wall systems: a thin steel plate, a polymethyl methacrylate panel, a thick brick wall, a sandwich panel, a double-leaf wall with poro-elastic material in the cavity, and a double glazing. The predictions are compared with experimental data and with results obtained using alternative prediction methods such as the transfer matrix method with spatial windowing, the hybrid wave based-transfer matrix method, and the hybrid finite element-statistical energy analysis method. These comparisons confirm the prediction accuracy of the proposed method and the computational efficiency against the conventional hybrid finite element-statistical energy analysis method.

  13. Bayesian source term determination with unknown covariance of measurements

    NASA Astrophysics Data System (ADS)

    Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav

    2017-04-01

    Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimating the source term in the conventional linear inverse problem y = Mx, where the relationship between the vector of observations y and the unknown source term x is described using the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^(-1) (y - Mx) + x^T B^(-1) x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise from different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix R of the likelihood is also unknown. We consider two potential choices for the structure of the matrix R: first, a diagonal matrix, and second, a locally correlated structure using information on the topology of the measuring network. Since inference in this model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014, Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
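For fixed covariances, the quadratic objective above has a closed-form minimizer, x = (M^T R^(-1) M + B^(-1))^(-1) M^T R^(-1) y, which is the baseline that the variational Bayes treatment generalizes by also inferring R and B. A minimal sketch with a hypothetical function name:

```python
import numpy as np

def regularized_source_term(M, y, R, B):
    """Minimize (y - Mx)^T R^-1 (y - Mx) + x^T B^-1 x in closed form.

    With B = tau * I this is Tikhonov regularization; the record's Bayesian
    approach instead treats R and B as unknown and infers them iteratively."""
    Ri = np.linalg.inv(R)
    Bi = np.linalg.inv(B)
    return np.linalg.solve(M.T @ Ri @ M + Bi, M.T @ Ri @ y)
```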

  14. Solving large tomographic linear systems: size reduction and error estimation

    NASA Astrophysics Data System (ADS)

    Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust

    2014-10-01

    We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
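The projection-and-rejection step can be sketched for a single cluster (the paper first clusters rows so the projection barely affects sparsity; the function name and the i.i.d. noise model here are illustrative):

```python
import numpy as np

def project_and_threshold(A, d, sigma, snr_min=1.0):
    """Project data d onto the top singular subspace of the (sub)matrix A and
    keep only components whose signal-to-noise ratio exceeds snr_min.

    Single-cluster sketch of the reduction described in the record; noise is
    assumed i.i.d. with standard deviation sigma, so projecting onto the
    orthonormal columns of U leaves the noise level unchanged."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    proj = U.T @ d                  # data expressed in the singular basis
    snr = np.abs(proj) / sigma      # per-component signal-to-noise ratio
    keep = snr >= snr_min
    return proj[keep], s[keep], Vt[keep]
```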

  15. Sparse PCA with Oracle Property.

    PubMed

    Gu, Quanquan; Wang, Zhaoran; Liu, Han

    In this paper, we study the estimation of the k-dimensional sparse principal subspace of a covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains an s/n statistical rate of convergence, with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator, which enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets.

  16. Sparse PCA with Oracle Property

    PubMed Central

    Gu, Quanquan; Wang, Zhaoran; Liu, Han

    2014-01-01

    In this paper, we study the estimation of the k-dimensional sparse principal subspace of a covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains an s/n statistical rate of convergence, with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator, which enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets. PMID:25684971

  17. Coordination of Scheduling Clinical Externship or Clinical Practice Experiences for Students in Physical Therapy Educational Programs.

    ERIC Educational Resources Information Center

    Patterson, Robert K.; Kass, Susan H.

    A project to coordinate the scheduling of allied health occupations students for clinical practice or externship experiences in Southeast Florida is described. A model clinical facility utilization and time schedule matrix was developed for four programs: the physical therapy programs at Florida International University (FIU) and the University of…

  18. Factors associated with continuance commitment to FAA matrix teams.

    DOT National Transportation Integrated Search

    1993-11-01

    Several organizations within the FAA employ matrix teams to achieve cross-functional coordination. Matrix team members typically represent different organizational functions required for project accomplishment (e.g., research and development, enginee...

  19. Modelling dynamic fronto-parietal behaviour during minimally invasive surgery--a Markovian trip distribution approach.

    PubMed

    Leff, Daniel Richard; Orihuela-Espina, Felipe; Leong, Julian; Darzi, Ara; Yang, Guang-Zhong

    2008-01-01

    Learning to perform Minimally Invasive Surgery (MIS) requires considerable attention, concentration and spatial ability. Theoretically, this leads to activation in executive control (prefrontal) and visuospatial (parietal) centres of the brain. A novel approach is presented in this paper for analysing the flow of fronto-parietal haemodynamic behaviour and the associated variability between subjects. Serially acquired functional Near Infrared Spectroscopy (fNIRS) data from fourteen laparoscopic novices at different stages of learning is projected into a low-dimensional 'geospace', where sequentially acquired data is mapped to different locations. A trip distribution matrix based on consecutive directed trips between locations in the geospace reveals confluent fronto-parietal haemodynamic changes and a gravity model is applied to populate this matrix. To model global convergence in haemodynamic behaviour, a Markov chain is constructed and by comparing sequential haemodynamic distributions to the Markov's stationary distribution, inter-subject variability in learning an MIS task can be identified.

  20. Experimental placement of stone matrix asphalt (SMA) : project F-STP-017P(89)E Auburn, Court Street.

    DOT National Transportation Integrated Search

    2003-04-01

    In October 1999 the Maine Department of Transportation utilized stone matrix asphalt to resurface an : intersection in Auburn, Maine. The experimental placement of SMA was part of a pavement project F-STP-017P(89)E. The intersection is at the junctio...

  1. Face verification with balanced thresholds.

    PubMed

    Yan, Shuicheng; Xu, Dong; Tang, Xiaoou

    2007-01-01

    The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.

  2. A projection operator method for the analysis of magnetic neutron form factors

    NASA Astrophysics Data System (ADS)

    Kaprzyk, S.; Van Laar, B.; Maniawski, F.

    1981-03-01

    A set of projection operators in matrix form has been derived on the basis of decomposition of the spin density into a series of fully symmetrized cubic harmonics. This set of projection operators allows a formulation of the Fourier analysis of magnetic form factors in a convenient way. The presented method is capable of checking the validity of various theoretical models used for spin density analysis up to now. The general formalism is worked out in explicit form for the fcc and bcc structures and deals with that part of spin density which is contained within the sphere inscribed in the Wigner-Seitz cell. This projection operator method has been tested on the magnetic form factors of nickel and iron.

  3. An Omnidirectional Vision Sensor Based on a Spherical Mirror Catadioptric System.

    PubMed

    Barone, Sandro; Carulli, Marina; Neri, Paolo; Paoli, Alessandro; Razionale, Armando Viviano

    2018-01-31

    The combination of mirrors and lenses, which defines a catadioptric sensor, is widely used in the computer vision field. The definition of a catadioptric sensor is based on three main features: hardware setup, projection modelling and calibration process. In this paper, a complete description of these aspects is given for an omnidirectional sensor based on a spherical mirror. The projection model of a catadioptric system can be described by the forward projection task (FP, from 3D scene point to 2D pixel coordinates) and the backward projection task (BP, from 2D coordinates to 3D direction of the incident light). The forward projection of non-central catadioptric vision systems, typically obtained by using curved mirrors, is usually modelled by using a central approximation and/or by adopting iterative approaches. In this paper, an analytical closed-form solution to compute both forward and backward projection for a non-central catadioptric system with a spherical mirror is presented. In particular, the forward projection is reduced to a 4th-order polynomial by determining the reflection point on the mirror surface through the intersection between a sphere and an ellipse. A matrix format of the implemented models, suitable for fast point-cloud handling, is also described. A robust calibration procedure is also proposed and applied to calibrate a catadioptric sensor by determining the mirror radius and center with respect to the camera.

  4. An Omnidirectional Vision Sensor Based on a Spherical Mirror Catadioptric System

    PubMed Central

    Barone, Sandro; Carulli, Marina; Razionale, Armando Viviano

    2018-01-01

    The combination of mirrors and lenses, which defines a catadioptric sensor, is widely used in the computer vision field. The definition of a catadioptric sensor is based on three main features: hardware setup, projection modelling and calibration process. In this paper, a complete description of these aspects is given for an omnidirectional sensor based on a spherical mirror. The projection model of a catadioptric system can be described by the forward projection task (FP, from 3D scene point to 2D pixel coordinates) and the backward projection task (BP, from 2D coordinates to 3D direction of the incident light). The forward projection of non-central catadioptric vision systems, typically obtained by using curved mirrors, is usually modelled by using a central approximation and/or by adopting iterative approaches. In this paper, an analytical closed-form solution to compute both forward and backward projection for a non-central catadioptric system with a spherical mirror is presented. In particular, the forward projection is reduced to a 4th-order polynomial by determining the reflection point on the mirror surface through the intersection between a sphere and an ellipse. A matrix format of the implemented models, suitable for fast point-cloud handling, is also described. A robust calibration procedure is also proposed and applied to calibrate a catadioptric sensor by determining the mirror radius and center with respect to the camera. PMID:29385051

  5. JTEC panel on display technologies in Japan

    NASA Technical Reports Server (NTRS)

    Tannas, Lawrence E., Jr.; Glenn, William E.; Credelle, Thomas; Doane, J. William; Firester, Arthur H.; Thompson, Malcolm

    1992-01-01

    This report is one in a series of reports that describes research and development efforts in Japan in the area of display technologies. The following are included in this report: flat panel displays (technical findings, liquid crystal display development and production, large flat panel displays (FPD's), electroluminescent displays and plasma panels, infrastructure in Japan's FPD industry, market and projected sales, and new a-Si active matrix liquid crystal display (AMLCD) factory); materials for flat panel displays (liquid crystal materials, and light-emissive display materials); manufacturing and infrastructure of active matrix liquid crystal displays (manufacturing logistics and equipment); passive matrix liquid crystal displays (LCD basics, twisted nematics LCD's, supertwisted nematic LCD's, ferroelectric LCD's, and a comparison of passive matrix LCD technology); active matrix technology (basic active matrix technology, investment environment, amorphous silicon, polysilicon, and commercial products and prototypes); and projection displays (comparison of Japanese and U.S. display research, and technical evaluation of work).

  6. Computation of ancestry scores with mixed families and unrelated individuals.

    PubMed

    Zhou, Yi-Hui; Marron, James S; Wright, Fred A

    2018-03-01

    The issue of robustness to family relationships in computing genotype ancestry scores such as eigenvector projections has received increased attention in genetic association studies, and is particularly challenging when sets of both unrelated individuals and closely related family members are included. The current standard is to compute loadings (left singular vectors) using unrelated individuals and to compute projected scores for remaining family members. However, projected ancestry scores from this approach suffer from shrinkage toward zero. We consider two main novel strategies: (i) matrix substitution based on decomposition of a target family-orthogonalized covariance matrix, and (ii) using family-averaged data to obtain loadings. We illustrate the performance via simulations, including resampling from 1000 Genomes Project data, and analysis of a cystic fibrosis dataset. The matrix substitution approach has similar performance to the current standard, but is simple and uses only a genotype covariance matrix, while the family-average method shows superior performance. Our approaches are accompanied by novel ancillary approaches that provide considerable insight, including individual-specific eigenvalue scree plots. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  7. Use of Taguchi design of experiments to optimize and increase robustness of preliminary designs

    NASA Technical Reports Server (NTRS)

    Carrasco, Hector R.

    1992-01-01

    The research performed this summer includes the completion of work begun last summer in support of the Air Launched Personnel Launch System parametric study, providing support on the development of the test matrices for the plume experiments in the Plume Model Investigation Team Project, and aiding in the conceptual design of a lunar habitat. After the conclusion of last year's Summer Program, the Systems Definition Branch continued with the Air Launched Personnel Launch System (ALPLS) study by running three experiments defined by L27 Orthogonal Arrays. Although the data were evaluated during the academic year, the analysis of variance and the final project review were completed this summer. The Plume Model Investigation Team (PLUMMIT) was formed by the Engineering Directorate to develop a consensus position on plume impingement loads and to validate plume flowfield models. In order to obtain a large number of individual correlated data sets for model validation, a series of plume experiments was planned. A preliminary 'full factorial' test matrix indicated that 73,024 jet firings would be necessary to obtain all of the information requested. As this was approximately 100 times more firings than the scheduled use of Vacuum Chamber A would permit, considerable effort was needed to reduce the test matrix and optimize it with respect to the specific objectives of the program. Part of the First Lunar Outpost Project deals with the Lunar Habitat. Requirements for the habitat include radiation protection, a safe haven for occasional solar flare storms, and an airlock module, as well as consumables to support 34 extravehicular activities during a 45-day mission. The objective of the proposed work was to collaborate with the Habitat Team on the development and reusability of the Logistics Modules.
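
    The run-count savings that motivate orthogonal arrays can be illustrated with the standard L9 construction (a hedged, generic sketch; the actual ALPLS study used L27 arrays, which are built analogously over GF(3)):

```python
import itertools
import numpy as np

# Hedged, generic sketch: a full factorial for 4 factors at 3 levels needs
# 3**4 = 81 runs; the L9 orthogonal array covers every pairwise level
# combination in only 9 runs. Standard GF(3) construction: columns
# a, b, a+b, a+2b (mod 3).
L9 = np.array([[a, b, (a + b) % 3, (a + 2 * b) % 3]
               for a, b in itertools.product(range(3), repeat=2)])

print(L9.shape)              # 9 runs x 4 factors, versus 81 full-factorial runs

# Orthogonality: every pair of columns contains all 9 ordered level pairs.
for i, j in itertools.combinations(range(4), 2):
    assert len({(int(x), int(y)) for x, y in zip(L9[:, i], L9[:, j])}) == 9
```

    The same balanced-coverage idea, at larger scale, is what lets an L27-based design replace a prohibitively large full factorial test matrix.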

  8. Modeling runoff and sediment yield from a terraced watershed using WEPP

    Treesearch

    Mary Carla McCullough; Dean E. Eisenhauer; Michael G. Dosskey

    2008-01-01

    The watershed version of WEPP (Water Erosion Prediction Project) was used to estimate 50-year runoff and sediment yields for a 291 ha watershed in eastern Nebraska that is 90% terraced and has no historical gage data. The watershed has a complex matrix of elements, including terraced and non-terraced subwatersheds, multiple combinations of soils and land...

  9. Development of an EMC3-EIRENE Synthetic Imaging Diagnostic

    NASA Astrophysics Data System (ADS)

    Meyer, William; Allen, Steve; Samuell, Cameron; Lore, Jeremy

    2017-10-01

    2D and 3D flow measurements are critical for validating numerical codes such as EMC3-EIRENE. Toroidal symmetry assumptions preclude tomographic reconstruction of 3D flows from single camera views. In addition, the resolution of the grids utilized in numerical code models can easily surpass the resolution of physical camera diagnostic geometries. For these reasons we have developed a Synthetic Imaging Diagnostic capability for forward projection comparisons of EMC3-EIRENE model solutions with the line integrated images from the Doppler Coherence Imaging diagnostic on DIII-D. The forward projection matrix is 2.8 Mpixel by 6.4 Mcells for the non-axisymmetric case we present. For flow comparisons, both simple line-integral and field-aligned component matrices must be calculated. The calculation of these matrices is a massive, embarrassingly parallel problem and is performed with a custom dispatcher that allows processing platforms to join mid-problem as they become available, or drop out if resources are needed for higher priority tasks. The matrices are handled using standard sparse matrix techniques. Prepared by LLNL under Contract DE-AC52-07NA27344. This material is based upon work supported by the U.S. DOE, Office of Science, Office of Fusion Energy Sciences. LLNL-ABS-734800.
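
    The sparse-matrix handling mentioned above can be sketched with standard tools. A hedged toy example (sizes and weights are illustrative, far smaller than the 2.8 Mpixel by 6.4 Mcell matrix in the abstract):

```python
import numpy as np
from scipy import sparse

# Hedged sketch: apply a sparse forward-projection matrix (pixels x model
# cells) to a vector of cell emissivities to get a synthetic image.
n_pixels, n_cells = 1000, 5000
rng = np.random.default_rng(1)

# Each pixel's line of sight intersects only a few cells -> sparse rows.
rows = rng.integers(0, n_pixels, size=20000)
cols = rng.integers(0, n_cells, size=20000)
weights = rng.random(20000)                      # path lengths through cells
P = sparse.coo_matrix((weights, (rows, cols)),
                      shape=(n_pixels, n_cells)).tocsr()

emissivity = rng.random(n_cells)                 # model solution per cell
synthetic_image = P @ emissivity                 # line-integrated image
```

    At production scale, only the nonzeros are stored, so the matrix-vector product costs O(nnz) rather than O(pixels x cells).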

  10. Face recognition based on two-dimensional discriminant sparse preserving projection

    NASA Astrophysics Data System (ADS)

    Zhang, Dawei; Zhu, Shanan

    2018-04-01

    In this paper, a supervised dimensionality reduction algorithm named two-dimensional discriminant sparse preserving projection (2DDSPP) is proposed for face recognition. In order to accurately model the manifold structure of the data, 2DDSPP constructs a within-class affinity graph and a between-class affinity graph by solving a constrained least squares (LS) problem and an l1-norm minimization problem, respectively. By operating directly on image matrices, 2DDSPP integrates graph embedding (GE) with the Fisher criterion. The obtained projection subspace preserves the within-class neighborhood geometry structure of samples, while keeping samples from different classes apart. The experimental results on the PIE and AR face databases show that 2DDSPP can achieve better recognition performance.
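
    The "operate directly on image matrices" idea can be sketched with a simpler two-dimensional method (a 2DPCA-style projection, not the full 2DDSPP objective with affinity graphs and the Fisher criterion; all data here are synthetic):

```python
import numpy as np

# Hedged sketch of 2D (matrix-based) projection in the style of 2DPCA,
# shown only to illustrate working on image matrices without vectorization.
rng = np.random.default_rng(2)
images = rng.random((50, 32, 32))               # 50 face images, 32 x 32

mean_img = images.mean(axis=0)
# Image scatter matrix accumulated over samples (n x n, no vectorization).
S = sum((X - mean_img).T @ (X - mean_img) for X in images)

eigvals, eigvecs = np.linalg.eigh(S)
V = eigvecs[:, ::-1][:, :4]                     # top-4 projection directions

features = images[0] @ V                        # 32 x 4 feature matrix
```

    2DDSPP replaces this unsupervised scatter objective with graph-based within-class and between-class terms, but the projection is applied to image matrices in the same way.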

  11. Metal-matrix composites: Status and prospects

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Applications of metal matrix composites for air frames and jet engine components are discussed. The current state of the art in primary and secondary fabrication is presented. The present and projected costs were analyzed to determine the cost effectiveness of metal matrix composites. The various types of metal matrix composites and their characteristics are described.

  12. Assessment of CO2 Storage Potential in Naturally Fractured Reservoirs With Dual-Porosity Models

    NASA Astrophysics Data System (ADS)

    March, Rafael; Doster, Florian; Geiger, Sebastian

    2018-03-01

    Naturally Fractured Reservoirs (NFR's) have received little attention as potential CO2 storage sites. Two main facts deter storage projects in fractured reservoirs: (1) CO2 tends to be nonwetting in target formations and capillary forces will keep CO2 in the fractures, which typically have low pore volume; and (2) the high conductivity of the fractures may lead to increased spatial spreading of the CO2 plume. Numerical simulations are a powerful tool to understand the physics behind brine-CO2 flow in NFR's. Dual-porosity models are typically used to simulate multiphase flow in fractured formations. However, existing dual-porosity models are based on crude approximations of the matrix-fracture fluid transfer processes and often fail to capture the dynamics of fluid exchange accurately. Therefore, more accurate transfer functions are needed in order to evaluate the CO2 transfer to the matrix. This work presents an assessment of CO2 storage potential in NFR's using dual-porosity models. We investigate the impact of a system of fractures on storage in a saline aquifer, by analyzing the time scales of brine drainage by CO2 in the matrix blocks and the maximum CO2 that can be stored in the rock matrix. A new model to estimate drainage time scales is developed and used in a transfer function for dual-porosity simulations. We then analyze how injection rates should be limited in order to avoid early spill of CO2 (loss of control of the plume) on a conceptual anticline model. Numerical simulations on the anticline show that naturally fractured reservoirs may be used to store CO2.

  13. Selective and Responsive Nanopore-Filled Membranes

    DTIC Science & Technology

    2011-03-14

    Chen, H.; Elabd, Y.A. Ionic Liquid Polymers: Electrospinning and Solution Properties. Materials Science and Engineering Poster Competition, Fall... ...on polymer-polymer nanocomposites of hydrophilic ionic polymer gels within a hydrophobic polymer host matrix. The specific tasks of this project include (1) synthesizing stimuli...

  14. Uncertainty Modeling for Structural Control Analysis and Synthesis

    NASA Technical Reports Server (NTRS)

    Campbell, Mark E.; Crawley, Edward F.

    1996-01-01

    The development of an accurate model of uncertainties for the control of structures that undergo a change in operational environment, based solely on modeling and experimentation in the original environment, is studied. The application used throughout this work is the development of an on-orbit uncertainty model based on ground modeling and experimentation. A ground-based uncertainty model consisting of mean errors and bounds on critical structural parameters is developed. The uncertainty model is created using multiple data sets to observe all relevant uncertainties in the system. The Discrete Extended Kalman Filter is used as an identification/parameter estimation method for each data set, and additionally provides a covariance matrix that aids in the development of the uncertainty model. Once ground-based modal uncertainties have been developed, they are localized to specific degrees of freedom in the form of mass and stiffness uncertainties. Two techniques are presented: a matrix method, which develops the mass and stiffness uncertainties in a mathematical manner, and a sensitivity method, which assumes a form for the mass and stiffness uncertainties in macroelements and scaling factors. This form allows the derivation of mass and stiffness uncertainties in a more physical manner. The mass and stiffness uncertainties of the ground-based system are then mapped onto the on-orbit system, and projected to create an analogous on-orbit uncertainty model in the form of mean errors and bounds on critical parameters. The Middeck Active Control Experiment is introduced as experimental verification for the localization and projection methods developed. In addition, closed-loop results from on-orbit operations of the experiment verify the use of the uncertainty model for control analysis and synthesis in space.

  15. Experimental placement of stone matrix asphalt : project STP-8724 (00) X South Portland.

    DOT National Transportation Integrated Search

    2004-01-01

    In September 2003 the Maine Department of Transportation used stone matrix asphalt and Superpave to : renovate two intersections in South Portland, Maine. The experimental placement of stone matrix asphalt : (SMA) and Superpave with modified binder w...

  16. Representing Matrix Cracks Through Decomposition of the Deformation Gradient Tensor in Continuum Damage Mechanics Methods

    NASA Technical Reports Server (NTRS)

    Leone, Frank A., Jr.

    2015-01-01

    A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The accuracy improvements over existing approaches that result from incorporating the presented method into CDM models are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.

  17. Directed electromagnetic wave propagation in 1D metamaterial: Projecting operators method

    NASA Astrophysics Data System (ADS)

    Ampilogov, Dmitrii; Leble, Sergey

    2016-07-01

    We consider a boundary problem for 1D electrodynamics modeling pulse propagation in a metamaterial medium. We build and apply projecting operators to a Maxwell system in the time domain, which allows the linear propagation problem to be split into directed waves for material relations with general dispersion. Matrix elements of the projectors act as convolution integral operators. For a weak nonlinearity we generalize the linear results, still for arbitrary dispersion, and derive the system of interacting right/left waves with combined (hybrid) amplitudes. The result is specified for the popular metamaterial model with the Drude formula for both permittivity and permeability coefficients. We also discuss and investigate stationary solutions of the system related to some boundary regimes.
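
    As a hedged illustration of the splitting idea only (the simplest dispersionless 1D case, not the general material relations treated in the paper), projecting operators acting on the state vector \(\psi = (E, \eta H)^T\) and the resulting directed-wave amplitudes can be written as:

```latex
% Dispersionless 1D sketch: \psi = (E, \eta H)^T, with \eta the wave impedance.
P_{\pm} = \frac{1}{2}\begin{pmatrix} 1 & \pm 1 \\ \pm 1 & 1 \end{pmatrix},
\qquad P_{+} + P_{-} = I, \quad P_{\pm}^{2} = P_{\pm}, \quad P_{+}P_{-} = 0,
\qquad \Lambda_{\pm} = \tfrac{1}{2}\,(E \pm \eta H).
```

    Here \(\Lambda_+\) and \(\Lambda_-\) are the right- and left-moving amplitudes; in the paper's dispersive setting the projector entries become convolution integral operators rather than constants.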

  18. Random Matrix Theory in molecular dynamics analysis.

    PubMed

    Palese, Luigi Leonardo

    2015-01-01

    It is well known that, in some situations, principal component analysis (PCA) carried out on molecular dynamics data results in the appearance of cosine-shaped low-index projections. Because this is reminiscent of the results obtained by performing PCA on a multidimensional Brownian dynamics, it has been suggested that short-time protein dynamics is essentially nothing more than a noisy signal. Here we use Random Matrix Theory to analyze a series of short-time molecular dynamics experiments that are specifically designed to be simulations with high cosine content. We use as a model system the protein apoCox17, a mitochondrial copper chaperone. Spectral analysis of correlation matrices makes it easy to differentiate random correlations, deriving simply from the finite length of the process, from non-random signals reflecting the intrinsic system properties. Our results clearly show that protein dynamics is not really Brownian, even in the presence of cosine-shaped low-index projections on principal axes. Copyright © 2014 Elsevier B.V. All rights reserved.
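
    The cosine-shaped projections referred to above are commonly quantified by the cosine content of a principal-component projection. A hedged sketch of the discrete computation (a value near 1 mimics Brownian-like dynamics):

```python
import numpy as np

# Hedged sketch of the cosine-content diagnostic for a principal-component
# projection p(t): overlap with a half-period cosine, bounded in [0, 1].
def cosine_content(p, i=1):
    """Cosine content of projection p for mode index i; lies in [0, 1]."""
    T = len(p)
    t = np.arange(T)
    cos_i = np.cos(np.pi * i * (t + 0.5) / T)     # half-period cosine, mode i
    return 2.0 * np.dot(cos_i, p) ** 2 / (T * np.dot(p, p))

T = 1000
t = np.arange(T)
pure = np.cos(np.pi * (t + 0.5) / T)              # pure mode-1 cosine
noise = np.random.default_rng(3).standard_normal(T)
print(cosine_content(pure), cosine_content(noise))
```

    The paper's point is precisely that a high cosine content alone does not prove Brownian dynamics, which is why the spectral (Random Matrix Theory) analysis is needed.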

  19. Microbial Paleontology, Mineralogy and Geochemistry of Modern and Ancient Thermal Spring Deposits and Their Recognition on the Early Earth and Mars

    NASA Technical Reports Server (NTRS)

    Farmer, Jack D.

    2004-01-01

    The vision of this project was to improve our understanding of the processes by which microbiological information is captured and preserved in rapidly mineralizing sedimentary environments. Specifically, the research focused on the ways in which microbial mats and biofilms influence the sedimentology, geochemistry and paleontology of modern hydrothermal spring deposits in Yellowstone National Park and their ancient analogs. Toward that goal, we sought to understand how the preservation of fossil biosignatures is affected by 1) taphonomy, the natural degradation processes that affect an organism from the time of its death until its discovery as a fossil, and 2) diagenesis, the longer-term, post-depositional processes, including cementation and matrix recrystallization, that collectively affect the mineral matrix containing fossil biosignature information. Early objectives of this project included the development of observational frameworks (facies models) and methods (highly-integrated, interdisciplinary approaches) that could be used to explore for hydrothermal deposits in ancient terranes on Earth, and eventually on Mars.

  20. Spectral-Spatial Shared Linear Regression for Hyperspectral Image Classification.

    PubMed

    Haoliang Yuan; Yuan Yan Tang

    2017-04-01

    Classification of the pixels in hyperspectral image (HSI) is an important task and has been popularly applied in many practical applications. Its major challenge is the high-dimensional, small-sample-size problem. To deal with this problem, many subspace learning (SL) methods have been developed to reduce the dimension of the pixels while preserving the important discriminant information. Motivated by the ridge linear regression (RLR) framework for SL, we propose a spectral-spatial shared linear regression method (SSSLR) for extracting the feature representation. Compared with RLR, our proposed SSSLR has the following two advantages. First, we utilize a convex set to explore the spatial structure for computing the linear projection matrix. Second, we utilize a shared structure learning model, which is formed by the original data space and a hidden feature space, to learn a more discriminant linear projection matrix for classification. To optimize our proposed method, an efficient iterative algorithm is proposed. Experimental results on two popular HSI data sets, i.e., Indian Pines and Salinas, demonstrate that our proposed method outperforms many SL methods.
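
    The RLR baseline that motivates SSSLR can be sketched directly; this is a hedged illustration of generic ridge regression toward one-hot class indicators, not the shared-structure SSSLR model itself (all names and sizes are illustrative):

```python
import numpy as np

# Hedged sketch of ridge linear regression (RLR) for subspace learning:
# learn a projection W mapping pixels X (bands x samples) toward one-hot
# class indicators Y (classes x samples).
rng = np.random.default_rng(4)
d, n, c = 200, 150, 3                        # spectral bands, pixels, classes
X = rng.standard_normal((d, n))
labels = rng.integers(0, c, size=n)
Y = np.eye(c)[labels].T                      # one-hot indicators, c x n

lam = 0.1                                    # ridge regularization strength
W = np.linalg.solve(X @ X.T + lam * np.eye(d), X @ Y.T)  # d x c projection
Z = W.T @ X                                  # projected c x n features
```

    SSSLR augments this objective with a spatial-structure constraint and a shared hidden feature space, but the learned object is still a linear projection matrix of this shape.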

  1. AOF LTAO mode: reconstruction strategy and first test results

    NASA Astrophysics Data System (ADS)

    Oberti, Sylvain; Kolb, Johann; Le Louarn, Miska; La Penna, Paolo; Madec, Pierre-Yves; Neichel, Benoit; Sauvage, Jean-François; Fusco, Thierry; Donaldson, Robert; Soenke, Christian; Suárez Valles, Marcos; Arsenault, Robin

    2016-07-01

    GALACSI is the Adaptive Optics (AO) system serving the instrument MUSE in the framework of the Adaptive Optics Facility (AOF) project. Its Narrow Field Mode (NFM) is a Laser Tomography AO (LTAO) mode delivering high resolution in the visible across a small Field of View (FoV) of 7.5" diameter around the optical axis. From a reconstruction standpoint, GALACSI NFM intends to optimize the correction on axis by estimating the turbulence in volume via a tomographic process, then projecting the turbulence profile onto a single Deformable Mirror (DM) located in the pupil, close to the ground. In this paper, the laser tomographic reconstruction process is described. Several methods (virtual DM, virtual layer projection) are studied, under the constraint of a single matrix-vector multiplication. The pseudo-synthetic interaction matrix model and the LTAO reconstructor design are analysed. Moreover, the reconstruction parameter space is explored, in particular the regularization terms. Furthermore, we present the strategy to define the modal control basis and split the reconstruction between the Low Order (LO) loop and the High Order (HO) loop. Finally, closed-loop performance obtained with a 3D turbulence generator will be analysed with respect to the most relevant system parameters to be tuned.
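
    The single matrix-vector-multiplication constraint mentioned above can be illustrated with a generic least-squares AO reconstructor (a hedged toy sketch; the real AOF reconstructor is tomographic and regularized):

```python
import numpy as np

# Hedged toy sketch: a least-squares AO reconstructor applied as a single
# matrix-vector multiplication per frame. D and all sizes are synthetic.
rng = np.random.default_rng(5)
n_slopes, n_act = 240, 60
D = rng.standard_normal((n_slopes, n_act))   # calibrated interaction matrix
R = np.linalg.pinv(D)                        # reconstructor, n_act x n_slopes

commands_true = rng.standard_normal(n_act)
slopes = D @ commands_true                   # noiseless synthetic measurements
commands = R @ slopes                        # one MVM per loop iteration
```

    A tomographic LTAO reconstructor replaces the plain pseudo-inverse with a regularized, turbulence-aware R, but the per-frame real-time cost remains one matrix-vector multiplication.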

  2. Differential synaptology of vGluT2-containing thalamostriatal afferents between the patch and matrix compartments in rats.

    PubMed

    Raju, Dinesh V; Shah, Deep J; Wright, Terrence M; Hall, Randy A; Smith, Yoland

    2006-11-10

    The striatum is divided into two compartments named the patch (or striosome) and the matrix. Although these two compartments can be differentiated by their neurochemical content or afferent and efferent projections, the synaptology of inputs to these striatal regions remains poorly characterized. By using the vesicular glutamate transporters vGluT1 and vGluT2, as markers of corticostriatal and thalamostriatal projections, respectively, we demonstrate a differential pattern of synaptic connections of these two pathways between the patch and the matrix compartments. We also demonstrate that the majority of vGluT2-immunolabeled axon terminals form axospinous synapses, suggesting that thalamic afferents, like corticostriatal inputs, terminate preferentially onto spines in the striatum. Within both compartments, more than 90% of vGluT1-containing terminals formed axospinous synapses, whereas 87% of vGluT2-positive terminals within the patch innervated dendritic spines, but only 55% did so in the matrix. To characterize further the source of thalamic inputs that could account for the increase in axodendritic synapses in the matrix, we undertook an electron microscopic analysis of the synaptology of thalamostriatal afferents to the matrix compartments from specific intralaminar, midline, relay, and associative thalamic nuclei in rats. Approximately 95% of PHA-L-labeled terminals from the central lateral, midline, mediodorsal, lateral dorsal, anteroventral, and ventral anterior/ventral lateral nuclei formed axospinous synapses, a pattern reminiscent of corticostriatal afferents but strikingly different from thalamostriatal projections arising from the parafascicular nucleus (PF), which terminated onto dendritic shafts. These findings provide the first evidence for a differential pattern of synaptic organization of thalamostriatal glutamatergic inputs to the patch and matrix compartments. 
Furthermore, they demonstrate that the PF is the sole source of significant axodendritic thalamic inputs to striatal projection neurons. These observations pave the way for understanding differential regulatory mechanisms of striatal outflow from the patch and matrix compartments by thalamostriatal afferents. 2006 Wiley-Liss, Inc.

  3. Matrix Management in DoD: An Annotated Bibliography

    DTIC Science & Technology

    1984-04-01

    ACSC/EDCC, Maxwell AFB AL 36112. ...completes their message that matrix organization is the likely format of the multiprogram Program Office. The text's discussion of matrix is... manager, and functional specialist are of vital importance to the effective operation of the matrix... Matrix management will not achieve its

  4. A Novel Image Compression Algorithm for High Resolution 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2014-06-01

    This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The Compression Algorithm starts with a single-level discrete wavelet transform (DWT) for decomposing an image into four sub-bands. The sub-band LL is transformed by DCT yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix while a DWT is applied again to the DC-matrix resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG, achieving higher compression rates with equivalent perceived quality and more accurate reconstruction of the 3D models.
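
    The first stage of the pipeline, a single-level 2D DWT splitting the image into four sub-bands, can be sketched with unnormalized Haar averages and differences (a hedged illustration; the paper does not specify the wavelet, and sub-band labeling conventions vary):

```python
import numpy as np

# Hedged sketch of one level of a 2D DWT using Haar averages/differences:
# rows are split into averages (a) and details (d), then columns likewise,
# producing four quarter-size sub-bands.
def haar_dwt2(img):
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0     # horizontal average
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0     # horizontal detail
    LL = (a[0::2] + a[1::2]) / 2.0              # average of averages
    LH = (a[0::2] - a[1::2]) / 2.0
    HL = (d[0::2] + d[1::2]) / 2.0
    HH = (d[0::2] - d[1::2]) / 2.0
    return LL, HL, LH, HH

img = np.arange(64, dtype=float).reshape(8, 8)
LL, HL, LH, HH = haar_dwt2(img)                 # each sub-band is 4 x 4
```

    The LL sub-band concentrates most image energy, which is why the pipeline applies DCT and a second DWT to it while compressing the detail sub-bands more aggressively.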

  5. Blockwise conjugate gradient methods for image reconstruction in volumetric CT.

    PubMed

    Qiu, W; Titley-Peloquin, D; Soleimani, M

    2012-11-01

    Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. Algebraic methods instead discretize the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ||b - Ax||_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not widely been used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction on commodity hardware. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
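
    The blockwise idea can be sketched with a LinearOperator whose matvec and rmatvec work block by block, so the solver never needs the matrix assembled in one piece (a hedged toy sketch, with small dense blocks standing in for the huge sparse CBCT blocks):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

# Hedged sketch: keep the system matrix A as row blocks and give LSQR only
# blockwise matvec (A @ x) and rmatvec (A.T @ y) routines.
rng = np.random.default_rng(6)
blocks = [rng.standard_normal((50, 80)) for _ in range(4)]  # 4 row blocks
m = sum(b.shape[0] for b in blocks)                         # 200 rows total

def matvec(x):
    return np.concatenate([b @ x for b in blocks])

def rmatvec(y):
    out = np.zeros(80)
    start = 0
    for b in blocks:
        out += b.T @ y[start:start + b.shape[0]]
        start += b.shape[0]
    return out

A = LinearOperator((m, 80), matvec=matvec, rmatvec=rmatvec)
x_true = rng.standard_normal(80)
b_meas = matvec(x_true)
x_rec = lsqr(A, b_meas, atol=1e-12, btol=1e-12)[0]
```

    Tikhonov regularization fits the same blockwise setting through lsqr's damp parameter, which solves the damped least squares problem without forming an augmented matrix.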

  6. General MoM Solutions for Large Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B; Capolino, F; Wilton, D R

    2003-07-22

    This paper focuses on a numerical procedure that addresses the difficulties of dealing with large, finite arrays while preserving the generality and robustness of full-wave methods. We present a fast method based on approximating interactions between sufficiently separated array elements via a relatively coarse interpolation of the Green's function on a uniform grid commensurate with the array's periodicity. The interaction between the basis and testing functions is reduced to a three-stage process. The first stage is a projection of standard (e.g., RWG) subdomain bases onto a set of interpolation functions that interpolate the Green's function on the array face. This projection, which is used in a matrix/vector product for each array cell in an iterative solution process, need only be carried out once for a single cell and results in a low-rank matrix. An intermediate stage matrix/vector product computation involving the uniformly sampled Green's function is of convolutional form in the lateral (transverse) directions so that a 2D FFT may be used. The final stage is a third matrix/vector product computation involving a matrix resulting from projecting testing functions onto the Green's function interpolation functions; the low-rank matrix is either identical to (using Galerkin's method) or similar to that for the bases projection. An effective MoM solution scheme is developed for large arrays using a modification of the AIM (Adaptive Integral Method) method. The method permits the analysis of arrays with arbitrary contours and nonplanar elements. Both fill and solve times within the MoM method are improved with respect to more standard MoM solvers.

  7. Registration using natural features for augmented reality systems.

    PubMed

    Yuan, M L; Ong, S K; Nee, A Y C

    2006-01-01

    Registration is one of the most difficult problems in augmented reality (AR) systems. In this paper, a simple registration method using natural features based on the projective reconstruction technique is proposed. This method consists of two steps: embedding and rendering. Embedding involves specifying four points to build the world coordinate system on which a virtual object will be superimposed. In rendering, the Kanade-Lucas-Tomasi (KLT) feature tracker is used to track the natural feature correspondences in the live video. The natural features that have been tracked are used to estimate the corresponding projective matrix in the image sequence. Next, the projective reconstruction technique is used to transfer the four specified points to compute the registration matrix for augmentation. This paper also proposes a robust method for estimating the projective matrix, where the natural features that have been tracked are normalized (translation and scaling) and used as the input data. The estimated projective matrix will be used as an initial estimate for a nonlinear optimization method that minimizes the actual residual errors based on the Levenberg-Marquardt (LM) minimization method, thus making the results more robust and stable. The proposed registration method has three major advantages: 1) It is simple, as no predefined fiducials or markers are used for registration for either indoor or outdoor AR applications. 2) It is robust, because it remains effective as long as at least six natural features are tracked during the entire augmentation, and the existence of the corresponding projective matrices in the live video is guaranteed. Meanwhile, the robust method to estimate the projective matrix can obtain stable results even when there are some outliers during the tracking process. 3) Virtual objects can still be superimposed on the specified areas, even if some parts of the areas are occluded during the entire process. 
Some indoor and outdoor experiments have been conducted to validate the performance of this proposed method.
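
    The normalization step (translation and scaling of tracked points before estimating the projective matrix) can be sketched in the spirit of Hartley's normalized estimation; a hedged illustration, not the authors' exact procedure:

```python
import numpy as np

# Hedged sketch of point normalization before projective-matrix estimation:
# translate points so their centroid is at the origin, then scale so the
# mean distance from the origin is sqrt(2).
def normalize_points(pts):
    """pts: (N, 2) pixel coordinates -> (normalized pts, 3x3 transform T)."""
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    mean_dist = np.mean(np.linalg.norm(centered, axis=1))
    s = np.sqrt(2.0) / mean_dist
    T = np.array([[s, 0.0, -s * centroid[0]],
                  [0.0, s, -s * centroid[1]],
                  [0.0, 0.0, 1.0]])
    return centered * s, T

pts = np.random.default_rng(7).uniform(0, 640, size=(12, 2))
npts, T = normalize_points(pts)
```

    Conditioning the input this way makes the subsequent linear estimate far less sensitive to pixel-scale numerical error, and the effect of T is undone after estimation.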

  8. Multimedia Matrix: A Cognitive Strategy for Designers.

    ERIC Educational Resources Information Center

    Sherry, Annette C.

    This instructional development project evaluates the effect of a matrix-based strategy to assist multimedia authors in acquiring and applying principles for effective multimedia design. The Multimedia Matrix, based on the Park and Hannafin "Twenty Principles and Implications for Interactive Multimedia" design, displays a condensed…

  9. Integrating Temperature-Dependent Life Table Data into a Matrix Projection Model for Drosophila suzukii Population Estimation

    PubMed Central

    Wiman, Nik G.; Walton, Vaughn M.; Dalton, Daniel T.; Anfora, Gianfranco; Burrack, Hannah J.; Chiu, Joanna C.; Daane, Kent M.; Grassi, Alberto; Miller, Betsey; Tochen, Samantha; Wang, Xingeng; Ioriatti, Claudio

    2014-01-01

    Temperature-dependent fecundity and survival data were integrated into a matrix population model to describe relative Drosophila suzukii Matsumura (Diptera: Drosophilidae) population increase and age structure based on environmental conditions. This novel modification of the classic Leslie matrix population model is presented as a way to examine how insect populations interact with the environment, and has application as a predictor of population density. For D. suzukii, we examined model implications for pest pressure on crops. As case studies, we examined model predictions in three small fruit production regions in the United States (US) and one in Italy. These production regions have distinctly different climates. In general, patterns of adult D. suzukii trap activity broadly mimicked seasonal population levels predicted by the model using only temperature data. The age structure of estimated populations suggests that trap and fruit infestation data are of limited value and are insufficient for model validation. Thus, we suggest alternative experiments for validation. The model is advantageous in that it provides stage-specific population estimation, which can potentially guide management strategies and provide unique opportunities to simulate stage-specific management effects such as insecticide applications or the effect of biological control on a specific life-stage. The two factors that drive initiation of the model are suitable temperatures (biofix) and availability of a suitable host medium (fruit). Although there are many factors affecting population dynamics of D. suzukii in the field, temperature-dependent survival and reproduction are believed to be the main drivers for D. suzukii populations. PMID:25192013
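
    The Leslie-style projection at the core of the model can be sketched in a few lines (hedged: the vital rates below are illustrative constants, not the temperature-dependent D. suzukii rates of the paper):

```python
import numpy as np

# Hedged sketch of a Leslie-type stage-structured projection: fecundities
# on the first row, stage-to-stage survival on the sub-diagonal. The vital
# rates are illustrative placeholders.
fec = np.array([0.0, 2.0, 5.0])       # per-stage fecundity per time step
surv = np.array([0.6, 0.4])           # survival: stage 1 -> 2, stage 2 -> 3

L = np.zeros((3, 3))
L[0, :] = fec
L[1, 0], L[2, 1] = surv

n = np.array([100.0, 0.0, 0.0])       # initial abundance by stage
for _ in range(50):
    n = L @ n                         # project one time step

lam = max(abs(np.linalg.eigvals(L)))  # asymptotic growth rate (dominant eigenvalue)
```

    Stage-specific management effects (insecticides, biological control) can be simulated by scaling individual entries of L before projecting; in the paper's temperature-dependent variant the entries change at each time step.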

  10. System Matrix Analysis for Computed Tomography Imaging

    PubMed Central

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesirable. These issues demand that high-quality CT images be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482
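    Siddon's method computes each system-matrix element as the intersection length of a ray with a voxel, obtained by merging the parametric positions at which the ray crosses the grid lines. The following is a simplified 2-D sketch under stated assumptions (unit pixels, grid corner at the origin), not the implementation analyzed in the paper.

    ```python
    import numpy as np

    # Simplified 2-D Siddon-style ray tracer: returns (pixel, length) pairs
    # for a ray from src to det through an nx-by-ny grid of unit pixels
    # whose lower-left corner is at the origin.
    def siddon_ray(src, det, nx, ny):
        src, det = np.asarray(src, float), np.asarray(det, float)
        d = det - src
        alphas = [0.0, 1.0]
        for axis, n in ((0, nx), (1, ny)):
            if abs(d[axis]) > 1e-12:
                a = (np.arange(n + 1) - src[axis]) / d[axis]  # grid-line crossings
                alphas.extend(a[(a > 0) & (a < 1)])
        alphas = np.unique(np.clip(alphas, 0.0, 1.0))         # merged, sorted
        length = np.linalg.norm(d)
        out = []
        for a0, a1 in zip(alphas[:-1], alphas[1:]):
            mid = src + 0.5 * (a0 + a1) * d                   # midpoint locates the pixel
            i, j = int(np.floor(mid[0])), int(np.floor(mid[1]))
            if 0 <= i < nx and 0 <= j < ny:
                out.append(((i, j), (a1 - a0) * length))
        return out

    # A horizontal ray through the third pixel row of a 4x4 grid.
    segs = siddon_ray((-1.0, 2.5), (5.0, 2.5), 4, 4)
    ```

    Each returned length is one nonzero entry of the corresponding system-matrix row, so the matrix can be assembled (or applied on the fly) one ray at a time.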

  11. An entropy-variables-based formulation of residual distribution schemes for non-equilibrium flows

    NASA Astrophysics Data System (ADS)

    Garicano-Mena, Jesús; Lani, Andrea; Degrez, Gérard

    2018-06-01

    In this paper we present an extension of Residual Distribution techniques for the simulation of compressible flows in non-equilibrium conditions. The latter are modeled by means of a state-of-the-art multi-species and two-temperature model. An entropy-based variable transformation that symmetrizes the projected advective Jacobian for such a thermophysical model is introduced. Moreover, the transformed advection Jacobian matrix presents a block diagonal structure, with mass-species and electronic-vibrational energy being completely decoupled from the momentum and total energy sub-system. The advantageous structure of the transformed advective Jacobian can be exploited by contour-integration-based Residual Distribution techniques: established schemes that operate on dense matrices can be substituted by the same scheme operating on the momentum-energy subsystem matrix and repeated application of a scalar scheme to the mass-species and electronic-vibrational energy terms. Finally, the performance gain of the symmetrizing-variables formulation is quantified on a selection of representative test cases, ranging from subsonic to hypersonic, in inviscid or viscous conditions.

  12. Insight into mitochondrial structure and function from electron tomography.

    PubMed

    Frey, T G; Renken, C W; Perkins, G A

    2002-09-10

    In recent years, electron tomography has provided detailed three-dimensional models of mitochondria that have redefined our concept of mitochondrial structure. The models reveal an inner membrane consisting of two components, the inner boundary membrane (IBM) closely apposed to the outer membrane and the cristae membrane that projects into the matrix compartment. These two components are connected by tubular structures of relatively uniform size called crista junctions. The distribution of crista junction sizes and shapes is predicted by a thermodynamic model based upon the energy of membrane bending, but proteins likely also play a role in determining the conformation of the inner membrane. Results of structural studies of mitochondria during apoptosis demonstrate that cytochrome c is released without detectable disruption of the outer membrane or extensive swelling of the mitochondrial matrix, suggesting the formation of an outer membrane pore large enough to allow passage of holo-cytochrome c. The possible compartmentation of inner membrane function between the IBM and the cristae membrane is also discussed.

  13. Cormack Research Project: Glasgow University

    NASA Technical Reports Server (NTRS)

    Skinner, Susan; Ryan, James M.

    1998-01-01

    The aim of this project was to investigate and improve upon existing methods of analysing data from COMPTEL on the Gamma Ray Observatory for neutrons emitted during solar flares. In particular, a strategy for placing confidence intervals on neutron energy distributions, given uncertainties in the response matrix, has been developed. We have also been able to demonstrate the superior performance of one of a range of possible statistical regularization strategies. A method of generating likely models of neutron energy distributions has also been developed as a tool to this end. The project involved solving an inverse problem with noise added to the data in various ways. To achieve this, pre-existing C code was used to run Fortran subroutines which performed statistical regularization on the data.

  14. A Feasibility Study on a Parallel Mechanism for Examining the Space Shuttle Orbiter Payload Bay Radiators

    NASA Technical Reports Server (NTRS)

    Roberts, Rodney G.; LopezdelCastillo, Eduardo

    1996-01-01

    The goal of the project was to develop the necessary analysis tools for a feasibility study of a cable-suspended robot system for examining the space shuttle orbiter payload bay radiators. These tools were developed to address design issues such as workspace size, tension requirements on the cables, the necessary accuracy and resolution requirements, and the stiffness and movement requirements of the system. This report describes the mathematical models for studying the inverse kinematics, statics, and stiffness of the robot. Each model is described by a matrix. The manipulator Jacobian was also related to the stiffness matrix, which characterizes the stiffness of the system. Analysis tools were then developed based on the singular value decomposition (SVD) of the corresponding matrices. It was demonstrated how the SVD can be used to quantify the robot's performance and to provide insight into different design issues.
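    The report's SVD-based tools are not detailed in the abstract, but the basic idea carries over to any manipulator Jacobian: the singular values quantify how joint velocities map to end-effector velocities. A hedged sketch using a hypothetical planar two-link arm with unit link lengths (not the cable-suspended robot itself):

    ```python
    import numpy as np

    # Jacobian of a planar 2-link arm with unit link lengths; this is a
    # stand-in for the matrices derived in the report.
    def planar_2link_jacobian(q1, q2):
        return np.array([
            [-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
            [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)],
        ])

    J = planar_2link_jacobian(0.3, 1.2)
    U, s, Vt = np.linalg.svd(J)

    manipulability = np.prod(s)   # volume measure of the velocity ellipsoid
    condition = s[0] / s[-1]      # isotropy: 1 is ideal, large means near-singular
    ```

    Near a singular configuration the smallest singular value collapses, the condition number blows up, and the workspace and stiffness analyses in the report would flag the pose as poorly conditioned.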

  15. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1992-01-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.

  16. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Astrophysics Data System (ADS)

    Chitsomboon, Tawit

    1992-02-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.

  17. Implementing a Matrix-free Analytical Jacobian to Handle Nonlinearities in Models of 3D Lithospheric Deformation

    NASA Astrophysics Data System (ADS)

    Kaus, B.; Popov, A.

    2015-12-01

    The analytical expression for the Jacobian is a key component in achieving fast and robust convergence of the nonlinear Newton-Raphson iterative solver. Accomplishing this task in practice often requires a significant algebraic effort, so it is quite common to use a cheap alternative instead, for example approximating the Jacobian with a finite difference estimation. Despite its simplicity, this is a relatively fragile and unreliable technique that is sensitive to the scaling of the residual and unknowns, as well as to the choice of perturbation parameter. Unfortunately, no universal rule can provide both a robust scaling and a robust perturbation. The approach we use here is to derive the analytical Jacobian for the coupled set of momentum, mass, and energy conservation equations together with an elasto-visco-plastic rheology and a marker-in-cell/staggered finite difference method. The software project LaMEM (Lithosphere and Mantle Evolution Model) is primarily developed for thermo-mechanically coupled modeling of 3D lithospheric deformation. The code is based on a staggered-grid finite difference discretization in space, and uses customized scalable solvers from the PETSc library to run efficiently on massively parallel machines (such as IBM Blue Gene/Q). Currently LaMEM relies on the Jacobian-Free Newton-Krylov (JFNK) nonlinear solver, which approximates the Jacobian-vector product using a simple finite difference formula. This approach never requires an assembled Jacobian matrix and uses only the residual computation routine. We use an approximate Jacobian (Picard) matrix to precondition the Krylov solver with a Galerkin geometric multigrid. Because of the inherent problems of finite difference Jacobian estimation, this approach does not always result in stable convergence. 
In this work we present and discuss a matrix-free technique in which the Jacobian-vector product is replaced by analytically-derived expressions and compare results with those obtained with a finite difference approximation of the Jacobian. This project is funded by ERC Starting Grant 258830 and computer facilities were provided by Jülich supercomputer center (Germany).
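    The finite-difference Jacobian-vector product that JFNK relies on, and the analytically derived alternative the authors advocate, can both be illustrated on a toy residual. The two-equation system below is purely illustrative, not the LaMEM conservation equations.

    ```python
    import numpy as np

    # Toy nonlinear residual F(u) = 0 standing in for a discretized PDE system.
    def residual(u):
        return np.array([u[0]**2 + u[1] - 3.0,
                         u[0] + u[1]**3 - 5.0])

    # JFNK-style matrix-free product: J(u) v ~ (F(u + eps*v) - F(u)) / eps.
    # Accuracy depends on eps and on the scaling of residual and unknowns.
    def jv_finite_difference(u, v, eps=1e-7):
        return (residual(u + eps * v) - residual(u)) / eps

    # Analytical alternative: the hand-derived Jacobian applied to v,
    # free of perturbation-parameter sensitivity.
    def jv_analytical(u, v):
        J = np.array([[2.0 * u[0], 1.0],
                      [1.0, 3.0 * u[1]**2]])
        return J @ v

    u = np.array([1.0, 2.0])
    v = np.array([0.5, -1.0])
    fd = jv_finite_difference(u, v)
    ana = jv_analytical(u, v)
    ```

    Both routines feed the same Krylov iteration; the analytical version trades algebraic effort at derivation time for robustness at run time.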

  18. An optimum organizational structure for a large earth-orbiting multidisciplinary space base. Ph.D. Thesis - Fla. State Univ., 1973

    NASA Technical Reports Server (NTRS)

    Ragusa, J. M.

    1975-01-01

    An optimum hypothetical organizational structure was studied for a large earth-orbiting, multidisciplinary research and applications space base manned by a crew of technologists. Because such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than with the empirical testing of the model. The essential finding of this research was that a four-level project type total matrix model will optimize the efficiency and effectiveness of space base technologists.

  19. Combining dispersal, landscape connectivity and habitat suitability to assess climate-induced changes in the distribution of Cunningham's skink, Egernia cunninghami.

    PubMed

    Ofori, Benjamin Y; Stow, Adam J; Baumgartner, John B; Beaumont, Linda J

    2017-01-01

    The ability of species to track their climate niche is dependent on their dispersal potential and the connectivity of the landscape matrix linking current and future suitable habitat. However, studies modeling climate-driven range shifts rarely address the movement of species across landscapes realistically, often assuming "unlimited" or "no" dispersal. Here, we incorporate dispersal rate and landscape connectivity with a species distribution model (Maxent) to assess the extent to which the Cunningham's skink (Egernia cunninghami) may be capable of tracking spatial shifts in suitable habitat as climate changes. Our model was projected onto four contrasting, but equally plausible, scenarios describing futures that are (relative to now) hot/wet, warm/dry, hot/with similar precipitation and warm/wet, at six time horizons with decadal intervals (2020-2070) and at two spatial resolutions: 1 km and 250 m. The size of suitable habitat was projected to decline 23-63% at 1 km and 26-64% at 250 m, by 2070. Combining Maxent output with the dispersal rate of the species and connectivity of the intervening landscape matrix showed that most current populations in regions projected to become unsuitable in the medium to long term, will be unable to shift the distance necessary to reach suitable habitat. In particular, numerous populations currently inhabiting the trailing edge of the species' range are highly unlikely to be able to disperse fast enough to track climate change. Unless these populations are capable of adaptation they are likely to be extirpated. We note, however, that the core of the species distribution remains suitable across the broad spectrum of climate scenarios considered. Our findings highlight challenges faced by philopatric species and the importance of adaptation for the persistence of peripheral populations under climate change.

  20. Structural and petrophysical characterization: from outcrop rock analogue to reservoir model of deep geothermal prospect in Eastern France

    NASA Astrophysics Data System (ADS)

    Bertrand, Lionel; Géraud, Yves; Diraison, Marc; Damy, Pierre-Clément

    2017-04-01

    The Scientific Interest Group (GIS) GEODENERGIES, through the REFLET project, aims to develop a geological and reservoir model for fault zones, which are the main targets of deep geothermal prospects in the West European Rift system. In this project, several areas are studied with an integrated methodology combining field studies, borehole and geophysical data acquisition, and 3D modelling. In this study, we present the results of reservoir rock analogue characterization for one of these prospects in the Valence Graben (Eastern France). The approach is a structural and petrophysical characterization of the rocks outcropping on the shoulders of the rift in order to model the buried targeted fault zone. The reservoir rocks are composed of fractured granites, gneisses and schists of the Hercynian basement of the graben. Matrix porosity, permeability, P-wave velocities and thermal conductivities have been characterized on hand samples from fault zones at the outcrop. Furthermore, the fault organization has been mapped with the aim of identifying the characteristic fault orientation, spacing and width. Fracture statistics such as orientation, density and length have been determined in the damaged zones and unfaulted blocks with respect to the regional fault pattern. All these data have been included in a reservoir model with a double-porosity model. The field study shows that the fault pattern in the outcrop area can be classified into fault orders: the distribution of the larger, first-order faults controls the first-order structural and lithological organization. Between these faults, the first-order blocks are divided by smaller, second- and third-order faults with characteristic spacing and width. Third-order fault zones in granitic rocks show significant porosity development in the fault cores, up to 25% in the most locally altered material, while the damaged zones mostly develop fracture permeability. 
    In the gneiss and schist units, the development of matrix porosity and permeability is mainly controlled by enhanced microcrack density in the fault zone, unlike the granitic rocks, where it is mostly due to mineral alteration. Because of the larger grain size in the gneiss, crack opening is wider than in the schist samples; thus, the matrix permeability can be two orders of magnitude higher in the gneiss than in the schists (up to 10 mD for gneiss versus 0.1 mD for schists at the same porosity of around 5%). Combining the regional data with the fault pattern and the fracture and matrix porosity and permeability, we are able to construct a double-porosity model suitable for the prospected graben. This model, combined with seismic data acquisition, is a predictive tool for flow modelling in the buried reservoir and helps the prediction of borehole targets and design in the graben.

  1. Design, Fabrication, Characterization and Modeling of Integrated Functional Materials

    DTIC Science & Technology

    2009-10-01

    cobalt ferrite (CoFe2O4) nanoparticles dispersed in a low-loss commercial polymer matrix obtained from Rogers Corporation. 2 mmol of Cobalt (II...oleylamine and 20 ml benzyl ether were added to the Iron (III) acetylacetonate and Cobalt (II) acetylacetonate mixture. The mixture was stirred...microwave applications Multiferroic bilayers of Cobalt Ferrite and PZT: The objective of this project is to fabricate bilayers of ferroelectric

  2. Supernova Cosmology Project

    Science.gov Websites

    (note that the arXiv.org version lacks the full-resolution figures) The SCP "Union" SN Ia compilation: magnitude matrix, covariance matrix with systematics, and full table of all SNe.

  3. Landau-Ginzburg to Calabi-Yau dictionary for D-branes

    NASA Astrophysics Data System (ADS)

    Aspinwall, Paul S.

    2007-08-01

    Based on the work by Orlov (e-print arXiv:math.AG/0503632), we give a precise recipe for mapping between B-type D-branes in a Landau-Ginzburg orbifold model (or Gepner model) and the corresponding large radius Calabi-Yau manifold. The D-branes in Landau-Ginzburg theories correspond to matrix factorizations and the D-branes on the Calabi-Yau manifolds are objects in the derived category. We give several examples including branes on quotient singularities associated with weighted projective spaces. We are able to confirm several conjectures and statements in the literature.

  4. 3-D Inversion of the MT EarthScope Data, Collected Over the East Central United States

    NASA Astrophysics Data System (ADS)

    Gribenko, A. V.; Zhdanov, M. S.

    2017-12-01

    The magnetotelluric (MT) data collected as a part of the EarthScope project provided a unique opportunity to study the conductivity structure of the deep interior of the North American continent. Besides the scientific value of the recovered subsurface models, the data also allowed inversion practitioners to test the robustness of their algorithms applied to regional long-period data. In this paper, we present the results of MT inversion of a subset of the second footprint of the MT data collection covering the East Central United States. Our inversion algorithm implements simultaneous inversion of the full MT impedance data both for the 3-D conductivity distribution and for the distortion matrix. The distortion matrix provides the means to account for the effect of the near-surface geoelectrical inhomogeneities on the MT data. The long-period data do not have the resolution for the small near-surface conductivity anomalies, which makes an application of the distortion matrix especially appropriate. The determined conductivity model of the region agrees well with the known geologic and tectonic features of the East Central United States. The conductivity anomalies recovered by our inversion indicate a possible presence of the hot spot track in the area.

  5. Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.

    PubMed

    Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David

    2016-02-01

    In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.
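    Inexact ALM solvers for low-rank-plus-sparse formulations of this kind typically reduce to two proximal operators applied in alternation. The sketches below show those two generic building blocks only, under the standard definitions; they are not the authors' full update scheme.

    ```python
    import numpy as np

    # Singular value thresholding: proximal operator of the nuclear norm,
    # used to update the low-rank term.
    def svt(X, tau):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    # Element-wise soft thresholding: proximal operator of the l1 norm,
    # used to update the sparse (noise) term.
    def soft_threshold(X, tau):
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)
    ```

    An inexact ALM loop alternates these updates with a Lagrange multiplier step until the reconstruction constraint is satisfied to tolerance.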

  6. Development of an Injectable Salmon Fibrinogen-Thrombin Matrix to Enhance Healing of Compound Fractures of Extremities

    DTIC Science & Technology

    2011-10-01

    of bone regeneration in animals treated with different implantable matrix. The material to be tested in this project is a salmon fibrin matrix... Buprenorphine and metacam (Meloxicam) are also administered at the time of surgery for short term pain relief. Fluoroscopy is performed before and after injury

  7. A General Exponential Framework for Dimensionality Reduction.

    PubMed

    Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan

    2014-02-01

    As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low dimensional representations from high dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the size of neighbors; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix, and thus is more robust. The positive definite property of the matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on the synthesized data, UCI, and the Georgia Tech face database show that the proposed new framework can well address the issues mentioned above.
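    The positive-definiteness claim is easy to verify directly: for a symmetric matrix S with eigendecomposition S = Q diag(w) Qᵀ, exp(S) = Q diag(eʷ) Qᵀ has strictly positive eigenvalues even when S itself is singular. A minimal sketch with a toy similarity matrix (not one of the paper's benchmarks):

    ```python
    import numpy as np

    # Matrix exponential of a symmetric matrix via its eigendecomposition:
    # exponentiating the eigenvalues leaves the eigenvectors unchanged.
    def sym_expm(S):
        w, Q = np.linalg.eigh(S)
        return Q @ np.diag(np.exp(w)) @ Q.T

    S = np.array([[1.0, 1.0],
                  [1.0, 1.0]])        # rank-1, singular: eigenvalues 0 and 2
    E = sym_expm(S)
    eigs = np.linalg.eigvalsh(E)      # e^0 = 1 and e^2, both strictly positive
    ```

    This is why substituting exp(S) for a singular scatter or similarity matrix sidesteps the small-sample-size problem: the exponentiated matrix is always invertible.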

  8. Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing

    2017-12-14

    Positive semi-definiteness is a critical property of kernel methods for Support Vector Machines (SVMs), since it guarantees that efficient solutions can be obtained through convex quadratic programming. However, many similarity functions used in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix for indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance compared with the corresponding indefinite kernel methods on a number of real-world data sets. Under Bregman matrix divergence theory, a suggested optimal λ for the projection method can be found through unconstrained optimization in kernel learning. In this paper we focus on determining this optimal λ precisely within the unconstrained optimization framework. We developed a perturbed von-Neumann divergence to measure kernel relationships, and compared optimal λ determination under the Logdet divergence and the perturbed von-Neumann divergence, aiming at finding a better λ for the projection method. Results on a number of real-world data sets show that the projection method with the optimal λ given by the Logdet divergence demonstrates near-optimal performance, and that the perturbed von-Neumann divergence can help determine a relatively better projection method. The projection method is easy to use for dealing with indefinite kernels, and its embedded parameter can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way to adapt kernel SVMs to varied objectives.
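    The two classical spectrum baselines the projection method generalizes act directly on the eigenvalues of the kernel matrix: "denoising" clips negative eigenvalues to zero, "flipping" takes their absolute value. A minimal sketch of those two baselines only (the paper's λ-parameterized projection matrix is not reproduced here):

    ```python
    import numpy as np

    # Repair an indefinite symmetric kernel matrix K by modifying its spectrum.
    # mode="clip": denoising (negative eigenvalues set to zero).
    # mode="flip": flipping (negative eigenvalues replaced by their magnitudes).
    def spectrum_fix(K, mode="clip"):
        w, Q = np.linalg.eigh(K)
        w = np.maximum(w, 0.0) if mode == "clip" else np.abs(w)
        return Q @ np.diag(w) @ Q.T

    K = np.array([[1.0, 2.0],
                  [2.0, 1.0]])        # eigenvalues 3 and -1: indefinite
    K_clip = spectrum_fix(K, "clip")
    K_flip = spectrum_fix(K, "flip")
    ```

    Either repaired matrix is positive semi-definite and can be handed to a standard SVM solver; the projection method interpolates between such fixes via its λ parameter.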

  9. Development and comparison of metrics for evaluating climate models and estimation of projection uncertainty

    NASA Astrophysics Data System (ADS)

    Ring, Christoph; Pollinger, Felix; Kaspar-Ott, Irena; Hertig, Elke; Jacobeit, Jucundus; Paeth, Heiko

    2017-04-01

    The COMEPRO project (Comparison of Metrics for Probabilistic Climate Change Projections of Mediterranean Precipitation), funded by the Deutsche Forschungsgemeinschaft (DFG), is dedicated to the development of new evaluation metrics for state-of-the-art climate models. Further, we analyze implications for probabilistic projections of climate change. This study focuses on the results of 4-field matrix metrics. Here, six different approaches are compared. We evaluate 24 models of the Coupled Model Intercomparison Project Phase 3 (CMIP3), 40 of CMIP5 and 18 of the Coordinated Regional Downscaling Experiment (CORDEX). In addition to the annual and seasonal precipitation, the mean temperature is analysed. We consider both the 50-year trend and the climatological mean for the second half of the 20th century. For the probabilistic projections of climate change, the A1B and A2 (CMIP3) and RCP4.5 and RCP8.5 (CMIP5, CORDEX) scenarios are used. The eight main study areas are located in the Mediterranean. However, we apply our metrics to globally distributed regions as well. The metrics show high simulation quality of the temperature trend, and of both the precipitation and temperature mean, for most climate models and study areas. In addition, we find high potential for model weighting in order to reduce uncertainty. These results are in line with other accepted evaluation metrics and studies. The comparison of the different 4-field approaches reveals high correlations for most metrics. The results of the metric-weighted probability density functions of climate change are heterogeneous. We find both increases and decreases of uncertainty for different regions and seasons. The analysis of global study areas is consistent with the regional study areas of the Mediterranean.

  10. Multi-cut solutions in Chern-Simons matrix models

    NASA Astrophysics Data System (ADS)

    Morita, Takeshi; Sugiyama, Kento

    2018-04-01

    We elaborate on the Chern-Simons (CS) matrix models at large N. The saddle point equations of these matrix models have a curious structure that cannot be seen in ordinary one-matrix models. Thanks to this structure, an infinite number of multi-cut solutions exist in the CS matrix models. In particular, we exactly derive the two-cut solutions at finite 't Hooft coupling in the pure CS matrix model. In the ABJM matrix model, we argue that some of the multi-cut solutions might be interpreted as a condensation of D2-brane instantons.

  11. Determination of In-situ Porosity and Investigation of Diffusion Processes at the Grimsel Test Site, Switzerland.

    NASA Astrophysics Data System (ADS)

    Biggin, C.; Ota, K.; Siittari-Kauppi, M.; Moeri, A.

    2004-12-01

    In the context of a repository for radioactive waste, 'matrix diffusion' describes the process by which solute, flowing in distinct flow paths, penetrates the surrounding rock matrix. Diffusion into the matrix occurs in a connected system of pores or microfractures. Matrix diffusion provides a mechanism for greatly enlarging the area of rock surface in contact with advecting radionuclides, from that of the flow path surfaces (and infills) to a much larger portion of the bulk rock, and increases the global pore volume that can retard radionuclides. In terms of a repository safety assessment, demonstration of a significant depth of diffusion-accessible pore space may result in a significant delay in the calculated release of any escaping radionuclides to the environment and a dramatic reduction in the resulting concentration released into the biosphere. For the last decade, Nagra has investigated in situ matrix diffusion at the Grimsel Test Site (GTS) in the Swiss Alps. The in situ investigations offer two distinct advantages over those performed in the lab, namely: 1. Lab-based determination of porosity and diffusivity can lead to an overestimation of matrix diffusion due to stress relief when the rock is sampled (which would overestimate the retardation in the geosphere); 2. Lab-based analysis usually examines small (cm-scale) samples and therefore cannot account for any matrix heterogeneity over the hundreds or thousands of metres of a typical flow path. The in situ investigations described here began with the Connected Porosity project, wherein a specially developed acrylic resin was injected into the rock matrix to fill the pore space and determine the depth of connected porosity. The resin was polymerised in situ and the entire rock mass removed by overcoring. The results indicated that lab-based porosity measurements may be two to three times higher than those obtained in situ. 
While the depth of accessible matrix from a water-conducting feature assumed in repository performance assessments is generally 1 to 10 cm, the results from the GTS in situ experiment suggested depths of several metres could be more appropriate. More recently, the Pore Space Geometry (PSG) experiment at the GTS has used a C-14 doped acrylic resin, combined with state-of-the-art digital beta autoradiography and fluorescence detection to examine a larger area of rock for determination of porosity and the degree of connected pore space. Analysis is currently ongoing and the key findings will be reported in this paper. Starting at the GTS in 2005, the Long-term Diffusion (LTD) project will investigate such processes over spatial and temporal scales more relevant to a repository than traditional lab-based experiments. In the framework of this experiment, long-term (10 to 50 years) in situ diffusion experiments and resin injection experiments are planned to verify current models for matrix diffusion as a radionuclide retardation process. This paper will discuss the findings of the first two experiments and their significance to repository safety assessments before discussing the strategy for the future in relation to the LTD project.

  12. Stockholder projector analysis: A Hilbert-space partitioning of the molecular one-electron density matrix with orthogonal projectors

    NASA Astrophysics Data System (ADS)

    Vanfleteren, Diederik; Van Neck, Dimitri; Bultinck, Patrick; Ayers, Paul W.; Waroquier, Michel

    2012-01-01

    A previously introduced partitioning of the molecular one-electron density matrix over atoms and bonds [D. Vanfleteren et al., J. Chem. Phys. 133, 231103 (2010)] is investigated in detail. Orthogonal projection operators are used to define atomic subspaces, as in Natural Population Analysis. The orthogonal projection operators are constructed with a recursive scheme. These operators are chemically relevant and obey a stockholder principle, familiar from the Hirshfeld-I partitioning of the electron density. The stockholder principle is extended to density matrices, where the orthogonal projectors are considered to be atomic fractions of the summed contributions. All calculations are performed as matrix manipulations in one-electron Hilbert space. Mathematical proofs and numerical evidence concerning this recursive scheme are provided in the present paper. The advantages associated with the use of these stockholder projection operators are examined with respect to covalent bond orders, bond polarization, and transferability.

  13. Contribution of the Japan International Cooperation Agency health-related projects to health system strengthening.

    PubMed

    Yuasa, Motoyuki; Yamaguchi, Yoshie; Imada, Mihoko

    2013-09-22

    The Japan International Cooperation Agency (JICA) has focused its attention on appraising health development assistance projects and redirecting efforts towards health system strengthening. This study aimed to describe the type of project and targets of interest, and assess the contribution of JICA health-related projects to strengthening health systems worldwide. We collected a web-based Project Design Matrix (PDM) of 105 JICA projects implemented between January 2005 and December 2009. We developed an analytical matrix based on the World Health Organization (WHO) health system framework to examine the PDM data and thereby assess the projects' contributions to health system strengthening. The majority of JICA projects had prioritized workforce development, and improvements in governance and service delivery. Conversely, there was little assistance for finance or medical product development. The vast majority (87.6%) of JICA projects addressed public health issues, for example programs to improve maternal and child health, and the prevention and treatment of infectious diseases such as AIDS, tuberculosis and malaria. Nearly 90% of JICA technical healthcare assistance directly focused on improving governance as the most critical means of accomplishing its goals. Our study confirmed that JICA projects met the goals of bilateral cooperation by developing workforce capacity and governance. Nevertheless, our findings suggest that JICA assistance could be used to support financial aspects of healthcare systems, which is an area of increasing concern. We also showed that the analytical matrix methodology is an effective means of examining the component of health system strengthening to which the activity and output of a project contributes. This may help policy makers and practitioners focus future projects on priority areas.

  14. The agroecological matrix as alternative to the land-sparing/agriculture intensification model.

    PubMed

    Perfecto, Ivette; Vandermeer, John

    2010-03-30

    Among the myriad complications involved in the current food crisis, the relationship between agriculture and the rest of nature is one of the most important yet remains only incompletely analyzed. Particularly in tropical areas, agriculture is frequently seen as the antithesis of the natural world, where the problem is framed as one of minimizing land devoted to agriculture so as to devote more to conservation of biodiversity and other ecosystem services. In particular, the "forest transition model" projects an overly optimistic vision in which increased agricultural intensification (to produce more per hectare) and/or increased rural-to-urban migration (to reduce the rural population that cuts forest for agriculture) leads to a near future of much tropical afforestation and higher agricultural production. Reviewing recent developments in ecological theory (showing the importance of migration between fragments and local extinction rates) coupled with empirical evidence, we argue that there is little to suggest that the forest transition model is useful for tropical areas, at least under current sociopolitical structures. A model that incorporates the agricultural matrix as an integral component of conservation programs is proposed. Furthermore, we suggest that this model will be most successful within a framework of small-scale agroecological production.

  15. From deep TLS validation to ensembles of atomic models built from elemental motions

    DOE PAGES

    Urzhumtsev, Alexandre; Afonine, Pavel V.; Van Benschoten, Andrew H.; ...

    2015-07-28

    The translation–libration–screw model first introduced by Cruickshank, Schomaker and Trueblood describes the concerted motions of atomic groups. Using TLS models can improve the agreement between calculated and experimental diffraction data. Because the T, L and S matrices describe a combination of atomic vibrations and librations, TLS models can also potentially shed light on molecular mechanisms involving correlated motions. However, this use of TLS models in mechanistic studies is hampered by the difficulties in translating the results of refinement into molecular movement or a structural ensemble. To convert the matrices into a constituent molecular movement, the matrix elements must satisfy several conditions. Refining the T, L and S matrix elements as independent parameters without taking these conditions into account may result in matrices that do not represent concerted molecular movements. Here, a mathematical framework and the computational tools to analyze TLS matrices, resulting in either explicit decomposition into descriptions of the underlying motions or a report of broken conditions, are described. The description of valid underlying motions can then be output as a structural ensemble. All methods are implemented as part of the PHENIX project.

  16. Ising tricriticality in the extended Hubbard model with bond dimerization

    NASA Astrophysics Data System (ADS)

    Fehske, Holger; Ejima, Satoshi; Lange, Florian; Essler, Fabian H. L.

    We explore the quantum phase transition between Peierls and charge-density-wave insulating states in the one-dimensional, half-filled, extended Hubbard model with explicit bond dimerization. We show that the critical line of the continuous Ising transition terminates at a tricritical point, belonging to the universality class of the tricritical Ising model with central charge c=7/10. Above this point, the quantum phase transition becomes first order. Employing a numerical matrix-product-state based (infinite) density-matrix renormalization group method, we determine the ground-state phase diagram, the spin and two-particle charge excitation gaps, and the entanglement properties of the model with high precision. Performing a bosonization analysis, we can derive a field description of the transition region in terms of a triple sine-Gordon model. This allows us to derive field theory predictions for the power-law (exponential) decay of the density-density (spin-spin) and bond-order-wave correlation functions, which are found to be in excellent agreement with our numerical results. This work was supported by Deutsche Forschungsgemeinschaft (Germany), SFB 652, project B5, and by the EPSRC under Grant No. EP/N01930X/1 (FHLE).

  17. The Islamic State Battle Plan: Press Release Natural Language Processing

    DTIC Science & Technology

    2016-06-01

    Keywords: natural language processing, text mining, corpus, generalized linear model, cascade, R Shiny, leaflet, data visualization. Abbreviations: TDM (Term Document Matrix), TF (Term Frequency), TF-IDF (Term Frequency-Inverse Document Frequency), tm (the R text mining package). Cited software: Feinerer I, Hornik K (2015), Text Mining Package "tm," Version 0.6-2, https://cran.r-project.org/web/packages/tm/tm.pdf.

  18. Predicting the Productive Capacity of Air Force Aerospace Ground Equipment Personnel Using Aptitude and Experience Measures

    DTIC Science & Technology

    1993-03-01

    Listed contents include a study of the Productive Capacity Project, 454X1 job duty areas, bases visited in the initial study, and a correlation matrix of the other job performance measures. The goal of the thesis is to develop an experimental mathematical model for predicting the job performance of enlisted personnel in AFS 454X1.

  19. Replication of clinical innovations in multiple medical practices.

    PubMed

    Henley, N S; Pearce, J; Phillips, L A; Weir, S

    1998-11-01

    Many clinical innovations had been successfully developed and piloted in individual medical practice units of Kaiser Permanente in North Carolina during 1995 and 1996. Difficulty in replicating these clinical innovations consistently throughout all 21 medical practice units led to development of the interdisciplinary Clinical Innovation Implementation Team, which was formed by using existing resources from various departments across the region. REPLICATION MODEL: Based on a model of transfer of best practices, the implementation team developed a process and tools (master schedule and activity matrix) to quickly replicate successful pilot projects throughout all medical practice units. The process involved the following steps: identifying a practice and delineating its characteristics and measures (source identification); identifying a team to receive the (new) practice; piloting the practice; and standardizing, including the incorporation of learnings. The model includes the following components for each innovation: sending and receiving teams, an innovation coordinator role, an innovation expert role, a location expert role, a master schedule, and a project activity matrix. Communication depended on a partnership among the location experts (local knowledge and credibility), the innovation coordinator (process expertise), and the innovation experts (content expertise). Results after 12 months of working with the 21 medical practice units include integration of diabetes care team services into the practices, training of more than 120 providers in the use of personal computers and an icon-based clinical information system, and integration of a planwide self-care program into the medical practices--all with measurably improved outcomes. The model for sequential replication and the implementation team structure and function should be successful in other organizational settings.

  20. A Curriculum Skills Matrix for Development and Assessment of Undergraduate Biochemistry and Molecular Biology Laboratory Programs

    ERIC Educational Resources Information Center

    Caldwell, Benjamin; Rohlman, Christopher; Benore-Parsons, Marilee

    2004-01-01

    We have designed a skills matrix to be used for developing and assessing undergraduate biochemistry and molecular biology laboratory curricula. We prepared the skills matrix for the Project Kaleidoscope Summer Institute workshop in Snowbird, Utah (July 2001) to help current and developing undergraduate biochemistry and molecular biology program…

  1. MSFC Combustion Devices in 2001

    NASA Technical Reports Server (NTRS)

    Dexter, Carol; Turner, James (Technical Monitor)

    2001-01-01

    The objectives of the project detailed in this viewgraph presentation were to reduce thrust assembly weights to create lighter engines and to increase the cycle life and/or operating temperatures. Information is given on material options (metal matrix composites and polymer matrix composites), ceramic matrix composites subscale liners, lightweight linear chambers, lightweight injector development, liquid/liquid preburner tasks, and vortex chamber tasks.

  2. A new fracture mechanics model for multiple matrix cracks of SiC fiber reinforced brittle-matrix composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okabe, T.; Takeda, N.; Komotori, J.

    1999-11-26

    A new model is proposed for multiple matrix cracking in order to take into account the role of matrix-rich regions in the cross section in initiating crack growth. The model is used to predict the matrix cracking stress and the total number of matrix cracks. The model converts the matrix-rich regions into equivalent penny shape crack sizes and predicts the matrix cracking stress with a fracture mechanics crack-bridging model. The estimated distribution of matrix cracking stresses is used as statistical input to predict the number of matrix cracks. The results show good agreement with the experimental results by replica observations. Therefore, it is found that the matrix cracking behavior mainly depends on the distribution of matrix-rich regions in the composite.

  3. Contribution of the Japan International Cooperation Agency health-related projects to health system strengthening

    PubMed Central

    2013-01-01

    Background The Japan International Cooperation Agency (JICA) has focused its attention on appraising health development assistance projects and redirecting efforts towards health system strengthening. This study aimed to describe the type of project and targets of interest, and assess the contribution of JICA health-related projects to strengthening health systems worldwide. Methods We collected a web-based Project Design Matrix (PDM) of 105 JICA projects implemented between January 2005 and December 2009. We developed an analytical matrix based on the World Health Organization (WHO) health system framework to examine the PDM data and thereby assess the projects’ contributions to health system strengthening. Results The majority of JICA projects had prioritized workforce development, and improvements in governance and service delivery. Conversely, there was little assistance for finance or medical product development. The vast majority (87.6%) of JICA projects addressed public health issues, for example programs to improve maternal and child health, and the prevention and treatment of infectious diseases such as AIDS, tuberculosis and malaria. Nearly 90% of JICA technical healthcare assistance directly focused on improving governance as the most critical means of accomplishing its goals. Conclusions Our study confirmed that JICA projects met the goals of bilateral cooperation by developing workforce capacity and governance. Nevertheless, our findings suggest that JICA assistance could be used to support financial aspects of healthcare systems, which is an area of increasing concern. We also showed that the analytical matrix methodology is an effective means of examining the component of health system strengthening to which the activity and output of a project contributes. This may help policy makers and practitioners focus future projects on priority areas. PMID:24053583

  4. Striatal Cholinergic Interneurons Modulate Spike-Timing in Striosomes and Matrix by an Amphetamine-Sensitive Mechanism

    PubMed Central

    Crittenden, Jill R.; Lacey, Carolyn J.; Weng, Feng-Ju; Garrison, Catherine E.; Gibson, Daniel J.; Lin, Yingxi; Graybiel, Ann M.

    2017-01-01

    The striatum is key for action-selection and the motivation to move. Dopamine and acetylcholine release sites are enriched in the striatum and are cross-regulated, possibly to achieve optimal behavior. Drugs of abuse, which promote abnormally high dopamine release, disrupt normal action-selection and drive restricted, repetitive behaviors (stereotypies). Stereotypies occur in a variety of disorders including obsessive-compulsive disorder, autism, schizophrenia and Huntington's disease, as well as in addictive states. The severity of drug-induced stereotypy is correlated with induction of c-Fos expression in striosomes, a striatal compartment that is related to the limbic system and that directly projects to dopamine-producing neurons of the substantia nigra. These characteristics of striosomes contrast with the properties of the extra-striosomal matrix, which has strong sensorimotor and associative circuit inputs and outputs. Disruption of acetylcholine signaling in the striatum blocks the striosome-predominant c-Fos expression pattern induced by drugs of abuse and alters drug-induced stereotypy. The activity of striatal cholinergic interneurons is associated with behaviors related to sensory cues, and cortical inputs to striosomes can bias action-selection in the face of conflicting cues. The neurons and neuropil of striosomes and matrix neurons have observably separate distributions, both at the input level in the striatum and at the output level in the substantia nigra. Notably, cholinergic axons readily cross compartment borders, providing a potential route for local cross-compartment communication to maintain a balance between striosomal and matrix activity. We show here, by slice electrophysiology in transgenic mice, that repetitive evoked firing patterns in striosomal and matrix striatal projection neurons (SPNs) are interrupted by optogenetic activation of cholinergic interneurons either by the addition or the deletion of spikes. 
We demonstrate that this cholinergic modulation of projection neurons is blocked in brain slices taken from mice exposed to amphetamine and engaged in amphetamine-induced stereotypy, and lacking responsiveness to salient cues. Our findings support a model whereby activity in striosomes is normally under strong regulation by cholinergic interneurons, favoring behavioral flexibility, but that in animals with drug-induced stereotypy, this cholinergic signaling breaks down, resulting in differential modulation of striosomal activity and an inability to bias action-selection according to relevant sensory cues. PMID:28377698

  5. MOM3D method of moments code theory manual

    NASA Technical Reports Server (NTRS)

    Shaeffer, John F.

    1992-01-01

    MOM3D is a FORTRAN algorithm that solves Maxwell's equations as expressed via the electric field integral equation for the electromagnetic response of open or closed three dimensional surfaces modeled with triangle patches. Two joined triangles (couples) form the vector current unknowns for the surface. Boundary conditions are for perfectly conducting or resistive surfaces. The impedance matrix represents the fundamental electromagnetic interaction of the body with itself. A variety of electromagnetic analysis options are possible once the impedance matrix is computed including backscatter radar cross section (RCS), bistatic RCS, antenna pattern prediction for user specified body voltage excitation ports, RCS image projection showing RCS scattering center locations, surface currents excited on the body as induced by specified plane wave excitation, and near field computation for the electric field on or near the body.

  6. Projected changes in precipitation intensity and frequency over complex topography: a multi-model perspective

    NASA Astrophysics Data System (ADS)

    Fischer, Andreas; Keller, Denise; Liniger, Mark; Rajczak, Jan; Schär, Christoph; Appenzeller, Christof

    2014-05-01

    Fundamental changes in the hydrological cycle are expected in a future warmer climate. This is of particular relevance for the Alpine region, as a source and reservoir of several major rivers in Europe and being prone to extreme events such as floodings. For this region, climate change assessments based on the ENSEMBLES regional climate models (RCMs) project a significant decrease in summer mean precipitation under the A1B emission scenario by the mid-to-end of this century, while winter mean precipitation is expected to slightly rise. From an impact perspective, projected changes in seasonal means, however, are often insufficient to adequately address the multifaceted challenges of climate change adaptation. In this study, we revisit the full matrix of the ENSEMBLES RCM projections regarding changes in frequency and intensity, precipitation-type (convective versus stratiform) and temporal structure (wet/dry spells and transition probabilities) over Switzerland and surroundings. As proxies for raintype changes, we rely on the model parameterized convective and large-scale precipitation components. Part of the analysis involves a Bayesian multi-model combination algorithm to infer changes from the multi-model ensemble. The analysis suggests a summer drying that evolves altitude-specific: over low-land regions it is associated with wet-day frequency decreases of convective and large-scale precipitation, while over elevated regions it is primarily associated with a decline in large-scale precipitation only. As a consequence, almost all the models project an increase in the convective fraction at elevated Alpine altitudes. The decrease in the number of wet days during summer is accompanied by decreases (increases) in multi-day wet (dry) spells. This shift in multi-day episodes also lowers the likelihood of short dry spell occurrence in all of the models. 
For spring and autumn the combined multi-model projections indicate higher mean precipitation intensity north of the Alps, while a similar tendency is expected for the winter season over most of Switzerland.

  7. An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base

    NASA Technical Reports Server (NTRS)

    Ragusa, J. M.

    1973-01-01

    The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.

  8. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    PubMed

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N

    2012-11-13

    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was tackled historically via the diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of hybrid GPU/central processing unit (CPU) and full-GPU implementations of the SP2 algorithm exceeds that of a CPU-only implementation of the SP2 algorithm and of traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 × 2000. Padding schemes for arrays allocated in the GPU memory that optimize the performance of the CUBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. The analysis of the relative performance of the hybrid CPU/GPU and full-GPU implementations indicates that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.
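    The trace-steered recursion at the heart of SP2 can be sketched in a few lines of NumPy. This is an illustrative toy, not the GPU implementation described above: the branch X ← X² (which lowers the trace) or X ← 2X − X² (which raises it) is chosen to drive trace(X) toward the electron count, and the spectral bounds are obtained here by diagonalization purely for brevity, where production codes would use cheap Gershgorin estimates. The function name and tolerances are hypothetical.

```python
import numpy as np

def sp2_density_matrix(H, n_occ, tol=1e-9, max_iter=100):
    """Purify a Hamiltonian into the density matrix with the SP2 recursion."""
    # Normalize H so occupied eigenvalues map near 1 and virtual ones near 0.
    # (Spectral bounds via diagonalization only for brevity in this sketch.)
    eps = np.linalg.eigvalsh(H)
    e_min, e_max = eps[0], eps[-1]
    X = (e_max * np.eye(len(H)) - H) / (e_max - e_min)
    for _ in range(max_iter):
        X2 = X @ X
        # Pick the branch that steers trace(X) toward the electron count.
        X_new = X2 if np.trace(X) > n_occ else 2 * X - X2
        if np.linalg.norm(X_new - X) < tol:
            X = X_new
            break
        X = X_new
    return X
```

    For a gapped Hamiltonian the iterates converge (quadratically near the fixed point) to the projector onto the occupied subspace, so the result is idempotent, has trace equal to the occupation number, and commutes with H.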

  9. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGES

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; ...

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  10. Evaluation of light extraction efficiency for the light-emitting diodes based on the transfer matrix formalism and ray-tracing method

    NASA Astrophysics Data System (ADS)

    Pingbo, An; Li, Wang; Hongxi, Lu; Zhiguo, Yu; Lei, Liu; Xin, Xi; Lixia, Zhao; Junxi, Wang; Jinmin, Li

    2016-06-01

    The internal quantum efficiency (IQE) of light-emitting diodes can be calculated from the ratio of the external quantum efficiency (EQE) and the light extraction efficiency (LEE). The EQE can be measured experimentally, but the LEE is difficult to calculate due to the complicated LED structures. In this work, a model was established to calculate the LEE by combining the transfer matrix formalism and an in-plane ray tracing method. With the calculated LEE, the IQE was determined and was in good agreement with that obtained by the ABC model and the temperature-dependent photoluminescence method. The proposed method makes the determination of the IQE more practical and convenient. Project supported by the National Natural Science Foundation of China (Nos. 11574306, 61334009), the China International Science and Technology Cooperation Program (No. 2014DFG62280), and the National High Technology Program of China (No. 2015AA03A101).
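    The transfer-matrix side of such a calculation can be sketched for the simplest case, a single thin film at normal incidence: multiply interface and propagation matrices and read the amplitude reflection coefficient off the total matrix. This is a generic textbook toy, not the authors' combined TMM/ray-tracing model for LED stacks; the function name and arguments are hypothetical.

```python
import numpy as np

def layer_reflectance(n0, n1, n2, d, lam):
    """Normal-incidence reflectance of a single film via 2x2 transfer matrices."""
    def D(n):
        # Interface ("dynamical") matrix of a medium with refractive index n.
        return np.array([[1.0, 1.0], [n, -n]], dtype=complex)
    phi = 2.0 * np.pi * n1 * d / lam                       # phase thickness
    P = np.diag([np.exp(-1j * phi), np.exp(1j * phi)])     # propagation matrix
    M = np.linalg.inv(D(n0)) @ D(n1) @ P @ np.linalg.inv(D(n1)) @ D(n2)
    r = M[1, 0] / M[0, 0]                                  # amplitude reflection
    return float(abs(r) ** 2)
```

    Two standard sanity checks: a zero-thickness film reduces to the bare Fresnel interface between the outer media, and a quarter-wave film with n1 = sqrt(n0*n2) acts as a perfect antireflection coating.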

  11. Foreign Object Damage Prediction in Ceramic Matrix Composites

    DTIC Science & Technology

    2011-02-28

    ...has been utilized successfully for damage prediction in many problems. In a peridynamic simulation, the distances between points in the... this model as suits their needs, with N&R Engineering and UofA providing tool support/development as required. A marketing study will be conducted.

  12. On Generalizations of Cochran’s Theorem and Projection Matrices.

    DTIC Science & Technology

    1980-08-01

    Reference fragments recovered from this report include: "...Definiteness of the Estimated Dispersion Matrix in a Multivariate Linear Model," F. Pukelsheim and George P. H. Styan, May 1978 (technical report); "...with applications to the analysis of covariance," Proc. Cambridge Philos. Soc., 30, pp. 178-191; Graybill, F. A. and Marsaglia, G. (1957), "Idempotent matrices and quadratic forms in the general linear hypothesis," Ann. Math. Statist., 28, pp. 678-686; Greub, W. (1975), Linear Algebra (4th ed.).

  13. Sex in an uncertain world: environmental stochasticity helps restore competitive balance between sexually and asexually reproducing populations.

    PubMed

    Park, A W; Vandekerkhove, J; Michalakis, Y

    2014-08-01

    Like many organisms, individuals of the freshwater ostracod species Eucypris virens exhibit either obligate sexual or asexual reproductive modes. Both types of individual routinely co-occur, including in the same temporary freshwater pond (their natural habitat, in which they undergo seasonal diapause). Given the well-known two-fold cost of sex, this raises the question of how sexually reproducing individuals are able to coexist with their asexual counterparts in spite of such overwhelming costs. Environmental stochasticity in the form of 'false dawn' inundations (where the first hydration is ephemeral and causes loss of early hatching individuals) may provide an advantage to the sexual subpopulation, which shows greater variation in hatching times following inundation. We explore the potential role of environmental stochasticity in this system using life-history data analysis, climate data, and matrix projection models. In the absence of environmental stochasticity, the population growth rate is significantly lower in sexual subpopulations. Climate data reveal that 'false dawn' inundations are common. Using matrix projection modelling with and without environmental stochasticity, we demonstrate that this phenomenon can restore appreciable balance to the system, in terms of population growth rates. This provides support for the role of environmental stochasticity in helping to explain the maintenance of sex and the occurrence of geographical parthenogenesis. © 2014 The Authors. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.
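    A matrix projection model with environmental stochasticity of this general kind can be sketched in a few lines: draw a projection matrix at random each season (e.g. a 'false dawn' year versus a normal year) and accumulate the average log growth of total population size. This is a minimal sketch with hypothetical names, not the authors' parameterization of E. virens life history.

```python
import numpy as np

def stochastic_growth_rate(matrices, probs, n0, T=4000, seed=0):
    """Estimate log lambda_s by projecting with a randomly drawn matrix each step."""
    rng = np.random.default_rng(seed)
    n = np.asarray(n0, dtype=float)
    log_growth = 0.0
    for _ in range(T):
        A = matrices[rng.choice(len(matrices), p=probs)]
        n = A @ n
        total = n.sum()
        log_growth += np.log(total)
        n /= total  # renormalize so population numbers stay bounded
    return log_growth / T
```

    As a sanity check, with a single matrix the estimate reduces to the log of its dominant eigenvalue, i.e. the deterministic growth rate; with two or more matrices it gives the stochastic growth rate under the chosen environment probabilities.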

  14. Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*

    PubMed Central

    Katsevich, E.; Katsevich, A.; Singer, A.

    2015-01-01

    In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132
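    The flavor of covariance estimation from partial observations can be conveyed by a toy version of the matrix-completion connection mentioned in the abstract: suppose each sample reveals only a random subset of its coordinates, and average each covariance entry over exactly the samples where both coordinates were observed. This sketch assumes zero-mean data and coordinate (rather than tomographic) projections, so it is not the paper's projection covariance transform; the function name is hypothetical.

```python
import numpy as np

def covariance_from_masked_samples(X, mask):
    """Entrywise covariance estimate from partially observed zero-mean samples.

    X: (N, p) data array (unobserved entries may hold arbitrary values)
    mask: (N, p) boolean array, True where a coordinate was observed
    """
    N, p = X.shape
    S = np.zeros((p, p))
    for j in range(p):
        for k in range(p):
            # Average the product over samples observing both coordinates.
            both = mask[:, j] & mask[:, k]
            S[j, k] = np.mean(X[both, j] * X[both, k])
    return S
```

    Each entry is an unbiased estimate of the corresponding covariance entry, which mirrors how the full problem reduces to inverting a known linear operator acting on the covariance matrix.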

  15. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835
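    The alternating-projection idea underlying PAPA can be illustrated in a far simpler setting than the proximity-operator formulation above: von Neumann-style alternating projections onto two convex sets, here nonnegativity (as for ECT images) and a linear "total counts" constraint. This is a sketch of the generic technique only, not the authors' preconditioned algorithm; names and constants are hypothetical.

```python
import numpy as np

def alternating_projections(x0, a, b, iters=500):
    """POCS: alternate exact projections onto
    C1 = {x : x >= 0}      (nonnegativity)
    C2 = {x : a @ x = b}   (a linear constraint, e.g. total counts)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = np.clip(x, 0.0, None)                # project onto C1
        x = x + ((b - a @ x) / (a @ a)) * a      # project onto C2
    return x
```

    When the two sets intersect, the iterates converge to a point in the intersection; the full MAP-TV problem replaces these exact projections with proximity operators of the TV-norm and constraint functions.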

  16. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in affine projection algorithm (APA) by keeping the condition number of an input data matrix small. We present an improved method, and a complexity reduction algorithm for the APA with the data-selective method. Experimental results show that the proposed algorithm has lower misalignment and a lower condition number for an input data matrix than both the conventional APA and the APA with the previous data-selective method.
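    The data-selection rule can be sketched concretely. The snippet below is illustrative only (the window size, threshold, and acceptance rule are assumptions, not the authors' exact method): a candidate input vector is admitted into the affine-projection data matrix only if the resulting matrix keeps a small condition number:

```python
import numpy as np

# Illustrative condition-number-based data selection for an APA input matrix.
# projection_order, filter_len and cond_threshold are hypothetical parameters.
rng = np.random.default_rng(0)
projection_order, filter_len, cond_threshold = 4, 8, 50.0

data_matrix = rng.normal(size=(projection_order, filter_len))  # rows = recent inputs
accepted, rejected = 0, 0
for _ in range(200):
    candidate = rng.normal(size=filter_len)
    trial = np.vstack([candidate, data_matrix[:-1]])  # slide the data window
    if np.linalg.cond(trial) < cond_threshold:        # keep only well-conditioned data
        data_matrix = trial
        accepted += 1
    else:
        rejected += 1
```

    Rejecting ill-conditioned updates is what keeps the misalignment low in the abstract's experiments; the complexity-reduction part of the paper concerns avoiding the full condition-number computation, which this sketch does not attempt.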

  17. Modeling the role of quorum sensing in interspecies competition in biofilms

    NASA Astrophysics Data System (ADS)

    Narla, Avaneesh V.; Wingreen, Ned S.; Borenstein, David B.

    Bacteria grow on surfaces in complex immobile communities known as biofilms, composed of cells embedded in an extracellular matrix. Within biofilms, bacteria often communicate, cooperate, and compete within their own species and with other species using Quorum Sensing (QS). QS refers to the process by which bacteria produce, secrete, and subsequently detect small molecules called autoinducers as a way to assess the local population density of their species, or of other species. QS is known to regulate the production of extracellular matrix. We investigated the possible benefit of QS in regulating matrix production to best gain access to a nutrient that diffuses from a source positioned away from the surface on which the biofilm grows. We employed Agent-Based Modeling (ABM), a form of simulation that allows cells to modify their behavior based on local inputs, e.g. nutrient and QS concentrations. We first determined the optimal fixed strategies (that do not use QS) for pairwise competitions, and then demonstrated that simple QS-based strategies can be superior to any fixed strategy. In nature, species can compete by sensing and/or interfering with each other's QS signals, and we explore approaches for targeting specific species via QS-interference. A.V.N. and N.S.W. contributed equally to this project.

  18. Juxtaposed Integration Matrix: A Crisis Communication Tool

    DTIC Science & Technology

    2005-05-19

    [Report documentation page; only fragments survive extraction.] Juxtaposed Integration Matrix: A Crisis Communication Tool. Acknowledgment: "...for their patience and understanding when Daddy had to do schoolwork. The views expressed in this article are those of the author and do not reflect..." Appendices: A. Juxtaposed Integration Matrix Training Guide; B. Questionnaire Worksheet.

  19. Effect of spatial noise of medical grade Liquid Crystal Displays (LCD) on the detection of micro-calcification

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Fan, Jiahua; Dallas, William J.; Krupinski, Elizabeth A.; Johnson, Jeffrey

    2009-08-01

    This presentation describes work in progress resulting from an NIH SBIR Phase 1 project that addresses the widespread concern about the large number of breast cancers and cancer victims [1,2]. The primary goal of the project is to increase the detection rate of microcalcifications by decreasing the spatial noise of the LCDs used to display the mammograms [3,4]. Noise reduction is to be accomplished with the aid of a high-performance CCD camera and subsequent application of local-mean equalization and error diffusion [5,6]. A second goal of the project is the actual detection of breast cancer. In contrast to the usual approach to mammography, where the mammograms typically have a pixel matrix of approximately 1900 x 2300 pixels (otherwise known as FFDM, or Full-Field Digital Mammograms), we will use only sections of mammograms with a pixel matrix of 256 x 256 pixels. This is because, at this time, reduction of spatial noise on an LCD can only be done on relatively small areas such as 256 x 256 pixels. Judging the efficacy of breast cancer detection will be done using two methods: one is a conventional ROC study [7]; the other is a vision model developed over several years, starting at the Sarnoff Research Center and continuing at Siemens Corporate Research in Princeton, NJ [8].

  20. Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions

    NASA Astrophysics Data System (ADS)

    Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.

    2016-09-01

    Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
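    The expensive kernel the abstract refers to, a product of a matrix function with a vector, is usually approximated by Krylov projection. The sketch below shows that standard approach (not the paper's KSS method): build an Arnoldi basis V_m for the Krylov subspace, then approximate exp(A)v by the norm of v times V_m expm(H_m) e_1. The test problem, a 1D Laplacian, is an illustrative assumption:

```python
import numpy as np
from scipy.linalg import expm

# Standard Krylov projection for exp(A)v (illustrative; the paper replaces this
# with KSS methods to keep the subspace dimension bounded).
def krylov_expm(A, v, m):
    n = v.size
    V = np.zeros((n, m))
    H = np.zeros((m, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    w = A @ V[:, m - 1]                  # last Hessenberg column
    for i in range(m):
        H[i, m - 1] = V[:, i] @ w
    return beta * V @ expm(H)[:, 0]      # exp(A)v ~ ||v|| V_m expm(H_m) e_1

# Stiff 1D Laplacian test problem
n = 100
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
v = np.ones(n)
approx = krylov_expm(A, v, 30)
exact = expm(A) @ v
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

    The cost driver the abstract identifies is exactly m, the number of projection steps; KSS methods aim to keep the accuracy of this approximation while bounding m independently of the grid size.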

  1. Infinite projected entangled-pair state algorithm for ruby and triangle-honeycomb lattices

    NASA Astrophysics Data System (ADS)

    Jahromi, Saeed S.; Orús, Román; Kargarian, Mehdi; Langari, Abdollah

    2018-03-01

    The infinite projected entangled-pair state (iPEPS) algorithm is one of the most efficient techniques for studying the ground-state properties of two-dimensional quantum lattice Hamiltonians in the thermodynamic limit. Here, we show how the algorithm can be adapted to explore nearest-neighbor local Hamiltonians on the ruby and triangle-honeycomb lattices, using the corner transfer matrix (CTM) renormalization group for 2D tensor network contraction. Additionally, we show how the CTM method can be used to calculate the ground-state fidelity per lattice site and the boundary density operator and entanglement entropy (EE) on an infinite cylinder. As a benchmark, we apply the iPEPS method to the ruby model with anisotropic interactions and explore the ground-state properties of the system. We further extract the phase diagram of the model in different regimes of the couplings by measuring two-point correlators, ground-state fidelity, and EE on an infinite cylinder. Our phase diagram is in agreement with previous studies of the model by exact diagonalization.

  2. 40 projects in stem cell research, tissue engineering, tolerance induction and more (NRP46 "Implants and Transplants" 1999-2006).

    PubMed

    Thiel, Gilbert T

    2007-03-02

    Forty projects on stem cell research, tissue and matrix engineering, tolerance induction and other topics were supported by the Swiss National Research Program NRP46 (Implants, Transplants) from 1999-2006. The last project is devoted to developing stem cell lines from frozen surplus human embryos in Switzerland, which would otherwise have to be destroyed at the end of 2008. It is entitled JESP (Joint Embryonic Stem Cell Project) since it involves two Swiss universities, in vitro fertilisation centres and experts from the humanities (ethics and law) to handle this difficult problem. Over the years, stem cell transplantation and tissue/matrix engineering have drawn closer to each other and even developed synergies. Progress in stem cell research has been slower than anticipated, but a multitude of technical skills (phenotyping, isolation, transfection, induction of differentiation, labelling, expanding cells in culture, etc) were acquired. Understanding of stem cell biology has grown. The 7 projects on tissue and matrix engineering progressed closer to clinical applicability than the stem cell projects. Of 3 projects to implant encapsulated cells for the production of hormones (insulin, erythropoietin), one is close to clinical pilot studies with an advanced encapsulated device. Five projects were devoted to mechanisms of tolerance or the role of metzincins in chronic allograft nephropathy. Four studies in psychology and communication in transplantation were funded, as were 5 projects in ethics, law and the history of transplantation in Switzerland. The goal of NRP46 was to provide an impulse for research in these new fields and bring together experts from the humanities, biology and medicine to cope more effectively with the problems of regenerative medicine in the future. The majority of goals were attained, mainly in the basics.

  3. Final Project Report: Release of aged contaminants from weathered sediments: Effects of sorbate speciation on scaling of reactive transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jon Chorover, University of Arizona; Peggy O'Day, University of California, Merced; Karl Mueller, Penn State University

    2012-10-01

    Hanford sediments impacted by hyperalkaline high level radioactive waste have undergone incongruent silicate mineral weathering concurrent with contaminant uptake. In this project, we studied the impact of background pore water (BPW) on strontium, cesium and iodine desorption and transport in Hanford sediments that were experimentally weathered by contact with simulated hyperalkaline tank waste leachate (STWL) solutions. Using those lab-weathered Hanford sediments (HS) and model precipitates formed during nucleation from homogeneous STWL solutions (HN), we (i) provided detailed characterization of reaction products over a matrix of field-relevant gradients in contaminant concentration, PCO2, and reaction time; (ii) improved molecular-scale understanding of how sorbate speciation controls contaminant desorption from weathered sediments upon removal of caustic sources; and (iii) developed a mechanistic, predictive model of meso- to field-scale contaminant reactive transport under these conditions.

  4. Estimation of the chemical rank for the three-way data: a principal norm vector orthogonal projection approach.

    PubMed

    Hong-Ping, Xie; Jian-Hui, Jiang; Guo-Li, Shen; Ru-Qin, Yu

    2002-01-01

    A new approach for estimating the chemical rank of a three-way array, called the principal norm vector orthogonal projection method, has been proposed. The method is based on the fact that the chemical rank of the three-way data array is equal to the rank of the column space of the unfolded matrix along the spectral or chromatographic mode. The vector with maximum Frobenius norm is selected among all the column vectors of the unfolded matrix as the principal norm vector (PNV). A transformation is applied to the column vectors using an orthogonal projection matrix formulated from the PNV. The mathematical rank of the column space of the residual matrix thus obtained decreases by one. Such orthogonal projection is carried out repeatedly until the contribution of the chemical species to the signal data is entirely removed. At that point the decrease of the mathematical rank equals the chemical rank, and the remaining residual subspace is due entirely to the noise contribution. The chemical rank can then be estimated easily by using an F-test. The method has been applied successfully to a simulated HPLC-DAD-type three-way data array and to two real excitation-emission fluorescence data sets of amino acid mixtures and dye mixtures. Simulations with relatively high added noise show that the method is robust against heteroscedastic noise. The proposed algorithm is simple and easy to program, with a light computational burden.
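    The deflation loop at the heart of the method is easy to sketch. The snippet below is a simplified illustration: the data are synthetic, and a plain residual-norm threshold stands in for the paper's F-test stopping rule:

```python
import numpy as np

# Simplified PNV-deflation sketch: repeatedly project out the max-norm column
# direction; a residual-norm threshold replaces the paper's F-test (assumption).
rng = np.random.default_rng(0)
# Unfolded three-way array with 3 chemical species -> column space of rank 3
unfolded = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 60))
unfolded += 1e-6 * rng.normal(size=unfolded.shape)     # measurement noise

X = unfolded.copy()
initial_norm = np.linalg.norm(X)
estimated_rank = 0
while np.linalg.norm(X) > 1e-4 * initial_norm:
    norms = np.linalg.norm(X, axis=0)
    pnv = X[:, np.argmax(norms)]                       # principal norm vector
    pnv = pnv / np.linalg.norm(pnv)
    X = X - np.outer(pnv, pnv @ X)                     # project PNV out of all columns
    estimated_rank += 1
```

    Each projection removes exactly one dimension of the column space, so the loop count recovers the chemical rank once only noise remains.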

  5. Spin-orbital quantum liquid on the honeycomb lattice

    NASA Astrophysics Data System (ADS)

    Corboz, Philippe

    2013-03-01

    The symmetric Kugel-Khomskii model can be seen as a minimal model describing the interactions between spin and orbital degrees of freedom in transition-metal oxides with orbital degeneracy, and it is equivalent to the SU(4) Heisenberg model of four-color fermionic atoms. We present simulation results for this model on various two-dimensional lattices obtained with infinite projected entangled-pair states (iPEPS), an efficient variational tensor-network ansatz for two-dimensional wave functions in the thermodynamic limit. This approach can be seen as a two-dimensional generalization of matrix product states, the underlying ansatz of the density matrix renormalization group method. We find a rich variety of exotic phases: while on the square and checkerboard lattices the ground state exhibits dimer-Néel order and plaquette order, respectively, quantum fluctuations on the honeycomb lattice destroy any order, giving rise to a spin-orbital liquid. Our results are supported by flavor-wave theory and exact diagonalization. Furthermore, the properties of the spin-orbital liquid state on the honeycomb lattice are accurately accounted for by a projected variational wave function based on the pi-flux state of fermions on the honeycomb lattice at 1/4-filling. In that state, correlations are algebraic because of the presence of a Dirac point at the Fermi level, suggesting that the ground state is an algebraic spin-orbital liquid. This model provides a good starting point to understand the recently discovered spin-orbital liquid behavior of Ba3CuSb2O9. The present results also suggest choosing optical lattices with honeycomb geometry in the search for quantum liquids in ultra-cold four-color fermionic atoms. We acknowledge the financial support from the Swiss National Science Foundation.

  6. A non-stochastic iterative computational method to model light propagation in turbid media

    NASA Astrophysics Data System (ADS)

    McIntyre, Thomas J.; Zemp, Roger J.

    2015-03-01

    Monte Carlo models are widely used to model light transport in turbid media, however their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, which is called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were done and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.
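    The computational shortcut the abstract mentions, replacing Toeplitz-structured matrix products with convolutions, can be shown for the scalar (non-block) case. The sketch below (an illustration of the general technique, not the NSPIRE code) embeds an n x n Toeplitz matrix in a circulant of size 2n-1 and multiplies through the FFT:

```python
import numpy as np
from scipy.linalg import toeplitz

# FFT shortcut for Toeplitz matrix-vector products; block-Toeplitz operators
# factor the same way block by block.
rng = np.random.default_rng(0)
n = 64
col = rng.normal(size=n)                                  # first column
row = np.concatenate(([col[0]], rng.normal(size=n - 1)))  # first row
x = rng.normal(size=n)

direct = toeplitz(col, row) @ x   # O(n^2) dense product

# Circulant embedding: first column is [col, reversed tail of row]
circ_col = np.concatenate([col, row[1:][::-1]])
x_padded = np.concatenate([x, np.zeros(n - 1)])
fast = np.fft.ifft(np.fft.fft(circ_col) * np.fft.fft(x_padded)).real[:n]  # O(n log n)
```

    The two products agree to machine precision, which is why recognizing the scattering and projection matrices as block Toeplitz yields the convolution-operator simplification claimed in the abstract.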

  7. Some Methods for Evaluating Program Implementation.

    ERIC Educational Resources Information Center

    Hardy, Roy A.

    An approach to evaluating program implementation is described. This approach includes the development of a project description which includes a structure matrix, sampling from the structure matrix, and preparing an implementation evaluation plan. The implementation evaluation plan should include: (1) verification of implementation of planned…

  8. Establishing non-Abelian topological order in Gutzwiller-projected Chern insulators via entanglement entropy and modular S-matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Vishwanath, Ashvin

    2013-04-01

    We use entanglement entropy signatures to establish non-Abelian topological order in projected Chern-insulator wave functions. The simplest instance is obtained by Gutzwiller projecting a filled band with Chern number C=2, whose wave function may also be viewed as the square of the Slater determinant of a band insulator. We demonstrate that this wave function is captured by the SU(2)2 Chern-Simons theory coupled to fermions. This is established most persuasively by calculating the modular S-matrix from the candidate ground-state wave functions, following a recent entanglement-entropy-based approach. This directly demonstrates the peculiar non-Abelian braiding statistics of Majorana fermion quasiparticles in this state. We also provide microscopic evidence for the field theoretic generalization, that the Nth power of a Chern number C Slater determinant realizes the topological order of the SU(N)C Chern-Simons theory coupled to fermions, by studying the SU(2)3 (Read-Rezayi-type state) and the SU(3)2 wave functions. An advantage of our projected Chern-insulator wave functions is the relative ease with which physical properties, such as entanglement entropy and modular S-matrix, can be numerically calculated using Monte Carlo techniques.

  9. Environmental impact assessment of the industrial estate development plan with the geographical information system and matrix methods.

    PubMed

    Ghasemian, Mohammad; Poursafa, Parinaz; Amin, Mohammad Mehdi; Ziarati, Mohammad; Ghoddousi, Hamid; Momeni, Seyyed Alireza; Rezaei, Amir Hossein

    2012-01-01

    The purpose of this study is the environmental impact assessment of industrial estate development planning. This cross-sectional study was conducted in 2010 in Isfahan province, Iran. GIS and matrix methods were applied. Data analysis was done to identify the current situation of the region, zone vulnerable areas, and scope the region. Quantitative evaluation was done using the matrix of Wooten and Rau. The net score for the impact of industrial unit operation on air quality of the project area was (-3). Given the transport of industrial estate pollutants, residential areas located within a radius of 2500 meters of the city were expected to be affected most. The net score for the impact of construction of industrial units on plant species of the project area was (-2). Environmentally protected areas were not affected by the air and soil pollutants because of their distance from the industrial estate. The positive effects of the project activities outweigh the drawbacks, and the sum of scores allocated to the project activities on environmental factors was (+37). Overall, the project does not have detrimental effects on the environment or residential neighborhoods. EIA should be considered as an anticipatory, participatory environmental management tool before determining a plan application.

  10. Combining dispersal, landscape connectivity and habitat suitability to assess climate-induced changes in the distribution of Cunningham’s skink, Egernia cunninghami

    PubMed Central

    Stow, Adam J.; Baumgartner, John B.; Beaumont, Linda J.

    2017-01-01

    The ability of species to track their climate niche is dependent on their dispersal potential and the connectivity of the landscape matrix linking current and future suitable habitat. However, studies modeling climate-driven range shifts rarely address the movement of species across landscapes realistically, often assuming “unlimited” or “no” dispersal. Here, we incorporate dispersal rate and landscape connectivity with a species distribution model (Maxent) to assess the extent to which the Cunningham’s skink (Egernia cunninghami) may be capable of tracking spatial shifts in suitable habitat as climate changes. Our model was projected onto four contrasting, but equally plausible, scenarios describing futures that are (relative to now) hot/wet, warm/dry, hot/with similar precipitation and warm/wet, at six time horizons with decadal intervals (2020–2070) and at two spatial resolutions: 1 km and 250 m. The size of suitable habitat was projected to decline 23–63% at 1 km and 26–64% at 250 m, by 2070. Combining Maxent output with the dispersal rate of the species and connectivity of the intervening landscape matrix showed that most current populations in regions projected to become unsuitable in the medium to long term, will be unable to shift the distance necessary to reach suitable habitat. In particular, numerous populations currently inhabiting the trailing edge of the species’ range are highly unlikely to be able to disperse fast enough to track climate change. Unless these populations are capable of adaptation they are likely to be extirpated. We note, however, that the core of the species distribution remains suitable across the broad spectrum of climate scenarios considered. Our findings highlight challenges faced by philopatric species and the importance of adaptation for the persistence of peripheral populations under climate change. PMID:28873398

  11. Modeling the Complex Impacts of Timber Harvests to Find Optimal Management Regimes for Amazon Tidal Floodplain Forests

    PubMed Central

    Fortini, Lucas B.; Cropper, Wendell P.; Zarin, Daniel J.

    2015-01-01

    At the Amazon estuary, the oldest logging frontier in the Amazon, no studies have comprehensively explored the potential long-term population and yield consequences of multiple timber harvests over time. Matrix population modeling is one way to simulate long-term impacts of tree harvests, but this approach has often ignored common impacts of tree harvests including incidental damage, changes in post-harvest demography, shifts in the distribution of merchantable trees, and shifts in stand composition. We designed a matrix-based forest management model that incorporates these harvest-related impacts so resulting simulations reflect forest stand dynamics under repeated timber harvests as well as the realities of local smallholder timber management systems. Using a wide range of values for management criteria (e.g., length of cutting cycle, minimum cut diameter), we projected the long-term population dynamics and yields of hundreds of timber management regimes in the Amazon estuary, where small-scale, unmechanized logging is an important economic activity. These results were then compared to find optimal stand-level and species-specific sustainable timber management (STM) regimes using a set of timber yield and population growth indicators. Prospects for STM in Amazonian tidal floodplain forests are better than for many other tropical forests. However, generally high stock recovery rates between harvests are due to the comparatively high projected mean annualized yields from fast-growing species that effectively counterbalance the projected yield declines from other species. For Amazonian tidal floodplain forests, national management guidelines provide neither the highest yields nor the highest sustained population growth for species under management. Our research shows that management guidelines specific to a region’s ecological settings can be further refined to consider differences in species demographic responses to repeated harvests. 
In principle, such fine-tuned management guidelines could make management more attractive, thus bridging the currently prevalent gap between tropical timber management practice and regulation. PMID:26322896
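    The matrix-population-model machinery this study builds on can be sketched generically. The stage classes, vital rates, and harvest rule below are illustrative assumptions, not the paper's calibrated forest model: a stage vector is projected forward each year, and a fraction of the merchantable class is removed at every cutting cycle.

```python
import numpy as np

# Generic stage-structured projection with periodic harvest (hypothetical rates).
A = np.array([[0.70, 0.00, 2.50],    # seedling: survival + fecundity of adults
              [0.15, 0.80, 0.00],    # sapling:  recruitment + survival
              [0.00, 0.10, 0.95]])   # merchantable adult: recruitment + survival
lam = max(np.linalg.eigvals(A).real)           # asymptotic growth rate (dominant eigenvalue)

n = np.array([100.0, 50.0, 20.0])              # initial stage abundances
cutting_cycle, harvest_fraction, yields = 10, 0.5, []
for year in range(1, 101):
    n = A @ n                                  # one projection step
    if year % cutting_cycle == 0:              # harvest event
        cut = harvest_fraction * n[2]
        yields.append(cut)
        n[2] -= cut
```

    Sweeping `cutting_cycle` and `harvest_fraction` over a grid and comparing long-term yields and population growth is, in simplified form, the optimization the study performs over management regimes.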

  12. Modeling the complex impacts of timber harvests to find optimal management regimes for Amazon tidal floodplain forests

    USGS Publications Warehouse

    Fortini, Lucas B.; Cropper, Wendell P.; Zarin, Daniel J.

    2015-01-01

    At the Amazon estuary, the oldest logging frontier in the Amazon, no studies have comprehensively explored the potential long-term population and yield consequences of multiple timber harvests over time. Matrix population modeling is one way to simulate long-term impacts of tree harvests, but this approach has often ignored common impacts of tree harvests including incidental damage, changes in post-harvest demography, shifts in the distribution of merchantable trees, and shifts in stand composition. We designed a matrix-based forest management model that incorporates these harvest-related impacts so resulting simulations reflect forest stand dynamics under repeated timber harvests as well as the realities of local smallholder timber management systems. Using a wide range of values for management criteria (e.g., length of cutting cycle, minimum cut diameter), we projected the long-term population dynamics and yields of hundreds of timber management regimes in the Amazon estuary, where small-scale, unmechanized logging is an important economic activity. These results were then compared to find optimal stand-level and species-specific sustainable timber management (STM) regimes using a set of timber yield and population growth indicators. Prospects for STM in Amazonian tidal floodplain forests are better than for many other tropical forests. However, generally high stock recovery rates between harvests are due to the comparatively high projected mean annualized yields from fast-growing species that effectively counterbalance the projected yield declines from other species. For Amazonian tidal floodplain forests, national management guidelines provide neither the highest yields nor the highest sustained population growth for species under management. Our research shows that management guidelines specific to a region’s ecological settings can be further refined to consider differences in species demographic responses to repeated harvests. 
In principle, such fine-tuned management guidelines could make management more attractive, thus bridging the currently prevalent gap between tropical timber management practice and regulation.

  13. Information matrix estimation procedures for cognitive diagnostic models.

    PubMed

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
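    The two estimators compared in this abstract, the inverse observed information and the sandwich, can be illustrated outside the CDM setting. The sketch below uses a plain logistic regression fit by Newton's method (an illustrative stand-in, not a cognitive diagnosis model): the observed information is the negative Hessian, the cross-product matrix comes from per-observation scores, and the sandwich combines the two.

```python
import numpy as np

# Observed-information vs. sandwich covariance estimators for a logistic MLE
# (illustrative model, not a CDM).
rng = np.random.default_rng(0)
n, p = 2000, 3
X = rng.normal(size=(n, p))
beta_true = np.array([0.5, -1.0, 0.25])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

beta = np.zeros(p)
for _ in range(25):                        # Newton-Raphson iterations
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - mu)
    W = mu * (1.0 - mu)
    H = X.T @ (X * W[:, None])             # observed information matrix
    beta += np.linalg.solve(H, grad)

scores = X * (y - mu)[:, None]             # per-observation score vectors
B = scores.T @ scores                      # empirical cross-product matrix
Hinv = np.linalg.inv(H)                    # inverse observed information
sandwich = Hinv @ B @ Hinv                 # sandwich-type covariance estimator
```

    Under correct specification the two estimators agree asymptotically, matching the simulation finding; under misspecification only the sandwich remains consistent, which is the abstract's robustness point.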

  14. A Fully Non-Metallic Gas Turbine Engine Enabled by Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Grady, Joseph E.

    2015-01-01

    The Non-Metallic Gas Turbine Engine project, funded by the NASA Aeronautics Research Institute, represents the first comprehensive evaluation of emerging materials and manufacturing technologies that will enable fully nonmetallic gas turbine engines. This will be achieved by assessing the feasibility of using additive manufacturing technologies to fabricate polymer matrix composite and ceramic matrix composite turbine engine components. The benefits include: 50% weight reduction compared to metallic parts, reduced manufacturing costs, reduced part count, and rapid design iterations. Two high-payoff metallic components have been identified for replacement with PMCs and will be fabricated using fused deposition modeling (FDM) with high-temperature polymer filaments. The CMC effort uses a binder jet process to fabricate silicon carbide test coupons and demonstration articles. Microstructural analysis and mechanical testing will be conducted on the PMC and CMC materials. System studies will assess the benefits of a fully nonmetallic gas turbine engine in terms of fuel burn, emissions, reduction of part count, and cost. The research project includes a multidisciplinary, multiorganization NASA-industry team that includes experts in ceramic materials and CMCs, polymers and PMCs, structural engineering, additive manufacturing, engine design and analysis, and system analysis.

  15. Inverse solutions for electrical impedance tomography based on conjugate gradients methods

    NASA Astrophysics Data System (ADS)

    Wang, M.

    2002-01-01

    A multistep inverse solution for two-dimensional electric field distribution is developed to deal with the nonlinear inverse problem of electric field distribution in relation to its boundary condition and the problem of divergence due to errors introduced by the ill-conditioned sensitivity matrix and the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method where the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
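    The role of the conjugate gradients solver, and of capping its iteration count to control errors from an ill-conditioned sensitivity matrix, can be sketched in a few lines. The setup below is illustrative (a random stand-in for the sensitivity matrix, not the paper's EIT system):

```python
import numpy as np

# Minimal conjugate-gradient solver; limiting max_iter acts as regularization
# when the operator is ill-conditioned (illustrative setup, not the EIT system).
def conjugate_gradients(A, b, max_iter):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):              # the iteration cap is the regularizer
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
J = rng.normal(size=(80, 40))              # stand-in sensitivity matrix
A = J.T @ J + 1e-3 * np.eye(40)            # SPD normal-equations operator
b = rng.normal(size=40)
x = conjugate_gradients(A, b, max_iter=200)
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

    With noisy data, stopping this loop early filters out the components most amplified by small singular values, which is the error-control mechanism the abstract describes for its generalized CG inverse steps.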

  16. Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.

    PubMed

    Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui

    2018-03-01

    Changing the metric on the data may change the data distribution; hence a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by a multiple kernel representation. By this approach, we project the data into a high-dimensional space where the data can be well represented by linear ML. Then, we reformulate linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
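    The constraint that makes this a problem on the positive definite matrices can be illustrated simply. The sketch below is not the authors' multiple-kernel or intrinsic descent method; the gradient and data are synthetic assumptions. It takes one unconstrained descent step on the metric matrix M and then projects back to the PSD cone by eigenvalue clipping:

```python
import numpy as np

# Keeping a learned Mahalanobis metric positive semidefinite (illustrative
# gradient, not the paper's algorithm): descent step, then PSD projection.
rng = np.random.default_rng(0)
d = 5
M = np.eye(d)
grad = rng.normal(size=(d, d))
grad = (grad + grad.T) / 2                 # symmetric gradient direction
M = M - 0.5 * grad                         # unconstrained step may leave the cone

eigvals, eigvecs = np.linalg.eigh(M)
M_psd = (eigvecs * np.clip(eigvals, 0.0, None)) @ eigvecs.T  # PSD projection

def mahalanobis(x, y, M):
    diff = x - y
    return np.sqrt(max(diff @ M @ diff, 0.0))   # metric induced by PSD matrix M

x1, x2 = rng.normal(size=d), rng.normal(size=d)
dist = mahalanobis(x1, x2, M_psd)
```

    Working intrinsically on the positive definite matrix group, as the paper does, avoids this extrinsic project-back step, but the feasibility requirement being enforced is the same.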

  17. Leadership's Role in Support of Online Academic Programs: Implementing an Administrative Support Matrix

    PubMed Central

    Barefield, Amanda C.; Meyer, John D.

    2013-01-01

    The proliferation of online education programs creates a myriad of challenges for those charged with implementation and delivery of these programs. Although creating and sustaining quality education is a shared responsibility of faculty, staff, and academic leaders, this article focuses on the pivotal role of leadership in securing the necessary resources, developing the organizational structures, and influencing organizational culture. The vital foundation for a successful outcome when implementing online education programs is the role of leadership in providing adequate and appropriate support. Abundant literature extols the roles of leadership in project management; however, there is a dearth of models or systematic methods for leaders to follow regarding how to implement and sustain online programs. Research conducted by the authors culminated in the development of an Administrative Support Matrix, thus addressing the current gap in the literature. PMID:23346030

  18. ECOS E-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parisien, Lia

    2016-01-31

    This final scientific/technical report on the ECOS e-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database provides a disclaimer and acknowledgement, table of contents, executive summary, description of project activities, and briefing/technical presentation link.

  19. PNNL Technical Support to The Implementation of EMTA and EMTA-NLA Models in Autodesk® Moldflow® Packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Wang, Jin

    2012-12-01

    Under the Predictive Engineering effort, PNNL developed linear and nonlinear property prediction models for long-fiber thermoplastics (LFTs). These models were implemented in PNNL’s EMTA and EMTA-NLA codes. While EMTA is standalone software for the computation of composite thermoelastic properties, EMTA-NLA presents a series of nonlinear models implemented in ABAQUS® via user subroutines for structural analyses. In all these models, it is assumed that the fibers are linear elastic while the matrix material can exhibit a linear or typical nonlinear behavior depending on the loading prescribed to the composite. The key idea is to model the constitutive behavior of the matrix material and then to use an Eshelby-Mori-Tanaka approach (EMTA) combined with numerical techniques for fiber length and orientation distributions to determine the behavior of the as-formed composite. The basic property prediction models of EMTA and EMTA-NLA have been selected for implementation in the Autodesk® Moldflow® software packages. These models are the elastic stiffness model accounting for fiber length and orientation distributions, the fiber/matrix interface debonding model, and the elastic-plastic models. The PNNL elastic-plastic models for LFTs describe the composite nonlinear stress-strain response up to failure by an elastic-plastic formulation associated with either a micromechanical criterion to predict failure or a continuum damage mechanics formulation coupling damage to plasticity. All the models account for fiber length and orientation distributions as well as fiber/matrix debonding that can occur at any stage of loading. In an effort to transfer the technologies developed under the Predictive Engineering project to the American automotive and plastics industries, PNNL has obtained the approval of the DOE Office of Vehicle Technologies to provide Autodesk, Inc. 
with the technical support for the implementation of the basic property prediction models of EMTA and EMTA-NLA in the Autodesk® Moldflow® packages. This report summarizes the recent results from Autodesk Simulation Moldflow Insight (ASMI) analyses using the EMTA models and EMTA-NLA/ABAQUS® analyses for further assessment of the EMTA-NLA models to support their implementation in Autodesk Moldflow Structural Alliance (AMSA). PNNL’s technical support to Autodesk, Inc. included (i) providing the theoretical property prediction models as described in published journal articles and reports, (ii) providing explanations of these models and the computational procedure, (iii) providing the necessary LFT data for process simulations and property predictions, and (iv) performing ABAQUS/EMTA-NLA analyses to further assess and illustrate the models for selected LFT materials.

  20. RE-Powering Tracking Matrix

    EPA Pesticide Factsheets

    The RE-Powering Renewable Energy project list tracks completed projects where renewable energy systems have been installed on contaminated lands, landfills, and mine sites. This resource is for informational purposes only and may not be comprehensive.

  1. Method of joining metallic and composite components

    NASA Technical Reports Server (NTRS)

    Semmes, Edmund B. (Inventor)

    2010-01-01

    A method is provided for joining a metallic member to a structure made of a composite matrix material. One or more surfaces of a portion of the metallic member that is to be joined to the composite matrix structure is provided with a plurality of outwardly projecting studs. The surface including the studs is brought into engagement with a portion of an uncured composite matrix material so that fibers of the composite matrix material intertwine with the studs, and the metallic member and composite structure form an assembly. The assembly is then companion cured so as to join the metallic member to the composite matrix material structure.

  2. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of PAPA theoretically. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality.
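    The core alternating-projection idea (stripped of the EM preconditioner and the TV proximity operators of the actual PAPA method) can be sketched with two simple convex constraint sets chosen here purely for illustration: nonnegativity, which also arises in ECT, and a hypothetical linear constraint.

```python
import numpy as np

def proj_nonneg(x):
    # Projection onto the nonnegativity constraint (image intensities >= 0).
    return np.maximum(x, 0.0)

def proj_hyperplane(x, s=1.0):
    # Euclidean projection onto the hyperplane {x : sum(x) = s}.
    return x + (s - x.sum()) / x.size

# Alternating projections converge to a point in the intersection
# of the two convex sets (the classical POCS result).
x = np.array([-0.5, 0.8, 1.2])
for _ in range(200):
    x = proj_hyperplane(proj_nonneg(x))

print(x.round(6), x.sum())   # ≈ [0, 0.3, 0.7], sum = 1
```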

  3. A simple procedure for construction of the orthonormal basis vectors of irreducible representations of O(5) in the OT(3) ⊗ ON(2) basis

    NASA Astrophysics Data System (ADS)

    Pan, Feng; Ding, Xiaoxue; Launey, Kristina D.; Draayer, J. P.

    2018-06-01

    A simple and effective algebraic isospin projection procedure for constructing orthonormal basis vectors of irreducible representations of O(5) ⊃ OT(3) ⊗ ON(2) from those in the canonical O(5) ⊃ SUΛ(2) ⊗ SUI(2) basis is outlined. The expansion coefficients are components of null-space vectors of the projection matrix, which in general has four nonzero elements in each row. Explicit formulae for evaluating OT(3)-reduced matrix elements of O(5) generators are derived.

  4. A genetic fuzzy analytical hierarchy process based projection pursuit method for selecting schemes of water transportation projects

    NASA Astrophysics Data System (ADS)

    Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming

    2006-10-01

    The optimal selection of schemes of water transportation projects is a process of choosing a relatively optimal scheme from a number of schemes of water transportation programming and management projects, which is of importance in both theory and practice in water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the weights of the indexes objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme based on the fuzzy preference relation matrix. The research results show that FPRM-PP is intuitive and practical, the correction range of the fuzzy preference relation matrix A it produces is relatively small, and the result obtained is both stable and accurate; therefore FPRM-PP can be widely used in the optimal selection of different multi-factor decision-making schemes.

  5. Simple model for deriving sdg interacting boson model Hamiltonians: 150Nd example

    NASA Astrophysics Data System (ADS)

    Devi, Y. D.; Kota, V. K. B.

    1993-07-01

    A simple and yet useful model for deriving sdg interacting boson model (IBM) Hamiltonians is to assume that single-boson energies derive from identical particle (pp and nn) interactions and proton, neutron single-particle energies, and that the two-body matrix elements for bosons derive from pn interaction, with an IBM-2 to IBM-1 projection of the resulting p-n sdg IBM Hamiltonian. The applicability of this model in generating sdg IBM Hamiltonians is demonstrated, using a single-j-shell Otsuka-Arima-Iachello mapping of the quadrupole and hexadecupole operators in proton and neutron spaces separately and constructing a quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian in the analysis of the spectra, B(E2)'s, and E4 strength distribution in the example of 150Nd.

  6. Modeling Creep Effects in Advanced SiC/SiC Composites

    NASA Technical Reports Server (NTRS)

    Lang, Jerry; DiCarlo, James

    2006-01-01

    Because advanced SiC/SiC composites are projected to be used for aerospace components with large thermal gradients at high temperatures, efforts are on-going at NASA Glenn to develop approaches for modeling the anticipated creep behavior of these materials and its subsequent effects on such key composite properties as internal residual stress, proportional limit stress, ultimate tensile strength, and rupture life. Based primarily on in-plane creep data for 2D panels, this presentation describes initial modeling progress at applied composite stresses below matrix cracking for some high performance SiC/SiC composite systems recently developed at NASA. Studies are described to develop creep and rupture models using empirical, mechanical analog, and mechanistic approaches, and to implement them into finite element codes for improved component design and life modeling.

  7. Effects of uncertainty and variability on population declines and IUCN Red List classifications.

    PubMed

    Rueda-Cediel, Pamela; Anderson, Kurt E; Regan, Tracey J; Regan, Helen M

    2018-01-22

    The International Union for Conservation of Nature (IUCN) Red List Categories and Criteria is a quantitative framework for classifying species according to extinction risk. Population models may be used to estimate extinction risk or population declines. Uncertainty and variability arise in threat classifications through measurement and process error in empirical data and through uncertainty in the models used to estimate extinction risk and population declines. Furthermore, species traits are known to affect extinction risk. We investigated the effects of measurement and process error, model type, population growth rate, and age at first reproduction on the reliability of IUCN Red List classifications based on projected population declines. We used an age-structured population model to simulate true population trajectories with different growth rates, reproductive ages and levels of variation, and subjected them to measurement error. We evaluated the ability of scalar and matrix models parameterized with these simulated time series to accurately capture the IUCN Red List classification generated with the true population declines. Under all levels of measurement error tested and low process error, classifications were reasonably accurate; scalar and matrix models yielded roughly the same rate of misclassifications, but the distribution of errors differed; matrix models led to greater overestimation than underestimation of extinction risk; process error tended to contribute to misclassifications to a greater extent than measurement error; and more misclassifications occurred for fast, rather than slow, life histories. These results indicate that classifications of highly threatened taxa (i.e., taxa with low growth rates) under criterion A are more likely to be reliable than those for less threatened taxa when assessed with population models. 
Greater scrutiny needs to be placed on data used to parameterize population models for species with high growth rates, particularly when available evidence indicates a potential transition to higher risk categories. © 2018 Society for Conservation Biology.
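    The scalar-vs-matrix distinction above can be made concrete with a minimal stage-structured projection matrix. The sketch below builds a hypothetical three-stage Leslie matrix (invented numbers, not values from the study), takes its dominant eigenvalue as the asymptotic growth rate, and converts it into a projected ten-step decline of the kind compared against IUCN criterion A thresholds.

```python
import numpy as np

# Hypothetical 3-stage Leslie matrix: fecundities on the first row,
# stage-to-stage survival probabilities on the subdiagonal.
A = np.array([[0.0, 0.8, 1.2],
              [0.4, 0.0, 0.0],
              [0.0, 0.5, 0.0]])

# Dominant eigenvalue = asymptotic population growth rate (lambda).
lam = max(np.linalg.eigvals(A).real)

# For lambda < 1, the projected proportional decline over 10 time steps.
decline_10 = 1.0 - lam ** 10
print(round(lam, 3), round(decline_10, 3))   # ≈ 0.79, ≈ 0.905 (> 90% decline)
```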

  8. Table-sized matrix model in fractional learning

    NASA Astrophysics Data System (ADS)

    Soebagyo, J.; Wahyudin; Mulyaning, E. C.

    2018-05-01

    This article provides an explanation of a fractional learning model, the Table-Sized Matrix model, in which fractional representation and its operations are symbolized by a matrix. Like the area model, the Table-Sized Matrix model is employed to develop problem-solving capabilities. The Table-Sized Matrix model referred to in this article is used to develop elementary school students' understanding of the fraction concept, which can then be generalized into procedural fluency (algorithms) for solving fraction problems and their operations.

  9. River flow prediction using hybrid models of support vector regression with the wavelet transform, singular spectrum analysis and chaotic approach

    NASA Astrophysics Data System (ADS)

    Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal

    2018-06-01

    Prediction of the amount of water that will enter reservoirs in the following month is of vital importance, especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey has been used for training the method, which then has been applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolutes the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. These three methods for producing the input matrix for the SVR proved successful, while the SVR-WT combination resulted in the highest coefficient of determination and the lowest mean absolute error.
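    Of the three input-matrix constructions, the chaotic approach is the easiest to sketch: a time-delay embedding turns a scalar series into the phase-space matrix that a kernel regressor such as SVR would then be trained on. The embedding dimension and delay below are illustrative choices, not values from the study.

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Build the regression input matrix by time-delay embedding: each row
    is a phase-space vector [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau
    return np.array([series[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

x = np.sin(np.linspace(0.0, 20.0, 200))   # stand-in for a monthly flow series
X = delay_embed(x, dim=3, tau=5)
print(X.shape)                            # (190, 3)
```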

  10. Planning and Analysis of Fractured Rock Injection Tests in the Cerro Brillador Underground Laboratory, Northern Chile

    NASA Astrophysics Data System (ADS)

    Fairley, J. P., Jr.; Oyarzún L, R.; Villegas, G.

    2015-12-01

    Early theories of fluid migration in unsaturated fractured rock hypothesized that matrix suction would dominate flow up to the point of matrix saturation. However, experiments in underground laboratories such as the ESF (Yucca Mountain, NV) have demonstrated that liquid water can migrate significant distances through fractures in an unsaturated porous medium, suggesting limited interaction between fractures and unsaturated matrix blocks and potentially rapid transmission of recharge to the saturated zone. Determining the conditions under which this rapid recharge may take place is an important factor in understanding deep percolation processes in arid areas with thick unsaturated zones. As part of an on-going, Fondecyt-funded project (award 11150587) to study mountain block hydrological processes in arid regions, we are planning a series of in-situ fracture flow injection tests in the Cerro Brillador/Mina Escuela, an underground laboratory and teaching facility belonging to the Universidad la Serena, Chile. Planning for the tests is based on an analytical model and curve-matching method, originally developed to evaluate data from injection tests at Yucca Mountain (Fairley, J.P., 2010, WRR 46:W08542), that uses a known rate of liquid injection to a fracture (for example, from a packed-off section of borehole) and the observed rate of seepage discharging from the fracture to estimate effective fracture aperture, matrix sorptivity, fracture/matrix flow partitioning, and the wetted fracture/matrix interaction area between the injection and recovery points. We briefly review the analytical approach and its application to test planning and analysis, and describe the proposed tests and their goals.

  11. Thermodynamical Limit for Correlated Gaussian Random Energy Models

    NASA Astrophysics Data System (ADS)

    Contucci, P.; Esposti, M. Degli; Giardinà, C.; Graffi, S.

    Let {E_Σ(N)}_{Σ∈Σ_N} be a family of |Σ_N| = 2^N centered unit Gaussian random variables defined by the covariance matrix C_N of elements c_N(Σ,τ) := Av(E_Σ(N) E_τ(N)), and consider the corresponding random Hamiltonian. Then the quenched thermodynamical limit exists if, for every decomposition N = N_1 + N_2 and all pairs (Σ,τ) ∈ Σ_N × Σ_N, c_N(Σ,τ) ≤ (N_1/N) c_{N_1}(π_1(Σ), π_1(τ)) + (N_2/N) c_{N_2}(π_2(Σ), π_2(τ)), where π_k(Σ), k = 1, 2, are the projections of Σ ∈ Σ_N into Σ_{N_k}. The condition is explicitly verified for the Sherrington-Kirkpatrick, the even p-spin, the Derrida REM and the Derrida-Gardner GREM models.

  12. Investigation and Implementation of Matrix Permanent Algorithms for Identity Resolution

    DTIC Science & Technology

    2014-12-01

    calculation of the permanent of a matrix whose dimension is a function of target count [21]. However, the optimal approach for computing the permanent is … presently unclear. The primary objective of this project was to determine the optimal computing strategy(-ies) for the matrix permanent in tactical and … solving various combinatorial problems (see [16] for details and applications to a wide variety of problems) and thus can be applied to compute a
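    For context on what such an implementation involves, the classical exact method is Ryser's inclusion-exclusion formula, which reduces the naive O(n!) sum over permutations to O(2^n · n²); the record above leaves open whether this or an approximate scheme is optimal in practice. A minimal sketch:

```python
from itertools import combinations

def permanent_ryser(A):
    """Exact matrix permanent via Ryser's inclusion-exclusion formula:
    per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^|S| * prod_i (sum_{j in S} a_ij)."""
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):   # column subsets S, |S| = r
            prod = 1.0
            for row in A:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent_ryser([[1, 2], [3, 4]]))   # 10.0 (= 1*4 + 2*3)
```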

  13. Practical implementation of tetrahedral mesh reconstruction in emission tomography

    PubMed Central

    Boutchko, R.; Sitek, A.; Gullberg, G. T.

    2014-01-01

    This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. 
Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise. PMID:23588373
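    The nodal-value interpolation described above can be sketched directly: inside a tetrahedron, the linearly interpolated intensity is a barycentric-coordinate blend of the four node values. The geometry and intensities below are hypothetical; this is a sketch of the interpolation step only, not the papers' system-matrix formula.

```python
import numpy as np

def tet_interpolate(verts, values, p):
    """Linear interpolation inside a tetrahedron: solve a 4x4 system for the
    barycentric weights of point p, then blend the four nodal intensities."""
    V = np.asarray(verts, dtype=float)        # 4 x 3 vertex coordinates
    M = np.vstack([V.T, np.ones(4)])          # rows: x, y, z, and sum-to-1
    w = np.linalg.solve(M, np.append(np.asarray(p, dtype=float), 1.0))
    return float(w @ np.asarray(values, dtype=float))

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]   # unit tetrahedron
values = [0.0, 1.0, 2.0, 3.0]                          # nodal intensities
print(tet_interpolate(verts, values, (0.25, 0.25, 0.25)))   # 1.5
```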

  14. Practical implementation of tetrahedral mesh reconstruction in emission tomography

    NASA Astrophysics Data System (ADS)

    Boutchko, R.; Sitek, A.; Gullberg, G. T.

    2013-05-01

    This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. 
Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise.

  15. Quantitative comparison between crowd models for evacuation planning and evaluation

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vaisagh; Lee, Chong Eu; Lees, Michael Harold; Cheong, Siew Ann; Sloot, Peter M. A.

    2014-02-01

    Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we describe a procedure to quantitatively compare different crowd models or between models and real-world data. We simulated three models: (1) the lattice gas model, (2) the social force model, and (3) the RVO2 model, and obtained the distributions of six observables: (1) evacuation time, (2) zoned evacuation time, (3) passage density, (4) total distance traveled, (5) inconvenience, and (6) flow rate. We then used the DISTATIS procedure to compute the compromise matrix of statistical distances between the three models. Projecting the three models onto the first two principal components of the compromise matrix, we find the lattice gas and RVO2 models are similar in terms of the evacuation time, passage density, and flow rates, whereas the social force and RVO2 models are similar in terms of the total distance traveled. Most importantly, we find the zoned evacuation times of the three models to be very different from each other. Thus we propose to use this variable, if it can be measured, as the key test between different models, and also between models and the real world. Finally, we compared the model flow rates against the flow rate of an emergency evacuation during the May 2008 Sichuan earthquake, and found that the social force model agrees best with this real data.

  16. Modeling CO2 Storage in Fractured Reservoirs: Fracture-Matrix Interactions of Free-Phase and Dissolved CO2

    NASA Astrophysics Data System (ADS)

    Oldenburg, C. M.; Zhou, Q.; Birkholzer, J. T.

    2017-12-01

    The injection of supercritical CO2 (scCO2) in fractured reservoirs has been conducted at several storage sites. However, no site-specific dual-continuum modeling for fractured reservoirs has been reported, and modeling studies have generally underestimated the fracture-matrix interactions. We developed a conceptual model for enhanced CO2 storage to take into account global scCO2 migration in the fracture continuum, local storage of scCO2 and dissolved CO2 (dsCO2) in the matrix continuum, and driving forces for scCO2 invasion and dsCO2 diffusion from fractures. High-resolution discrete fracture-matrix models were developed for a column of idealized matrix blocks bounded by vertical and horizontal fractures and for a km-scale fractured reservoir. The column-scale simulation results show that equilibrium storage efficiency strongly depends on matrix entry capillary pressure and matrix-matrix connectivity, while the time scale to reach equilibrium is sensitive to fracture spacing and matrix flow properties. The reservoir-scale modeling results show that the preferential migration of scCO2 through fractures is coupled with bulk storage in the rock matrix that in turn retards the fracture scCO2 plume. We also developed unified-form diffusive flux equations to account for dsCO2 storage in brine-filled matrix blocks and found that solubility trapping is significant in fractured reservoirs with low-permeability matrix.

  17. Nonlinear optimization with linear constraints using a projection method

    NASA Technical Reports Server (NTRS)

    Fox, T.

    1982-01-01

    Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear contraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
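    The abstract gives no formulas, but the construction it describes can be sketched: Gram-Schmidt-orthonormalize the constraint rows, subtract those directions from the identity, and the resulting matrix P maps any gradient into the feasible subspace, so that A·(Pg) = 0. A minimal NumPy sketch under that reading (the constraint and gradient values are invented):

```python
import numpy as np

def null_space_projector(A):
    """Projection matrix P onto the null space of the constraint rows of A,
    built with classical Gram-Schmidt on the rows (cf. Rosen's method)."""
    A = np.asarray(A, dtype=float)
    Q = []
    for a in A:
        a = a.copy()
        for q in Q:
            a -= (q @ a) * q                 # remove earlier directions
        norm = np.linalg.norm(a)
        if norm > 1e-12:                     # skip dependent constraint rows
            Q.append(a / norm)
    P = np.eye(A.shape[1])
    for q in Q:
        P -= np.outer(q, q)
    return P

A = np.array([[1.0, 1.0, 0.0]])              # one linear constraint row
P = null_space_projector(A)
g = np.array([3.0, 1.0, 2.0])                # an objective gradient
print(P @ g, A @ (P @ g))                    # projected gradient; A·Pg ≈ 0
```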

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janke, C.J.

    Electron beam (EB) curing is a technology that promises, in certain applications, to deliver lower cost and higher performance polymer matrix composite (PMC) structures compared to conventional thermal curing processes. PMCs enhance performance by making products lighter, stronger, more durable, and less energy demanding. They are essential in weight- and performance-dominated applications. Affordable PMCs can enhance US economic prosperity and national security. US industry expects rapid implementation of electron beam cured composites in aircraft and aerospace applications as satisfactory properties are demonstrated, and implementation in lower performance applications will likely follow thereafter. In fact, at this time and partly because of discoveries made in this project, field demonstrations are underway that may result in the first fielded applications of electron beam cured composites. Serious obstacles preventing the widespread use of electron beam cured PMCs in many applications are their relatively poor interfacial properties and resin toughness. The composite shear strength and resin toughness of electron beam cured carbon fiber reinforced epoxy composites were about 25% and 50% lower, respectively, than those of thermally cured composites of similar formulations. The essential purpose of this project was to improve the mechanical properties of electron beam cured, carbon fiber reinforced epoxy composites, with a specific focus on composite shear properties for high performance aerospace applications. Many partners, sponsors, and subcontractors participated in this project. There were four government sponsors from three federal agencies, with the US Department of Energy (DOE) being the principal sponsor. The project was executed by Oak Ridge National Laboratory (ORNL), NASA and Department of Defense (DOD) participants, eleven private CRADA partners, and two subcontractors. A list of key project contacts is provided in Appendix A. 
In order to properly manage the large project team and properly address the various technical tasks, the CRADA team was organized into integrated project teams (IPT's) with each team focused on specific research areas. Early in the project, the end user partners developed ''exit criteria'', recorded in Appendix B, against which the project's success was to be judged. The project team made several important discoveries. A number of fiber coatings or treatments were developed that improved fiber-matrix adhesion by 40% or more, according to microdebond testing. The effects of dose-time and temperature-time profiles during the cure were investigated, and it was determined that fiber-matrix adhesion is relatively insensitive to the irradiation procedure, but can be elevated appreciably by thermal postcuring. Electron beam curable resin properties were improved substantially, with 80% increase in electron beam 798 resin toughness, and about 25% and 50% improvement, respectively, in ultimate tensile strength and ultimate tensile strain vs. earlier generation electron beam curable resins. Additionally, a new resin electron beam 800E was developed with generally good properties, and a very notable 120% improvement in transverse composite tensile strength vs. earlier generation electron beam cured carbon fiber reinforced epoxies. Chemical kinetics studies showed that reaction pathways can be affected by the irradiation parameters, although no consequential effects on material properties have been noted to date. Preliminary thermal kinetics models were developed to predict degree of cure vs. irradiation and thermal parameters. These models are continually being refined and validated. Despite the aforementioned impressive accomplishments, the project team did not fully realize the project objectives. 
The best methods for improving adhesion were combined with the improved electron beam 3K resin to make prepreg and uni-directional test laminates from which composite properties could be determined. Nevertheless, only minor improvements in the composite shear strength, and moderate improvements in the transverse tensile strength, were achieved. The project team was not satisfied with the laminate quality achieved, and low quality (specifically, high void fraction) laminates will compromise the composite properties. There were several problems with the prepregging and fabrication, many of them related to the use of new fiber treatments.

  19. Theory of quark mixing matrix and invariant functions of mass matrices

    NASA Astrophysics Data System (ADS)

Jarlskog, C.

    1987-10-01

    The origin of the quark mixing matrix; super elementary theory of flavor projection operators; equivalences and invariances; the commutator formalism and CP violation; CP conditions for any number of families; the angle between the quark mass matrices; and application to Fritzsch and Stech mass matrices are discussed.

  20. First-Principle Construction of U(1) Symmetric Matrix Product States

    NASA Astrophysics Data System (ADS)

    Rakov, Mykhailo V.

    2018-07-01

    The algorithm to calculate the sets of symmetry sectors for virtual indices of U(1) symmetric matrix product states (MPS) is described. The principal differences between open (OBC) and periodic (PBC) boundary conditions are stressed, and the extension of PBC MPS algorithm to projected entangled pair states is outlined.

  1. Multiple-Matrix Sampling: A Technique for Maximizing the Effectiveness of Lengthy Survey Instruments.

    ERIC Educational Resources Information Center

    Shemick, John M.

    1983-01-01

    In a project to identify and verify professional competencies for beginning industrial education teachers, researchers found a 173-item questionnaire unwieldy. Using multiple-matrix sampling, they distributed subsets of items to respondents, resulting in adequate returns as well as duplication, postage, and time savings. (SK)

  2. Calculating Path-Dependent Travel Time Prediction Variance and Covariance from a Global Tomographic P-Velocity Model

    NASA Astrophysics Data System (ADS)

    Ballard, S.; Hipp, J. R.; Encarnacao, A.; Young, C. J.; Begnaud, M. L.; Phillips, W. S.

    2012-12-01

    Seismic event locations can be made more accurate and precise by computing predictions of seismic travel time through high fidelity 3D models of the wave speed in the Earth's interior. Given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from SALSA3D, our global, seamless 3D tomographic P-velocity model. Typical global 3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition, update and scaling operations. We first find the Cholesky decomposition of GTG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel time prediction uncertainty for a single path.
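The covariance pipeline in this abstract can be sketched in-core on a toy problem (the full-scale model requires the out-of-core blocked solver the authors describe). All sizes, values, and the diagonal data covariance below are illustrative, not SALSA3D's:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_nodes = 50, 8                  # toy sizes; the real model has ~5e5 nodes
G = rng.random((n_rays, n_nodes))        # tomography matrix (ray-segment lengths)
G_reg = np.vstack([G, 0.1 * np.eye(n_nodes)])   # append Tikhonov regularization rows

# Cholesky decomposition of G^T G, then inversion (done in-core here)
GTG = G_reg.T @ G_reg
L = np.linalg.cholesky(GTG)
Linv = np.linalg.inv(L)
GTG_inv = Linv.T @ Linv                  # (L L^T)^-1 = L^-T L^-1

# model covariance from (GTG)^-1 and an assumed (diagonal) data covariance
C_d = 0.05**2 * np.eye(n_rays + n_nodes)
C_m = GTG_inv @ G_reg.T @ C_d @ G_reg @ GTG_inv

# travel-time covariance of two ray paths a and b (their node-sampling vectors):
# sum the model covariance along both paths; equal paths give the variance
a, b = rng.random(n_nodes), rng.random(n_nodes)
cov_ab = a @ C_m @ b
sigma_a = float(np.sqrt(a @ C_m @ a))    # single-path travel time uncertainty
```

The same two-path formula with the paths set equal reproduces the single-path uncertainty, as the abstract notes.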

  3. Ability of matrix models to explain the past and predict the future of plant populations.

    USGS Publications Warehouse

    McEachern, Kathryn; Crone, Elizabeth E.; Ellis, Martha M.; Morris, William F.; Stanley, Amanda; Bell, Timothy; Bierzychudek, Paulette; Ehrlen, Johan; Kaye, Thomas N.; Knight, Tiffany M.; Lesica, Peter; Oostermeijer, Gerard; Quintana-Ascencio, Pedro F.; Ticktin, Tamara; Valverde, Teresa; Williams, Jennifer I.; Doak, Daniel F.; Ganesan, Rengaian; Thorpe, Andrea S.; Menges, Eric S.

    2013-01-01

    Uncertainty associated with ecological forecasts has long been recognized, but forecast accuracy is rarely quantified. We evaluated how well data on 82 populations of 20 species of plants spanning 3 continents explained and predicted plant population dynamics. We parameterized stage-based matrix models with demographic data from individually marked plants and determined how well these models forecast population sizes observed at least 5 years into the future. Simple demographic models forecasted population dynamics poorly; only 40% of observed population sizes fell within our forecasts' 95% confidence limits. However, these models explained population dynamics during the years in which data were collected; observed changes in population size during the data-collection period were strongly positively correlated with population growth rate. Thus, these models are at least a sound way to quantify population status. Poor forecasts were not associated with the number of individual plants or years of data. We tested whether vital rates were density dependent and found both positive and negative density dependence. However, density dependence was not associated with forecast error. Forecast error was significantly associated with environmental differences between the data collection and forecast periods. To forecast population fates, more detailed models, such as those that project how environments are likely to change and how these changes will affect population dynamics, may be needed. Such detailed models are not always feasible. Thus, it may be wiser to make risk-averse decisions than to expect precise forecasts from models.
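A minimal sketch of the stage-based projection such studies use, with a hypothetical three-stage plant matrix (all rates invented for illustration, not taken from the paper):

```python
import numpy as np

# hypothetical 3-stage (seedling, juvenile, adult) projection matrix; entries are
# per-capita annual transitions, and the top-right entry is adult fecundity
A = np.array([[0.0, 0.0, 2.0],
              [0.3, 0.4, 0.0],
              [0.0, 0.3, 0.9]])

n = np.array([50.0, 20.0, 10.0])   # stage counts at the end of data collection
for _ in range(5):                 # forecast 5 years ahead, as in the study
    n = A @ n                      # one projection step: n(t+1) = A n(t)

# the asymptotic population growth rate is the dominant eigenvalue of A
lam = float(max(np.linalg.eigvals(A).real))
```

Comparing such forecasts against the stage counts actually observed five years later is the kind of accuracy check the study performs.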

  4. Ability of matrix models to explain the past and predict the future of plant populations.

    PubMed

    Crone, Elizabeth E; Ellis, Martha M; Morris, William F; Stanley, Amanda; Bell, Timothy; Bierzychudek, Paulette; Ehrlén, Johan; Kaye, Thomas N; Knight, Tiffany M; Lesica, Peter; Oostermeijer, Gerard; Quintana-Ascencio, Pedro F; Ticktin, Tamara; Valverde, Teresa; Williams, Jennifer L; Doak, Daniel F; Ganesan, Rengaian; McEachern, Kathryn; Thorpe, Andrea S; Menges, Eric S

    2013-10-01

    Uncertainty associated with ecological forecasts has long been recognized, but forecast accuracy is rarely quantified. We evaluated how well data on 82 populations of 20 species of plants spanning 3 continents explained and predicted plant population dynamics. We parameterized stage-based matrix models with demographic data from individually marked plants and determined how well these models forecast population sizes observed at least 5 years into the future. Simple demographic models forecasted population dynamics poorly; only 40% of observed population sizes fell within our forecasts' 95% confidence limits. However, these models explained population dynamics during the years in which data were collected; observed changes in population size during the data-collection period were strongly positively correlated with population growth rate. Thus, these models are at least a sound way to quantify population status. Poor forecasts were not associated with the number of individual plants or years of data. We tested whether vital rates were density dependent and found both positive and negative density dependence. However, density dependence was not associated with forecast error. Forecast error was significantly associated with environmental differences between the data collection and forecast periods. To forecast population fates, more detailed models, such as those that project how environments are likely to change and how these changes will affect population dynamics, may be needed. Such detailed models are not always feasible. Thus, it may be wiser to make risk-averse decisions than to expect precise forecasts from models. © 2013 Society for Conservation Biology.

  5. Synergistic Effects of Temperature and Oxidation on Matrix Cracking in Fiber-Reinforced Ceramic-Matrix Composites

    NASA Astrophysics Data System (ADS)

    Longbiao, Li

    2017-06-01

    In this paper, the synergistic effects of temperature and oxidation on matrix cracking in fiber-reinforced ceramic-matrix composites (CMCs) have been investigated using an energy balance approach. The shear-lag model, combined with damage models, i.e., the interface oxidation model, interface debonding model, fiber strength degradation model and fiber failure model, has been adopted to analyze the microstress field in the composite. The relationships between matrix cracking stress, interface debonding and slipping, fiber fracture, oxidation temperature and time have been established. The effects of fiber volume fraction, interface properties, fiber strength and oxidation temperature on the evolution of matrix cracking stress versus oxidation time have been analyzed. The matrix cracking stresses of a C/SiC composite with strong and weak interface bonding after unstressed oxidation at an elevated temperature of 700 °C in air have been predicted for different oxidation times.

  6. An Efficient Distributed Compressed Sensing Algorithm for Decentralized Sensor Network.

    PubMed

    Liu, Jing; Huang, Kaiyu; Zhang, Guoxian

    2017-04-20

    We consider the joint sparsity Model 1 (JSM-1) in a decentralized scenario, where a number of sensors are connected through a network and there is no fusion center. A novel algorithm, named distributed compact sensing matrix pursuit (DCSMP), is proposed to exploit the computational and communication capabilities of the sensor nodes. In contrast to the conventional distributed compressed sensing algorithms adopting a random sensing matrix, the proposed algorithm focuses on the deterministic sensing matrices built directly on the real acquisition systems. The proposed DCSMP algorithm can be divided into two independent parts, the common and innovation support set estimation processes. The goal of the common support set estimation process is to obtain an estimated common support set by fusing the candidate support set information from an individual node and its neighboring nodes. In the following innovation support set estimation process, the measurement vector is projected into a subspace that is perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove the impact of the estimated common support set. We can then search the innovation support set using an orthogonal matching pursuit (OMP) algorithm based on the projected measurement vector and projected sensing matrix. In the proposed DCSMP algorithm, the process of estimating the common component/support set is decoupled from that of estimating the innovation component/support set. Thus, an inaccurately estimated common support set will have no impact on estimating the innovation support set. It is proven that, provided the estimated common support set contains the true common support set, the proposed algorithm can find the true innovation support set correctly.
Moreover, since the innovation support set estimation process is independent of the common support set estimation process, there is no requirement for the cardinality of both sets; thus, the proposed DCSMP algorithm is capable of tackling the unknown sparsity problem successfully.
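The projection-then-OMP step at the heart of this scheme can be sketched as follows. The sensing matrix, signal supports, and sizes are all invented for illustration (DCSMP uses deterministic sensing matrices, and here the common support is simply assumed to be estimated correctly):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 60
Phi = rng.standard_normal((m, n))     # toy sensing matrix
common, innov = [3, 17], [42]         # invented common and innovation supports
x = np.zeros(n)
x[common] = [1.5, -2.0]
x[innov] = 1.0
y = Phi @ x                           # noiseless measurement at one node

# project y and Phi onto the orthogonal complement of the span of the columns
# indexed by the estimated common support (assumed correct here)
B = Phi[:, common]
P = np.eye(m) - B @ np.linalg.pinv(B)
y_p, Phi_p = P @ y, P @ Phi

# OMP on the projected system searches only for the innovation support
support, r = [], y_p.copy()
for _ in range(len(innov)):
    k = int(np.argmax(np.abs(Phi_p.T @ r)))   # column most correlated with residual
    support.append(k)
    S = Phi_p[:, support]
    coef, *_ = np.linalg.lstsq(S, y_p, rcond=None)
    r = y_p - S @ coef
```

Because P annihilates the common-support columns, the projected measurement depends only on the innovation component, which is why misestimation of the common component cannot contaminate this step.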

  7. Environmental Impact Assessment of the Industrial Estate Development Plan with the Geographical Information System and Matrix Methods

    PubMed Central

    Ghasemian, Mohammad; Poursafa, Parinaz; Amin, Mohammad Mehdi; Ziarati, Mohammad; Ghoddousi, Hamid; Momeni, Seyyed Alireza; Rezaei, Amir Hossein

    2012-01-01

    Background. The purpose of this study is the environmental impact assessment of industrial estate development planning. Methods. This cross-sectional study was conducted in 2010 in Isfahan province, Iran. GIS and matrix methods were applied. Data analysis was done to identify the current situation of the region, zone vulnerable areas, and scope the region. Quantitative evaluation was done using the matrix of Wooten and Rau. Results. The net score for the impact of industrial units' operation on the air quality of the project area was (−3). Given the transport of industrial estate pollutants, residential places located within a radius of 2500 meters of the city were expected to be affected more. The net score for the impact of construction of industrial units on plant species of the project area was (−2). Environmentally protected areas were not affected by the air and soil pollutants because of their distance from the industrial estate. Conclusion. The positive effects of project activities outweigh the drawbacks, and the sum of scores allocated to the project activities on environmental factors was (+37). Overall, the project does not have detrimental effects on the environment or the residential neighborhood. EIA should be considered as an anticipatory, participatory environmental management tool before determining a plan application. PMID:22272210

  8. A Stochastic Multi-Attribute Assessment of Energy Options for Fairbanks, Alaska

    NASA Astrophysics Data System (ADS)

    Read, L.; Madani, K.; Mokhtari, S.; Hanks, C. L.; Sheets, B.

    2012-12-01

    Many competing projects have been proposed to address Interior Alaska's high cost of energy—both for electricity production and for heating. Public and private stakeholders are considering the costs associated with these competing projects which vary in fuel source, subsidy requirements, proximity, and other factors. As a result, the current projects under consideration involve a complex cost structure of potential subsidies and reliance on present and future market prices, introducing a significant amount of uncertainty associated with each selection. Multi-criteria multi-decision making (MCMDM) problems of this nature can benefit from game theory and systems engineering methods, which account for behavior and preferences of stakeholders in the analysis to produce feasible and relevant solutions. This work uses a stochastic MCMDM framework to evaluate the trade-offs of each proposed project based on a complete cost analysis, environmental impact, and long-term sustainability. Uncertainty in the model is quantified via a Monte Carlo analysis, which helps characterize the sensitivity and risk associated with each project. Based on performance measures and criteria outlined by the stakeholders, a decision matrix will inform policy on selecting a project that is both efficient and preferred by the constituents.
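A hedged sketch of a stochastic decision matrix of the kind described: criteria weights, project scores, and the cost-uncertainty distributions below are all invented, and Monte Carlo sampling stands in for the uncertainty analysis:

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical stakeholder weights: cost, environmental impact, sustainability
weights = np.array([0.5, 0.3, 0.2])

# three hypothetical projects; the cost criterion is uncertain, so sample it
n_trials = 10_000
cost = rng.normal(loc=[0.6, 0.4, 0.7], scale=[0.15, 0.05, 0.20],
                  size=(n_trials, 3))               # higher score = cheaper
env = np.array([0.5, 0.8, 0.3])                     # fixed criterion scores
sust = np.array([0.7, 0.6, 0.4])

# weighted decision-matrix score per Monte Carlo trial; the spread of each
# column characterizes the risk attached to that project
scores = weights[0] * cost + weights[1] * env + weights[2] * sust
mean, std = scores.mean(axis=0), scores.std(axis=0)
win_rate = np.bincount(scores.argmax(axis=1), minlength=3) / n_trials
```

The per-project mean, spread, and win rate are the kind of summaries a decision matrix of this sort would present to stakeholders.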

  9. Next Generation Electromagnetic Pump Analysis Tools (PLM DOC-0005-2188). Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stregy, Seth; Dasilva, Ana; Yilmaz, Serkan

    2015-10-29

    This report provides a broad historical review of EM Pump development and details of MATRIX development under this project. This report summarizes the efforts made to modernize the legacy performance models used in previous EM Pump designs and the improvements made to the analysis tools. This report provides information on Tasks 1, 3, and 4 of the entire project. The research for Task 4 builds upon Task 1: Update EM Pump Databank and Task 3: Modernize the Existing EM Pump Analysis Model, which are summarized within this report. Where research for Task 2: Insulation Materials Development and Evaluation identified parameters applicable to the analysis model with Task 4, the analysis code was updated, and analyses were made for additional materials. The important design variables for the manufacture and operation of an EM Pump that the model improvement can evaluate are: space constraints; voltage capability of insulation system; maximum flux density through iron; flow rate and outlet pressure; efficiency and manufacturability. The development of the next-generation EM Pump analysis tools during this two-year program provides information in three broad areas: status of analysis model development; improvements made to older simulations; and comparison to experimental data.

  10. Two-dimensional PCA-based human gait identification

    NASA Astrophysics Data System (ADS)

    Chen, Jinyan; Wu, Rongteng

    2012-11-01

    Automatically recognizing people through visual surveillance is important for public security. Gait-based identification aims to recognize a person automatically from walking video using computer vision and image processing approaches. As a potential biometric, human gait identification has attracted increasing research attention. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper a human gait identification method based on two-dimensional principal component analysis (2DPCA) and temporal-space analysis is proposed. Using background estimation and image subtraction, a binary image sequence is obtained from the surveillance video. Differencing adjacent images in the gait sequence yields a sequence of binary difference images, each of which indicates the body's moving mode during walking. Temporal-space features are extracted from this sequence as follows: projecting each difference image onto the Y axis (or X axis) yields a vector, and stacking these vectors across the sequence yields a matrix that characterizes one walk. 2DPCA is then used to transform these matrices into vectors while preserving maximum separability. Finally, the similarity of two human gaits is calculated as the Euclidean distance between the two vectors. The performance of the method is illustrated using the CASIA Gait Database.
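The feature-extraction pipeline described in this abstract can be sketched as follows; the frame sizes and random "silhouettes" are toy stand-ins for real background-subtracted frames:

```python
import numpy as np

rng = np.random.default_rng(2)
# toy binary silhouette sequence: 10 frames of 32x24 (real ones come from
# background estimation and image subtraction)
frames = (rng.random((10, 32, 24)) > 0.7).astype(float)

# adjacent-frame differences capture the body's moving mode during walking
diffs = np.abs(np.diff(frames, axis=0))     # 9 difference images

# project each difference image onto the Y axis (sum over columns); stacking
# the vectors gives the temporal-space matrix describing one walk
M = diffs.sum(axis=2)                        # shape (9, 32)

# 2DPCA: eigenvectors of the column covariance of M give the projection axes
Mc = M - M.mean(axis=0)
G = Mc.T @ Mc / len(Mc)
evals, V = np.linalg.eigh(G)                 # eigenvalues in ascending order
X = V[:, -2:]                                # top-2 principal directions
feat = (M @ X).ravel()                       # gait feature vector

# two gaits are compared by the Euclidean distance of their feature vectors
def gait_distance(f1, f2):
    return float(np.linalg.norm(f1 - f2))
```

At recognition time, a probe feature vector would be matched against gallery vectors by nearest `gait_distance`.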

  11. The stage-classified matrix models project a significant increase in biomass carbon stocks in China’s forests between 2005 and 2050

    PubMed Central

    Hu, Huifeng; Wang, Shaopeng; Guo, Zhaodi; Xu, Bing; Fang, Jingyun

    2015-01-01

    China’s forests are characterized by young age, low carbon (C) density and a large plantation area, implying a high potential for increasing C sinks in the future. Using data of provincial forest area and biomass C density from China’s forest inventories between 1994 and 2008 and the planned forest coverage of the country by 2050, we developed a stage-classified matrix model to predict biomass C stocks of China’s forests from 2005 to 2050. The results showed that total forest biomass C stock would increase from 6.43 Pg C (1 Pg = 10¹⁵ g) in 2005 to 9.97 Pg C (95% confidence interval: 8.98 ~ 11.07 Pg C) in 2050, with an overall net C gain of 78.8 Tg C yr⁻¹ (56.7 ~ 103.3 Tg C yr⁻¹; 1 Tg = 10¹² g). Our findings suggest that China’s forests will be a large and persistent biomass C sink through 2050. PMID:26110831

  12. The stage-classified matrix models project a significant increase in biomass carbon stocks in China's forests between 2005 and 2050.

    PubMed

    Hu, Huifeng; Wang, Shaopeng; Guo, Zhaodi; Xu, Bing; Fang, Jingyun

    2015-06-25

    China's forests are characterized by young age, low carbon (C) density and a large plantation area, implying a high potential for increasing C sinks in the future. Using data of provincial forest area and biomass C density from China's forest inventories between 1994 and 2008 and the planned forest coverage of the country by 2050, we developed a stage-classified matrix model to predict biomass C stocks of China's forests from 2005 to 2050. The results showed that total forest biomass C stock would increase from 6.43 Pg C (1 Pg = 10¹⁵ g) in 2005 to 9.97 Pg C (95% confidence interval: 8.98 ~ 11.07 Pg C) in 2050, with an overall net C gain of 78.8 Tg C yr⁻¹ (56.7 ~ 103.3 Tg C yr⁻¹; 1 Tg = 10¹² g). Our findings suggest that China's forests will be a large and persistent biomass C sink through 2050.

  13. Matrix Management: Is It Really Conflict Management.

    DTIC Science & Technology

    1976-11-01

    A036 516, Defense Systems Management College, Fort Belvoir, VA. Matrix Management: Is It Really Conflict Management? Nov 76, R. P. ... "conflict management." As a result, sensitivity training for managers and their subordinates was conducted in order to implement the matrix concept. ... Wilemon, "Conflict Management in Project Life Cycles," Sloan Management Review, Vol. 16, Spring 1975, pp. 31-50. Thamhain, Hans J., and David L. ...

  14. IPMA Standard Competence Scope in Project Management Education

    ERIC Educational Resources Information Center

    Bartoška, Jan; Flégl, Martin; Jarkovská, Martina

    2012-01-01

    The authors of the paper endeavoured to identify key competences in the IPMA standard for educational approaches in project management. These key competences may be used as a basis for project management university courses. An incidence matrix was set up, containing relations between IPMA competences described in the IPMA Competence Baseline. Further,…

  15. Priority Determination for AVC Funded R&D Projects.

    ERIC Educational Resources Information Center

    Wilkinson, Gene L.

    As an extension of ideas suggested in an earlier paper which proposed a project control system for Indiana University's Audio-Visual Center (see EM 010 306), this paper examines the establishment of project legitimacy and priority within the system and reviews the need to stimulate specific research proposals as well as generating a matrix of…

  16. Secret Message Decryption: Group Consulting Projects Using Matrices and Linear Programming

    ERIC Educational Resources Information Center

    Gurski, Katharine F.

    2009-01-01

    We describe two short group projects for finite mathematics students that incorporate matrices and linear programming into fictional consulting requests presented as a letter to the students. The students are required to use mathematics to decrypt secret messages in one project involving matrix multiplication and inversion. The second project…

  17. Mat-Rix-Toe: Improving Writing through a Game-Based Project in Linear Algebra

    ERIC Educational Resources Information Center

    Graham-Squire, Adam; Farnell, Elin; Stockton, Julianna Connelly

    2014-01-01

    The Mat-Rix-Toe project utilizes a matrix-based game to deepen students' understanding of linear algebra concepts and strengthen students' ability to express themselves mathematically. The project was administered in three classes using slightly different approaches, each of which included some editing component to encourage the…

  18. High Strain Rate Deformation Modeling of a Polymer Matrix Composite. Part 1; Matrix Constitutive Equations

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Stouffer, Donald C.

    1998-01-01

    Recently, applications have exposed polymer matrix composite materials to very high strain rate loading conditions, requiring an ability to understand and predict the material behavior under these extreme conditions. In this first paper of a two-part report, background information is presented, along with the constitutive equations which will be used to model the rate dependent nonlinear deformation response of the polymer matrix. Strain rate dependent inelastic constitutive models which were originally developed to model the viscoplastic deformation of metals have been adapted to model the nonlinear viscoelastic deformation of polymers. The modified equations were correlated by analyzing the tensile/compressive response of both 977-2 toughened epoxy matrix and PEEK thermoplastic matrix over a variety of strain rates. For the cases examined, the modified constitutive equations appear to do an adequate job of modeling the polymer deformation response. A second follow-up paper will describe the implementation of the polymer deformation model into a composite micromechanical model, to allow for the modeling of the nonlinear, rate dependent deformation response of polymer matrix composites.

  19. Selection of reservoirs amenable to micellar flooding. First annual report, October 1978-December 1979

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldburg, A.; Price, H.

    The overall project objective is to build a solid engineering base upon which the Department of Energy (DOE) can improve and accelerate the application of micellar-polymer recovery technology to Mid-Continent and California sandstone reservoirs. The purpose of the work carried out under these two contracts is to significantly aid both DOE and the private sector in attaining the following project objectives: to select the better micellar-polymer prospects in the Mid-Continent and California regions; to assess all of the available field and laboratory data bearing on recovering oil by micellar-polymer projects in order to help identify and resolve both the technical and economic constraints relating thereto; and to design and analyze improved field pilots and tests and to develop a micellar-polymer applications matrix for use by the potential technology users, i.e., owner/operators. The report includes the following: executive summary and project objectives; development of a predictive model for economic evaluation of reservoirs; reservoir data bank for micellar-polymer recovery evaluation; PECON program for preliminary economic evaluation; ordering of candidate reservoirs for additional data acquisition; validation of predictive model by numerical simulation; and work forecast. Tables, figures and references are included.

  20. ASTM and VAMAS activities in titanium matrix composites test methods development

    NASA Technical Reports Server (NTRS)

    Johnson, W. S.; Harmon, D. M.; Bartolotta, P. A.; Russ, S. M.

    1994-01-01

    Titanium matrix composites (TMC's) are being considered for a number of aerospace applications ranging from high performance engine components to airframe structures in areas that require high stiffness to weight ratios at temperatures up to 400 C. TMC's exhibit unique mechanical behavior due to fiber-matrix interface failures, matrix cracks bridged by fibers, thermo-viscoplastic behavior of the matrix at elevated temperatures, and the development of significant thermal residual stresses in the composite due to fabrication. Standard testing methodology must be developed to reflect the uniqueness of this type of material system. The purpose of this paper is to review the current activities in ASTM and the Versailles Project on Advanced Materials and Standards (VAMAS) that are directed toward the development of standard test methodology for titanium matrix composites.

  1. Bayes linear covariance matrix adjustment

    NASA Astrophysics Data System (ADS)

    Wilkinson, Darren J.

    1995-12-01

    In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be amenable to a similar approach. Diagnostics for matrix adjustments are also discussed.

  2. Thinking about Bacillus subtilis as a multicellular organism.

    PubMed

    Aguilar, Claudio; Vlamakis, Hera; Losick, Richard; Kolter, Roberto

    2007-12-01

    Initial attempts to use colony morphogenesis as a tool to investigate bacterial multicellularity were limited by the fact that laboratory strains often have lost many of their developmental properties. Recent advances in elucidating the molecular mechanisms underlying colony morphogenesis have been made possible through the use of undomesticated strains. In particular, Bacillus subtilis has proven to be a remarkable model system to study colony morphogenesis because of its well-characterized developmental features. Genetic screens that analyze mutants defective in colony morphology have led to the discovery of an intricate regulatory network that controls the production of an extracellular matrix. This matrix is essential for the development of complex colony architecture characterized by aerial projections that serve as preferential sites for sporulation. While much progress has been made, the challenge for future studies will be to determine the underlying mechanisms that regulate development such that differentiation occurs in a spatially and temporally organized manner.

  3. Effects of Different Camera Motions on the Error in Estimates of Epipolar Geometry between Two Dimensional Images in Order to Provide a Framework for Solutions to Vision Based Simultaneous Localization and Mapping (SLAM)

    DTIC Science & Technology

    2007-09-01

    the projective camera matrix (P), which is a 3×4 matrix that represents both the intrinsic and extrinsic parameters of a camera. It is used to ... K contains the intrinsic parameters of the camera and [R|t] represents the extrinsic parameters of the camera. By definition, the extrinsic ... extrinsic parameters are known then the camera is said to be calibrated. If only the intrinsic parameters are known, then the projective camera can
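The decomposition described in this snippet, P = K[R|t], can be illustrated numerically; every parameter value below is invented for the example:

```python
import numpy as np

# intrinsic matrix K: focal lengths and principal point (values invented)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# extrinsic parameters: a rotation about the Y axis plus a translation (invented)
theta = np.deg2rad(10.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([[0.1], [0.0], [2.0]])

P = K @ np.hstack([R, t])      # the 3x4 projective camera matrix P = K[R|t]

# project a homogeneous 3D world point to pixel coordinates
Xw = np.array([0.5, 0.2, 4.0, 1.0])
x = P @ Xw
u, v = x[0] / x[2], x[1] / x[2]
```

When both K and [R|t] are known the camera is calibrated in the sense the snippet describes, and P projects world points directly to pixels.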

  4. Application of Krylov exponential propagation to fluid dynamics equations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef; Semeraro, David

    1991-01-01

    An application of matrix exponentiation via Krylov subspace projection to the solution of fluid dynamics problems is presented. The main idea is to approximate the operation exp(A)v by means of a projection-like process onto a Krylov subspace. This results in a computation of an exponential matrix-vector product similar to the one above but of a much smaller size. Time integration schemes can then be devised to exploit this basic computational kernel. The motivation of this approach is to provide time-integration schemes that are essentially of an explicit nature but which have good stability properties.
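A minimal sketch of the exp(A)v approximation via Arnoldi projection (not the authors' implementation; the toy operator here is a symmetric 1-D diffusion stencil, so the small projected matrix can be exponentiated by eigendecomposition):

```python
import numpy as np

def krylov_expv(A, v, m=20):
    """Approximate exp(A) @ v via Arnoldi projection onto an m-dim Krylov subspace."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # invariant subspace found (happy breakdown)
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    # exponentiate the small projected matrix; for symmetric A it is symmetric
    # tridiagonal (up to rounding), so an eigendecomposition suffices
    evals, Q = np.linalg.eigh((Hm + Hm.T) / 2)
    expH = Q @ np.diag(np.exp(evals)) @ Q.T
    return beta * V[:, :m] @ expH[:, 0]   # exp(A)v ~= beta * V_m exp(H_m) e_1

# toy operator: scaled 1-D diffusion stencil (symmetric), as in explicit time stepping
n = 100
A = 0.1 * (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
u = krylov_expv(A, np.ones(n), m=15)
```

The projected matrix is only m×m, which is the "much smaller size" the abstract refers to; a time integrator would call this kernel once per step.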

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Thao D.; Grazier, John Mark; Boyce, Brad Lee

    Biological tissues are uniquely structured materials with technologically appealing properties. Soft tissues, such as skin, are constructed from a composite of strong fibrils and fluid-like matrix components. This was the first coordinated experimental/modeling project at Sandia or in the open literature to consider the mechanics of micromechanically-based anisotropy and viscoelasticity of soft biological tissues. We have exploited and applied Sandia's expertise in experimentation and mechanics modeling to better elucidate the behavior of collagen fibril-reinforced soft tissues. The purpose of this project was to provide a detailed understanding of the deformation of ocular tissues, specifically the highly structured skin-like tissue in the cornea. This discovery improved our knowledge of soft/complex materials testing and modeling. It also provided insight into the way that cornea tissue is bio-engineered such that under physiologically-relevant conditions it has a unique set of properties which enhance functionality. These results also provide insight into how non-physiologic loading conditions, such as corrective surgeries, may push the cornea outside of its natural design window, resulting in unexpected non-linear responses. Furthermore, this project created a clearer understanding of the mechanics of soft tissues that could lead to bio-inspired materials, such as highly supple and impact resistant body armor, and improve our design of human-machine interfaces, such as micro-electrical-mechanical (MEMS) based prosthetics.

  6. Innovative Remediation Technologies: Field-Scale Demonstration Projects in North America, 2nd Edition

    EPA Pesticide Factsheets

    This report consolidates key reference information in a matrix that allows project managers to quickly identify new technologies that may answer their cleanup needs, and contacts for obtaining technology demonstration results and other information.

  7. Measuring biotechnology employees' ethical attitudes towards a controversial transgenic cattle project: the ethical valence matrix.

    PubMed

    Small, Bruce H; Fisher, Mark W

    2005-01-01

    What is the relationship between biotechnology employees' beliefs about the moral outcomes of a controversial transgenic research project and their attitudes of acceptance towards the project? To answer this question, employees (n=466) of a New Zealand company, AgResearch Ltd., were surveyed regarding a project to create transgenic cattle containing a synthetic copy of the human myelin basic protein gene (hMBP). Although diversity existed amongst employees' attitudes of acceptance, they were generally in favor of the project, believed that it should be allowed to proceed to completion, and agreed that it is acceptable to use transgenic cattle to produce medicines for humans. These three items were aggregated to form a project acceptance score. Scales were developed to measure respondents' beliefs about the moral outcomes of the project for identified stakeholders in terms of the four principles of common morality (benefit, non-harm, justice, and autonomy). These data were statistically aggregated into an Ethical Valence Matrix for the project. The respondents' project Ethical Valence Scores correlated significantly with their project acceptance scores (r=0.64, p<0.001), accounting for 41% of the variance in respondents' acceptance attitudes. Of the four principles, non-harm had the strongest correlation with attitude to the project (r=0.59), followed by benefit and justice (both r=0.54), then autonomy (r=0.44). These results indicate that beliefs about the moral outcomes of a research project, in terms of the four principles approach, are strongly related to, and may be significant determinants of, attitudes to the research project. This suggests that, for employees of a biotechnology organization, ethical reasoning could be a central mechanism for the evaluation of the acceptability of a project.
We propose that the Ethical Valence Matrix may be used as a tool to measure ethical attitudes towards controversial issues, providing a metric for comparing perceived ethical consequences across multiple stakeholder groups and for evaluating and comparing the ethical consequences of competing alternative issues or projects. The tool could be used to measure both public and special-interest groups' ethical attitudes, with the results used for the development of socially responsible policy, or by science organizations as a democratizing decision aid for selecting amongst projects competing for scarce research funds.

  8. Convergence of Transition Probability Matrix in CLV-Markov Models

    NASA Astrophysics Data System (ADS)

    Permana, D.; Pasaribu, U. S.; Indratno, S. W.; Suprayogi, S.

    2018-04-01

    A transition probability matrix is an arrangement of the transition probabilities from one state to another in a Markov chain model (MCM). One interesting aspect of the MCM is its long-run behavior, which is derived from a property of the n-step transition probability matrix: the convergence of the n-step transition matrix as n tends to infinity. Mathematically, establishing this convergence means finding the limit of the transition matrix raised to the power n as n tends to infinity. The convergent form of the transition probability matrix is of particular interest because it brings the matrix to its stationary form, which is useful for predicting the probabilities of transitions between states in the future. The method usually used to find the convergence of a transition probability matrix is the limiting-distribution process. In this paper, the convergence of the transition probability matrix is instead obtained using a simple concept from linear algebra, namely diagonalizing the matrix. This method has a higher level of complexity because it requires carrying out the diagonalization, but it has the advantage of yielding a general form for the nth power of the transition probability matrix, which is useful for examining the transition matrix before it becomes stationary. Example cases are taken from a customer lifetime value (CLV) model based on the MCM, called the CLV-Markov model. Several models are examined through their transition probability matrices to find their convergent forms. The result is that the convergence of the transition probability matrix obtained through diagonalization agrees with that obtained by the commonly used limiting-distribution method.
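    The diagonalization route described above can be sketched in a few lines (a minimal illustration using a hypothetical 3-state matrix, not the paper's CLV data):

```python
import numpy as np

# A small illustrative transition matrix (rows sum to 1); values are
# hypothetical, not taken from the paper's CLV data.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# Diagonalize: P = V diag(w) V^{-1}, hence P^n = V diag(w^n) V^{-1}.
w, V = np.linalg.eig(P)
Vinv = np.linalg.inv(V)

def P_power(n):
    """General closed form for P**n via the eigendecomposition."""
    return (V @ np.diag(w ** n) @ Vinv).real
```

Because the eigendecomposition gives a closed form for every power n, the pre-stationary transition matrices come for free; as n grows, all eigenvalues with modulus below one vanish and P**n converges to its stationary form.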

  9. Simulation for Wind Turbine Generators -- With FAST and MATLAB-Simulink Modules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, M.; Muljadi, E.; Jonkman, J.

    This report presents the work done to develop generator and gearbox models in the Matrix Laboratory (MATLAB) environment and couple them to the National Renewable Energy Laboratory's Fatigue, Aerodynamics, Structures, and Turbulence (FAST) program. The goal of this project was to interface the superior aerodynamic and mechanical models of FAST to the excellent electrical generator models found in various Simulink libraries and applications. The scope was limited to Type 1, Type 2, and Type 3 generators and fairly basic gear-train models. Future work will include models of Type 4 generators and more-advanced gear-train models with increased degrees of freedom. As described in this study, implementation of the developed drivetrain model enables the software tool to be used in many ways. Several case studies are presented as examples of the many types of studies that can be performed using this tool.

  10. Path-Dependent Travel Time Prediction Variance and Covariance for a Global Tomographic P- and S-Velocity Model

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Ballard, S.; Begnaud, M. L.; Encarnacao, A. V.; Young, C. J.; Phillips, W. S.

    2015-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P- and S-velocity model (SALSA3D) that provides superior first P and first S travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from our latest tomographic model. Typical global 3D SALSA3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes a prior model covariance constraint) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition, update, and scaling operations. We first find the Cholesky decomposition of GTG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel time prediction uncertainty for the single path.
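    The final step above reduces to a quadratic form in the model covariance matrix; a toy sketch (illustrative sizes and random stand-in matrices, not SALSA3D data) is:

```python
import numpy as np

# Toy sketch: given a model covariance matrix Cm and two ray-path
# sensitivity vectors (path lengths sampled in each model cell), the
# travel-time covariance is a quadratic form.
rng = np.random.default_rng(0)
n = 6                                  # model nodes (real models: ~5e5)
A = rng.standard_normal((n, n))
Cm = A @ A.T + n * np.eye(n)           # symmetric positive-definite stand-in

ray1 = rng.random(n)                   # path lengths of ray 1 through each cell
ray2 = rng.random(n)

cov_12 = ray1 @ Cm @ ray2              # travel-time covariance between the paths
var_1 = ray1 @ Cm @ ray1               # setting the paths equal gives the variance...
sigma_1 = np.sqrt(var_1)               # ...and its square root the uncertainty
```

At real problem sizes Cm never fits in memory, which is why the abstract resorts to out-of-core blocked multiplication; the algebra per ray pair is unchanged.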

  11. Basal Ganglia Disorders Associated with Imbalances in the Striatal Striosome and Matrix Compartments

    PubMed Central

    Crittenden, Jill R.; Graybiel, Ann M.

    2011-01-01

    The striatum is composed principally of GABAergic, medium spiny striatal projection neurons (MSNs) that can be categorized based on their gene expression, electrophysiological profiles, and input–output circuits. Major subdivisions of MSN populations include (1) those in ventromedial and dorsolateral striatal regions, (2) those giving rise to the direct and indirect pathways, and (3) those that lie in the striosome and matrix compartments. The first two classificatory schemes have enabled advances in understanding of how basal ganglia circuits contribute to disease. However, despite the large number of molecules that are differentially expressed in the striosomes or the extra-striosomal matrix, and the evidence that these compartments have different input–output connections, our understanding of how this compartmentalization contributes to striatal function is still not clear. A broad view is that the matrix contains the direct and indirect pathway MSNs that form parts of sensorimotor and associative circuits, whereas striosomes contain MSNs that receive input from parts of limbic cortex and project directly or indirectly to the dopamine-containing neurons of the substantia nigra, pars compacta. Striosomes are widely distributed within the striatum and are thought to exert global, as well as local, influences on striatal processing by exchanging information with the surrounding matrix, including through interneurons that send processes into both compartments. It has been suggested that striosomes exert and maintain limbic control over behaviors driven by surrounding sensorimotor and associative parts of the striatal matrix. Consistent with this possibility, imbalances between striosome and matrix functions have been reported in relation to neurological disorders, including Huntington’s disease, L-DOPA-induced dyskinesias, dystonia, and drug addiction. 
Here, we consider how signaling imbalances between the striosomes and matrix might relate to symptomatology in these disorders. PMID:21941467

  12. E-learning for textile enterprises innovation improvement

    NASA Astrophysics Data System (ADS)

    Blaga, M.; Harpa, R.; Radulescu, I. R.; Stepjanovic, Z.

    2017-10-01

    The Erasmus Plus project TEXMatrix: “Matrix of knowledge for innovation and competitiveness in textile enterprises”, financed through the Erasmus+ Programme, Strategic Partnerships - KA2 for Vocational Education and Training, aims at spreading a creative and innovative organizational culture inside textile enterprises by transferring and implementing methodologies, tools and concepts for improved training. Five European partners form the project consortium: INCDTP - Bucharest, Romania (coordinator), TecMinho - Portugal, Centrocot - Italy, University of Maribor, Slovenia, and the “Gheorghe Asachi” Technical University of Iasi, Romania. These partners will help the textile enterprises involved in the project to learn how to apply creative thinking in their organizations and how to develop the capacity for innovation and change. The project aims to bridge the gap between textile enterprises' need for qualified personnel and the young workforce. It develops an innovative knowledge matrix for the tangible and intangible assets of an enterprise and a benchmarking study, based on which a dedicated software tool will be created. This software tool will aid decision-making enterprise staff (managers, HR specialists, professionals) as well as trainees (young employees, students, and scholars) in coping with the new challenges of innovation and competitiveness in the textile field. The purpose of this paper is to present the main objectives and achievements of the project, according to its declared goals, with a focus on the knowledge matrix of innovation, which is a powerful instrument for the quantification of the intangible assets of textile enterprises.

  13. Acausal measurement-based quantum computing

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki

    2014-07-01

    In measurement-based quantum computing, there is a natural "causal cone" among qubits of the resource state, since the measurement angle on a qubit has to depend on previous measurement results in order to correct the effect of by-product operators. If we respect the no-signaling principle, by-product operators cannot be avoided. Here we study the possibility of acausal measurement-based quantum computing by using the process matrix framework [Oreshkov, Costa, and Brukner, Nat. Commun. 3, 1092 (2012), 10.1038/ncomms2076]. We construct a resource process matrix for acausal measurement-based quantum computing restricting local operations to projective measurements. The resource process matrix is an analog of the resource state of the standard causal measurement-based quantum computing. We find that if we restrict local operations to projective measurements the resource process matrix is (up to a normalization factor and trivial ancilla qubits) equivalent to the decorated graph state created from the graph state of the corresponding causal measurement-based quantum computing. We also show that it is possible to consider a causal game whose causal inequality is violated by acausal measurement-based quantum computing.

  14. Inferring monopartite projections of bipartite networks: an entropy-based approach

    NASA Astrophysics Data System (ADS)

    Saracco, Fabio; Straka, Mika J.; Di Clemente, Riccardo; Gabrielli, Andrea; Caldarelli, Guido; Squartini, Tiziano

    2017-05-01

    Bipartite networks are currently regarded as providing a major insight into the organization of many real-world systems, unveiling the mechanisms driving the interactions occurring between distinct groups of nodes. One of the most important issues encountered when modeling bipartite networks is devising a way to obtain a (monopartite) projection on the layer of interest, which preserves as much as possible the information encoded into the original bipartite structure. In the present paper we propose an algorithm to obtain statistically-validated projections of bipartite networks, according to which any two nodes sharing a statistically-significant number of neighbors are linked. Since assessing the statistical significance of nodes similarity requires a proper statistical benchmark, here we consider a set of four null models, defined within the exponential random graph framework. Our algorithm outputs a matrix of link-specific p-values, from which a validated projection is straightforwardly obtainable, upon running a multiple hypothesis testing procedure. Finally, we test our method on an economic network (i.e. the countries-products World Trade Web representation) and a social network (i.e. MovieLens, collecting the users’ ratings of a list of movies). In both cases non-trivial communities are detected: while projecting the World Trade Web on the countries layer reveals modules of similarly-industrialized nations, projecting it on the products layer allows communities characterized by an increasing level of complexity to be detected; in the second case, projecting MovieLens on the films layer allows clusters of movies whose affinity cannot be fully accounted for by genre similarity to be individuated.
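    As a simplified illustration of validating a projected link: the paper defines its null models within the exponential random graph framework, but the sketch below substitutes a plain hypergeometric benchmark (a common simpler choice) to show how a p-value for the number of shared neighbors can be computed.

```python
from math import comb

def shared_neighbor_pvalue(k_i, k_j, n_opposite, shared):
    """P(X >= shared) under a hypergeometric null: of n_opposite nodes
    on the opposite layer, node i links to k_i of them; X is the overlap
    of a uniformly random k_j-subset (node j's neighbors) with i's.
    NOTE: a simplified benchmark, not the paper's ERG null models."""
    total = comb(n_opposite, k_j)
    return sum(comb(k_i, x) * comb(n_opposite - k_i, k_j - x)
               for x in range(shared, min(k_i, k_j) + 1)) / total

# e.g. two users each rating 10 of 100 movies: a large shared count
# yields a small p-value, flagging a candidate link in the projection.
```

Running this test over all node pairs yields the matrix of link-specific p-values described above, to which a multiple-hypothesis correction is then applied.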

  15. Robust Computation of Linear Models, or How to Find a Needle in a Haystack

    DTIC Science & Technology

    2012-02-17

    robustly, project it onto a sphere, and then apply standard PCA. This approach is due to [LMS+99]. Maronna et al. [MMY06] recommend it as a preferred...of this form is due to Chandrasekaran et al. [CSPW11]. Given an observed matrix X, they propose to solve the semidefinite problem minimize ‖P‖S1 + γ...regularization parameter γ negotiates a tradeoff between the two goals. Candès et al. [CLMW11] study the performance of (2.1) for robust linear

  16. Controller reduction by preserving impulse response energy

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.; Su, Tzu-Jeng

    1989-01-01

    A model order reduction algorithm based on a Krylov recurrence formulation is developed to reduce the order of controllers. The reduced-order controller is obtained by projecting the full-order LQG controller onto a Krylov subspace in which either the controllability or the observability grammian is equal to the identity matrix. The reduced-order controller preserves the impulse response energy of the full-order controller and has a parameter-matching property. Two numerical examples drawn from the controller reduction literature are used to illustrate the efficacy of the proposed reduction algorithm.

  17. System parameter identification from projection of inverse analysis

    NASA Astrophysics Data System (ADS)

    Liu, K.; Law, S. S.; Zhu, X. Q.

    2017-05-01

    The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is re-visited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy than with the conventional response sensitivity-based method.
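    A minimal sketch of the projection step, under the simplifying assumption that the principal components are taken from the SVD of the sensitivity matrix itself (the paper derives them from the analytical output data, and iterates with model updating):

```python
import numpy as np

# Sketch of inverse sensitivity analysis projected onto principal
# components; all matrices are illustrative stand-ins.
rng = np.random.default_rng(1)
m, p = 40, 3                      # measured outputs, unknown parameters
S = rng.standard_normal((m, p))   # sensitivity matrix (first-order Taylor)
true_dtheta = np.array([0.5, -0.2, 0.1])
dy = S @ true_dtheta              # observed minus analytical output

# Project the identification equation dy = S dtheta onto the leading
# principal components before solving in the least-squares sense.
U, s, Vt = np.linalg.svd(S, full_matrices=False)
k = 3                             # retained components (assumption)
Uk = U[:, :k]
dtheta = np.linalg.lstsq(Uk.T @ S, Uk.T @ dy, rcond=None)[0]
```

In the noiseless toy case the projected system recovers the parameter perturbation exactly; the benefit of the subspace projection shows up once measurement noise is added.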

  18. Model reduction of nonsquare linear MIMO systems using multipoint matrix continued-fraction expansions

    NASA Technical Reports Server (NTRS)

    Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San

    1994-01-01

    This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.

  19. Terrestrial population models for ecological risk assessment: A state-of-the-art review

    USGS Publications Warehouse

    Emlen, J.M.

    1989-01-01

    Few attempts have been made to formulate models for predicting impacts of xenobiotic chemicals on wildlife populations. However, considerable effort has been invested in wildlife optimal exploitation models. Because death from intoxication has a similar effect on population dynamics as death by harvesting, these management models are applicable to ecological risk assessment. An underlying Leslie-matrix bookkeeping formulation is widely applicable to vertebrate wildlife populations. Unfortunately, however, the various submodels that track birth, death, and dispersal rates as functions of the physical, chemical, and biotic environment are by their nature almost inevitably highly species- and locale-specific. Short-term prediction of one-time chemical applications requires only information on mortality before and after contamination. In such cases a simple matrix formulation may be adequate for risk assessment. But generally, risk must be projected over periods of a generation or more. This precludes generic protocols for risk assessment and also the ready and inexpensive predictions of a chemical's influence on a given population. When designing and applying models for ecological risk assessment at the population level, the endpoints (output) of concern must be carefully and rigorously defined. The most easily accessible and appropriate endpoints are (1) pseudoextinction (the frequency or probability of a population falling below a prespecified density), and (2) temporal mean population density. Spatial and temporal extent of predicted changes must be clearly specified a priori to avoid apparent contradictions and confusion.
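    The Leslie-matrix bookkeeping mentioned above can be illustrated in a few lines (age-class values are hypothetical, not from any wildlife dataset):

```python
import numpy as np

# Minimal Leslie-matrix projection for a 3-age-class population:
# fecundities on the first row, survival rates on the subdiagonal.
L = np.array([[0.0, 1.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.7, 0.0]])

n = np.array([100.0, 50.0, 20.0])      # initial abundances per age class

for _ in range(50):                    # project 50 time steps
    n = L @ n                          # one step of bookkeeping

# The asymptotic growth rate is the dominant eigenvalue of L; a
# chemical-induced drop in a survival entry lowers it, which is how
# harvesting-style models translate mortality into population impact.
lam = max(abs(np.linalg.eigvals(L)))
```

Pseudoextinction risk, one of the endpoints named above, is typically estimated by running many such projections with stochastic vital rates and counting how often total abundance falls below the prespecified density.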

  20. Investigation of Coupled model of Pore network and Continuum in shale gas

    NASA Astrophysics Data System (ADS)

    Cao, G.; Lin, M.

    2016-12-01

    Flow in shale spans many scales, rendering the majority of conventional treatment methods ineffective. For effective simulation, a coupled model of pore scale and continuum scale is proposed in this paper. Based on SEM images, we decompose organic-rich shale into two subdomains: kerogen and inorganic matrix. In kerogen, the nanoscale pore network is the main storage space and migration pathway, so molecular phenomena (slip and diffusive transport) are significant. In the inorganic matrix, with relatively large pores and micro-fractures, the flow is approximately Darcian. We use pore-scale network models (PNM) to represent kerogen and continuum-scale models (FVM or FEM) to represent the matrix. Finite element mortars are employed to couple the pore- and continuum-scale models by enforcing continuity of pressures and fluxes at shared boundary interfaces. In our method, the process in the coupled model is described by a pressure-squared equation with Dirichlet boundary conditions. We discuss several problems: the optimal element number of mortar faces, the two categories of boundary faces of the pore network, the difference between 2D and 3D models, and the difference between the continuum models FVM and FEM in mortars. We conclude that: (1) too coarse a mesh in the mortars decreases accuracy, while too fine a mesh leads to an ill-conditioned or even singular system; the optimal element number depends on the number of boundary pores and nodes. (2) When pore network models are adjacent to two different mortar faces (PNM to PNM, PNM to continuum model), incidental repeated mortar nodes must be deleted. (3) 3D models can be replaced by 2D models under certain conditions. (4) FVM is more convenient than FEM, owing to its simplicity in assigning interface node pressures and calculating interface fluxes.
This work is supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB10020302), the 973 Program (2014CB239004), the Key Instrument Developing Project of the CAS (ZDYZ2012-1-08-02), the National Natural Science Foundation of China (41574129).

  1. Transformer-4 version 2.0.1, a free multi-platform software to quickly reformat genotype matrices of any marker type, and archive them in the Demiurge information system.

    PubMed

    Caujapé-Castells, Juli; Sabbagh, Izzat; Castellano, José J; Ramos, Rafael; Henríquez, Víctor; Quintana, Francisco M; Medina, Dailos A; Toledo, Javier; Ramírez, Fernando; Rodríguez, Juan F

    2013-05-01

    Transformer-4 version 2.0.1 (T4) is a multi-platform freeware programmed in Java that can transform a genotype matrix in Excel or XML format into the input formats of one or several of the most commonly used population genetic software, for any possible combination of the populations that the matrix contains. T4 also allows the users to (i) draw allozyme gel interpretations for any number of diploid individuals, and then generate a genotype matrix ready to be used by T4; and (ii) produce basic reports about the data in the matrices. Furthermore, T4 is the only way to optionally submit 'genetic diversity digests' for publication in the Demiurge online information system (http://www.demiurge-project.org). Each such digest undergoes peer-review, and it consists of a geo-referenced data matrix in the tfm4 format plus any ancillary document or hyperlink that the digest authors see fit to include. The complementarity between T4 and Demiurge facilitates a free, safe, permanent, and standardized data archival and analysis system for researchers, and may also be a convenient resource for scientific journals, public administrations, or higher educators. T4 and its converters are freely available (at, respectively, http://www.demiurge-project.org/download_t4 and http://www.demiurge-project.org/converterstore) upon registration in the Demiurge information system (http://demiurge-project.org/register). Users have to click on the link provided on an account validation email, and accept Demiurge's terms of use (see http://www.demiurge-project.org/termsofuse). A thorough user's guide is available within T4. A 3-min promotional video about T4 and Demiurge can be seen at http://vimeo.com/29828406. © 2013 Blackwell Publishing Ltd.

  2. Application of Matrix Projection Exposure Using a Liquid Crystal Display Panel to Fabricate Thick Resist Molds

    NASA Astrophysics Data System (ADS)

    Fukasawa, Hirotoshi; Horiuchi, Toshiyuki

    2009-08-01

    The patterning characteristics of matrix projection exposure using an analog liquid crystal display (LCD) panel in place of a reticle were investigated, in particular for oblique patterns. In addition, a new method for fabricating practical thick resist molds was developed. First, an exposure system fabricated in earlier research was reconstructed. Changes in the illumination optics and the projection lens were the main improvements. Using fly's eye lenses, the illumination light intensity distribution was homogenized. The projection lens was changed from a common camera lens to a higher-grade telecentric lens. In addition, although the same metal halide lamp was used as an exposure light source, the central exposure wavelength was slightly shortened from 480 to 450 nm to obtain higher resist sensitivity while maintaining almost equivalent contrast between black and white. Circular and radial patterns with linewidths of approximately 6 µm were uniformly printed in all directions throughout the exposure field owing to these improvements. The patterns were smoothly printed without the stepwise roughness caused by the cell matrix array. On the basis of these results, a new method of fabricating thick resist molds for electroplating was investigated. It is known that thick resist molds fabricated using the negative resist SU-8 (MicroChem) are useful because very high aspect ratio patterns are printable and the side walls are perpendicular to the substrate surfaces. However, the most suitable exposure wavelength of SU-8 is 365 nm, and SU-8 is insensitive to light of 450 nm wavelength, which is most appropriate for LCD matrix exposure. For this reason, a novel multilayer resist process was proposed, and micromolds of SU-8 of 50 µm thickness were successfully obtained. As a result, the feasibility of fabricating complex resist molds including oblique patterns was demonstrated.

  3. Fast Compressive Tracking.

    PubMed

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter the drift problem: as a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy, and robustness.
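    A sketch of such a very sparse measurement matrix (an Achlioptas-style construction commonly used for this purpose; the dimensions and sparsity factor are illustrative assumptions, not the paper's settings):

```python
import numpy as np

# Very sparse random projection: entries are sqrt(s) w.p. 1/(2s),
# 0 w.p. 1 - 1/s, and -sqrt(s) w.p. 1/(2s), so most of the matrix
# is zero and the projection is cheap to apply.
rng = np.random.default_rng(2)
n, m = 10_000, 50                 # feature-space dim -> compressed dim
s = 3.0                           # sparsity factor (illustrative)

R = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
               size=(m, n),
               p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])

x = rng.standard_normal(n)        # a high-dimensional feature vector
v = R @ x                         # compressed feature fed to the classifier
```

Because the matrix is data-independent, it is generated once and reused for both foreground and background samples, exactly as the abstract describes.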

  4. NCTR using a polarization-agile coherent radar system

    NASA Astrophysics Data System (ADS)

    Walton, E. K.; Moffatt, D. L.; Garber, F. D.; Kamis, A.; Lai, C. Y.

    1986-01-01

    This report describes the results of the first year of a research project performed by the Ohio State University ElectroScience Laboratory (OSU/ESL) for the Naval Weapons Center (NWC). The goal of this project is to explore the use of the polarization properties of the signal scattered from a radar target for the purpose of radar target identification. Various radar target identification algorithms were applied to the case of a full polarization coherent radar system, and were tested using a specific data base and noise model. The data base used to test the performance of the radar target identification algorithms developed here is a unique set of measurements made on scale models of aircraft. Measurements were made using the OSU/ESL Compact Radar Measurement Range. The range was operated in a broad-band (1-12 GHZ) mode and the full polarization matrix was measured. Calibrated values (amplitude and phase) of the RCS for the three polarization states were thus available. The polarization states are listed below.

  5. Construction of fuzzy spaces and their applications to matrix models

    NASA Astrophysics Data System (ADS)

    Abe, Yasuhiro

    Quantization of spacetime by means of finite-dimensional matrices is the basic idea of fuzzy spaces. Although quantizing time remains an issue, the idea is simple and provides an interesting interplay of various ideas in mathematics and physics. Shedding some light on such an interplay is the main theme of this dissertation. The dissertation roughly separates into two parts. In the first part, we consider rather mathematical aspects of fuzzy spaces, namely, their construction. We begin with a review of the construction of fuzzy complex projective spaces CPk (k = 1, 2, ...) in relation to geometric quantization. This construction facilitates defining symbols and star products on fuzzy CPk. Algebraic construction of fuzzy CPk is also discussed. We then present the construction of fuzzy S4, utilizing the fact that CP3 is an S2 bundle over S4. Fuzzy S4 is obtained by imposing an additional algebraic constraint on fuzzy CP3. Consequently it is proposed that coordinates on fuzzy S4 are described by certain block-diagonal matrices. It is also found that fuzzy S8 can analogously be constructed. In the second part of this dissertation, we consider applications of fuzzy spaces to physics. We first consider theories of gravity on fuzzy spaces, anticipating that they may offer a novel way of regularizing spacetime dynamics. We obtain actions for gravity on fuzzy S2 and on fuzzy CP3 in terms of finite-dimensional matrices. Application to M(atrix) theory is also discussed. With the introduction of extra potentials to the theory, we show that it also has new brane solutions whose transverse directions are described by fuzzy S4 and fuzzy CP3. The extra potentials can be considered as fuzzy versions of differential forms or fluxes, which enable us to discuss compactification models of M(atrix) theory. In particular, compactification down to fuzzy S4 is discussed and a realistic matrix model of M-theory in four dimensions is proposed.

  6. An active learning representative subset selection method using net analyte signal.

    PubMed

    He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi

    2018-05-05

    To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, it is generally not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference in the Euclidean norm of the net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vectors, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying the projection matrix with the spectra of the samples. The scalar value of the NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and the samples with the largest distance are added to the selected set sequentially. Last, the concentration of the analyte is measured so that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
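
    As a rough illustration of the selection step, the following sketch (all data and names hypothetical) builds a NAS projector from pure-component spectra, computes each candidate's NAS scalar, and greedily adds the candidate farthest in NAS norm from the already-selected samples; note the paper computes the projector from calibration concentrations and spectra, so using pure spectra here is a simplification:

```python
import numpy as np

def nas_projector(pure, k):
    """Projector onto the orthogonal complement of the other analytes'
    pure spectra (rows of `pure` excluding analyte k)."""
    others = np.delete(pure, k, axis=0)
    return np.eye(pure.shape[1]) - np.linalg.pinv(others) @ others

def select_samples(scalars, n_select):
    """Greedy selection: start from the largest NAS scalar, then repeatedly
    add the candidate farthest (in NAS norm) from every selected sample."""
    selected = [int(np.argmax(scalars))]
    while len(selected) < n_select:
        dist = np.min(np.abs(scalars[:, None] - scalars[selected][None, :]), axis=1)
        dist[selected] = -np.inf                 # never re-select a sample
        selected.append(int(np.argmax(dist)))
    return selected

rng = np.random.default_rng(0)
pure = rng.random((3, 50))                       # 3 pure spectra, 50 wavelengths
candidates = rng.random((20, 3)) @ pure          # 20 candidate mixture spectra
P = nas_projector(pure, k=0)                     # NAS projector for analyte 0
scalars = np.linalg.norm(candidates @ P, axis=1) # NAS scalar per candidate
chosen = select_samples(scalars, n_select=5)
print(chosen)
```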

  7. An active learning representative subset selection method using net analyte signal

    NASA Astrophysics Data System (ADS)

    He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi

    2018-05-01

    To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, it is generally not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference in the Euclidean norm of the net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vectors, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying the projection matrix with the spectra of the samples. The scalar value of the NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and the samples with the largest distance are added to the selected set sequentially. Last, the concentration of the analyte is measured so that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced.

  8. Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models

    NASA Astrophysics Data System (ADS)

    Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing

    2018-06-01

    The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.
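
    The paper's variational Bayesian/GAMP machinery is involved; as a simpler stand-in, the sketch below performs low-rank completion by iterative soft-thresholded SVD (a SoftImpute-style method, not the authors' algorithm), on synthetic data:

```python
import numpy as np

def soft_impute(M, mask, lam=0.5, n_iter=200):
    """Fill the entries of M where mask is False with a low-rank estimate by
    iterating: impute missing entries, SVD, soft-threshold singular values."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(np.where(mask, M, X), full_matrices=False)
        s = np.maximum(s - lam, 0.0)          # soft-threshold singular values
        X = (U * s) @ Vt
    return X

# Toy rank-2 matrix with roughly 40% of entries hidden.
rng = np.random.default_rng(1)
truth = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random((30, 30)) > 0.4
est = soft_impute(truth, mask)
err = np.linalg.norm((est - truth)[~mask]) / np.linalg.norm(truth[~mask])
print(f"relative error on hidden entries: {err:.3f}")
```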

  9. Eikonal Scattering in the sdg Interacting Boson Model: Analytical Results in the SUsdg(3) Limit and Their Generalizations

    NASA Astrophysics Data System (ADS)

    Kota, V. K. B.

    A general expression for the representation matrix elements in the SUsdg(3) limit of the sdg interacting boson model (sdgIBM) is derived; these matrix elements determine the scattering amplitude in the eikonal approximation for medium-energy proton-nucleus scattering when the target nucleus is deformed and described by the SUsdg(3) limit. The SUsdg(3) result is generalized to two important situations: (i) when the target-nucleus ground-band states are described as states arising from angular momentum projection of a general single Kπ = 0+ intrinsic state in the sdg space; (ii) for rotational bands built on one-phonon excitations in the sdgIBM.

  10. Portraying Reflexivity in Health Services Research.

    PubMed

    Rae, John; Green, Bill

    2016-09-01

    A model is proposed for supporting reflexivity in qualitative health research, informed by arguments from Bourdieu and Finlay. Bourdieu refers to mastering the subjective relation to the object at three levels: the overall social space, the field of specialists, and the scholastic universe. The model overlays Bourdieu's levels of objectivation with Finlay's three stages of research (pre-research, data collection, and data analysis). The intersections of these two ways of considering reflexivity, displayed as cells of a matrix, pose questions and offer prompts to productively challenge health researchers' reflexivity. Portraiture is used to show how these challenges and prompts can facilitate such reflexivity, as illustrated in a research project. © The Author(s) 2016.

  11. Optical Modeling Activities for the James Webb Space Telescope (JWST) Project. II; Determining Image Motion and Wavefront Error Over an Extended Field of View with a Segmented Optical System

    NASA Technical Reports Server (NTRS)

    Howard, Joseph M.; Ha, Kong Q.

    2004-01-01

    This is part two of a series on the optical modeling activities for JWST. Starting with the linear optical model discussed in part one, we develop centroid and wavefront error sensitivities for the special case of a segmented optical system such as JWST, where the primary mirror consists of 18 individual segments. Our approach extends standard sensitivity matrix methods used for systems consisting of monolithic optics, where the image motion is approximated by averaging ray coordinates at the image and residual wavefront error is determined with global tip/tilt removed. We develop an exact formulation using the linear optical model, and extend it to cover multiple field points for performance prediction at each instrument aboard JWST. This optical model is then driven by thermal and dynamic structural perturbations in an integrated modeling environment. Results are presented.
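
    The linearized-sensitivity idea can be sketched as follows (purely illustrative matrices, not JWST's): wavefront error responds linearly to segment rigid-body perturbations through a sensitivity matrix, and the residual is quoted after removing global piston/tip/tilt by least squares:

```python
import numpy as np

rng = np.random.default_rng(2)
n_seg, n_dof, n_pix = 18, 6, 200                      # 18 segments, 6 DOFs each
S_wfe = rng.standard_normal((n_pix, n_seg * n_dof))   # wavefront sensitivity matrix
dp = rng.standard_normal(n_seg * n_dof) * 1e-3        # segment perturbations
w = S_wfe @ dp                                        # linearized wavefront error

# Quote the residual WFE with global piston/tip/tilt removed by least squares.
xy = rng.random((n_pix, 2)) - 0.5                     # pupil sample coordinates
T = np.column_stack([np.ones(n_pix), xy])             # piston, tip, tilt modes
residual = w - T @ np.linalg.lstsq(T, w, rcond=None)[0]
rms_wfe = np.sqrt(np.mean(residual ** 2))
print(f"residual RMS WFE: {rms_wfe:.4g}")
```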

  12. The New Multi-HAzard and MulTi-RIsK Assessment MethodS for Europe (MATRIX) Project - An overview of its major findings

    NASA Astrophysics Data System (ADS)

    Fleming, Kevin; Zschau, Jochen; Gasparini, Paolo

    2014-05-01

    Recent major natural disasters, such as the 2011 Tōhoku earthquake, tsunami and subsequent Fukushima nuclear accident, have raised awareness of the frequent and potentially far-reaching interconnections between natural hazards. Such interactions occur at the hazard level, where an initial hazard may trigger other events (e.g., an earthquake triggering a tsunami) or several events may occur concurrently (or nearly so), e.g., severe weather around the same time as an earthquake. Interactions also occur at the vulnerability level, where the initial event may make the affected community more susceptible to the negative consequences of another event (e.g., an earthquake weakens buildings, which are then damaged further by windstorms). There is also a temporal element involved, where changes in exposure may alter the total risk to a given area. In short, there is the likelihood that the total risk estimated when considering multiple hazards and risks and their interactions is greater than the sum of their individual parts. It is with these issues in mind that the European Commission, under its FP7 program, supported the New Multi-HAzard and MulTi-RIsK Assessment MethodS for Europe or MATRIX project (10.2010 to 12.2013). MATRIX set out to tackle multiple natural hazards (i.e., those of concern to Europe, namely earthquakes, landslides, volcanoes, tsunamis, wildfires, storms, and fluvial and coastal flooding) and risks within a common theoretical framework. The MATRIX work plan proceeded from an assessment of single-type risk methodologies (including how uncertainties should be treated), cascade effects within a multi-hazard environment, time-dependent vulnerability, decision making and support for multi-hazard mitigation and adaptation, and an assessment of how the multi-hazard and risk viewpoint may be integrated into current decision making and risk mitigation programs, considering the existing single-hazard and risk focus. 
Three test sites were considered during the project: Naples, Cologne, and the French West Indies. In addition, a software platform, the MATRIX-Common IT sYstem (MATRIX-CITY), was developed to allow the evaluation of characteristic multi-hazard and risk scenarios in comparison to single-type analyses. This presentation therefore outlines the more significant outcomes of the project, in particular those dealing with the harmonization of single-type hazards, cascade event analysis, time-dependent vulnerability changes and the response of the disaster management community to the MATRIX point of view.

  13. OCO-2 Column Carbon Dioxide and Biometric Data Jointly Constrain Parameterization and Projection of a Global Land Model

    NASA Astrophysics Data System (ADS)

    Shi, Z.; Crowell, S.; Luo, Y.; Rayner, P. J.; Moore, B., III

    2015-12-01

    Uncertainty in the predicted carbon-climate feedback largely stems from poor parameterization of global land models. However, calibrating global land models with observations has been extremely challenging for at least two reasons. First, we lack global data products from systematic measurements of land surface processes. Second, the computational demand of estimating model parameters is insurmountable due to the complexity of global land models. In this project, we will use OCO-2 retrievals of dry air mole fraction XCO2 and solar-induced fluorescence (SIF) to independently constrain estimates of net ecosystem exchange (NEE) and gross primary production (GPP). The constrained NEE and GPP will be combined with data products of global standing biomass, soil organic carbon, and soil respiration to improve the Community Land Model version 4.5 (CLM4.5). Specifically, we will first develop a high-fidelity emulator of CLM4.5 according to the matrix representation of the terrestrial carbon cycle. It has been shown that the emulator fully represents the original model and can be used effectively for data assimilation to constrain parameter estimation. We will focus on calibrating the key model parameters for the carbon cycle (e.g., maximum carboxylation rate, turnover times and transfer coefficients of soil carbon pools, and temperature sensitivity of respiration). The Bayesian Markov chain Monte Carlo (MCMC) method will be used to assimilate the global databases into the high-fidelity emulator to constrain the model parameters, which will then be incorporated back into the original CLM4.5. The calibrated CLM4.5 will be used to make scenario-based projections. In addition, we will conduct observing system simulation experiments (OSSEs) to evaluate how the sampling frequency and length could affect the model constraint and prediction.
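
    The matrix representation referred to above can be sketched with a hypothetical 3-pool toy model, dX/dt = B u(t) - A K X, in which the emulator reduces the land model's carbon dynamics to matrix operations (all parameter values below are illustrative, not CLM4.5's):

```python
import numpy as np

B = np.array([0.7, 0.3, 0.0])              # allocation of carbon input to pools
K = np.diag([0.5, 0.1, 0.01])              # turnover rates (1/yr)
A = np.array([[ 1.0,  0.0, 0.0],           # transfer matrix: off-diagonal entries
              [-0.4,  1.0, 0.0],           # are the (negative) fractions passed
              [-0.1, -0.3, 1.0]])          # from faster to slower pools

def step(X, u, dt=0.1):
    # One explicit-Euler step of dX/dt = B*u - A @ K @ X
    return X + dt * (B * u - A @ K @ X)

X = np.zeros(3)
for _ in range(10000):                     # spin up under constant input u = 10
    X = step(X, u=10.0)
steady = np.linalg.solve(A @ K, B * 10.0)  # analytic steady state of the matrix model
print(X, steady)                           # spun-up state matches the analytic one
```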

  14. Data-Driven Learning of Q-Matrix

    PubMed Central

    Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang

    2013-01-01

    The recent surge of interest in cognitive assessment has led to the development of novel statistical models for diagnostic classification. Central to many such models is the well-known Q-matrix, which specifies the item-attribute relationships. This article proposes a data-driven approach to identification of the Q-matrix and estimation of related model parameters. A key ingredient is a flexible T-matrix that relates the Q-matrix to response patterns. The flexibility of the T-matrix allows the construction of a natural criterion function as well as a computationally amenable algorithm. Simulation results are presented to demonstrate the usefulness and applicability of the proposed method. An extension to handling of the Q-matrix with partial information is presented. The proposed method also provides a platform on which important statistical issues, such as hypothesis testing and model selection, may be formally addressed. PMID:23926363
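
    To make the Q-matrix's item-attribute role concrete, here is a minimal DINA-style illustration (noise-free ideal responses, not the article's T-matrix construction): an examinee answers an item correctly iff they master every attribute the Q-matrix requires for that item:

```python
import numpy as np

# Q-matrix: rows = items, columns = attributes; Q[j, k] = 1 if item j
# requires attribute k.
Q = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0],
              [0, 1, 1]])

def ideal_response(alpha, Q):
    """Ideal (noise-free) responses of an examinee with attribute profile
    alpha: 1 iff all attributes required by the item are mastered."""
    return np.all(alpha[None, :] >= Q, axis=1).astype(int)

alpha = np.array([1, 1, 0])       # masters attributes 1 and 2 only
print(ideal_response(alpha, Q))   # → [1 1 1 0]: item 4 needs attribute 3
```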

  15. Design and implementation of ergonomic performance measurement system at a steel plant in India.

    PubMed

    Ray, Pradip Kumar; Tewari, V K

    2012-01-01

    The management of Tata Steel, the largest private-sector steel-making company in India, felt the need to develop a framework to determine the levels of ergonomic performance at its different workplaces. The objectives of the study are manifold: to identify and characterize the ergonomic variables for a given worksystem with regard to work efficiency, operator safety, and working conditions, and to design a comprehensive Ergonomic Performance Indicator (EPI) for quantitative determination of the ergonomic status and maturity of a given worksystem. The IIT Kharagpur study team consisted of three faculty members, and the management of Tata Steel formed an eleven-member team for implementation of the EPI model. In order to design and develop the EPI model with the full participation and understanding of the personnel of Tata Steel concerned, a three-phase action plan for the project was prepared. The project consists of three phases: preparation and data collection, detailed structuring, and validation of the EPI model. Identification of ergonomic performance factors, development of an interaction matrix, design of an assessment tool, and testing and validation of the assessment tool (EPI) in varied situations are the major steps in these phases. The case study discusses the EPI model and its applications in detail.

  16. Detection of Nitrogen Content in Rubber Leaves Using Near-Infrared (NIR) Spectroscopy with Correlation-Based Successive Projections Algorithm (SPA).

    PubMed

    Tang, Rongnian; Chen, Xupeng; Li, Chuang

    2018-05-01

    Near-infrared spectroscopy is an efficient, low-cost technology that has potential as an accurate method for detecting the nitrogen content of natural rubber leaves. The successive projections algorithm (SPA) is a widely used variable selection method for multivariate calibration, which uses projection operations to select a variable subset with minimum multi-collinearity. However, due to the fluctuation of the correlation between variables, high collinearity may still exist among non-adjacent variables of the subset obtained by basic SPA. Based on an analysis of the correlation matrix of the spectral data, this paper proposes a correlation-based SPA (CB-SPA) that applies the successive projections algorithm within regions of consistent correlation. The results show that CB-SPA can select variable subsets with more valuable variables and less multi-collinearity. Meanwhile, models established on the CB-SPA subset outperform those on basic SPA subsets in predicting nitrogen content in terms of both cross-validation and external prediction. Moreover, CB-SPA is more efficient, as the time cost of its selection procedure is one-twelfth that of basic SPA.
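
    A minimal sketch of the basic SPA projection loop (synthetic data; CB-SPA's correlation-region refinement is not shown): at each step all candidate columns are projected onto the orthogonal complement of the last selected variable, and the column with the largest remaining norm is selected, which is what keeps the subset's multi-collinearity low:

```python
import numpy as np

def spa(X, k0, n_select):
    """Basic successive projections algorithm: starting from column k0 of the
    spectral matrix X (samples x wavelengths), repeatedly pick the column
    with the largest norm after projecting out all previously selected ones."""
    Xp = X.astype(float).copy()
    selected = [k0]
    for _ in range(n_select - 1):
        x_sel = Xp[:, selected[-1]].copy()
        # Project every column onto the orthogonal complement of x_sel.
        P = np.eye(X.shape[0]) - np.outer(x_sel, x_sel) / (x_sel @ x_sel)
        Xp = P @ Xp
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -1.0               # never re-select a variable
        selected.append(int(np.argmax(norms)))
    return selected

rng = np.random.default_rng(3)
X = rng.random((40, 100))                    # 40 spectra, 100 wavelengths
subset = spa(X, k0=0, n_select=5)
print(subset)
```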

  17. A Visual Servoing-Based Method for ProCam Systems Calibration

    PubMed Central

    Berry, Francois; Aider, Omar Ait; Mosnier, Jeremie

    2013-01-01

    Projector-camera systems are currently used in a wide field of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements, depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty in obtaining a set of 2D–3D points to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems that consists in projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes. Thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods considering both usability and accuracy. PMID:24084121
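
    Once a set of 2D-3D correspondences is available, the projection matrix can be estimated with the standard direct linear transform (DLT); a compact sketch on synthetic data (this is the textbook estimation step, not the paper's visual-servoing pipeline, and the camera below is hypothetical):

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Estimate the 3x4 projection matrix P (x ~ P X) from >= 6 2D-3D
    correspondences with the direct linear transform (DLT)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xh = [X, Y, Z, 1.0]
        rows.append([0.0] * 4 + [-x for x in Xh] + [v * x for x in Xh])
        rows.append(Xh + [0.0] * 4 + [-u * x for x in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)       # null vector = smallest singular vector

# Synthetic check: a plausible pinhole camera and 8 points in front of it.
P_true = np.array([[800.0,   0.0, 320.0, 0.0],
                   [  0.0, 800.0, 240.0, 0.0],
                   [  0.0,   0.0,   1.0, 5.0]])
rng = np.random.default_rng(4)
pts3d = rng.random((8, 3)) * 10.0
homog = np.hstack([pts3d, np.ones((8, 1))]) @ P_true.T
pts2d = homog[:, :2] / homog[:, 2:3]
P_est = dlt_projection_matrix(pts3d, pts2d)
# DLT recovers P only up to scale/sign; align with the ground truth.
P_est *= (P_true.ravel() @ P_est.ravel()) / (P_est.ravel() @ P_est.ravel())
print(np.max(np.abs(P_est - P_true)))
```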

  18. SPECT data acquisition and image reconstruction in a stationary small animal SPECT/MRI system

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Chen, Si; Yu, Jianhua; Meier, Dirk; Wagenaar, Douglas J.; Patt, Bradley E.; Tsui, Benjamin M. W.

    2010-04-01

    The goal of the study was to investigate data acquisition strategies and image reconstruction methods for a stationary SPECT insert that can operate inside an MRI scanner with a 12 cm bore diameter for simultaneous SPECT/MRI imaging of small animals. The SPECT insert consists of 3 octagonal rings of 8 MR-compatible CZT detectors per ring surrounding a multi-pinhole (MPH) collimator sleeve. Each pinhole is constructed to project the field-of-view (FOV) to one CZT detector. All 24 pinholes are focused to a cylindrical FOV of 25 mm in diameter and 34 mm in length. The data acquisition strategies we evaluated were optional collimator rotations to improve tomographic sampling, and the image reconstruction methods were iterative ML-EM with and without compensation for the geometric response function (GRF) of the MPH collimator. For this purpose, we developed an analytic simulator that calculates the system matrix with the GRF models of the MPH collimator. The simulator was used to generate projection data of a digital rod phantom with pinhole aperture sizes of 1 mm and 2 mm and with different collimator rotation patterns. Iterative ML-EM reconstruction with and without GRF compensation was used to reconstruct the projection data from the central ring of 8 detectors only, and from all 24 detectors. Our results indicated that without GRF compensation and at the default design of 24 projection views, the reconstructed images had significant artifacts. Accurate GRF compensation substantially improved the reconstructed image resolution and reduced image artifacts. With accurate GRF compensation, useful reconstructed images can be obtained using 24 projection views only. This last finding potentially enables dynamic SPECT (and/or MRI) studies in small animals, one of many possible application areas of the SPECT/MRI system. 
Further research efforts are warranted including experimentally measuring the system matrix for improved geometrical accuracy, incorporating the co-registered MRI image in SPECT reconstruction, and exploring potential applications of the simultaneous SPECT/MRI SA system including dynamic SPECT studies.
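
    The ML-EM iteration used for reconstruction has a compact form; the sketch below runs it on a random toy system matrix (a real MPH system matrix would encode the pinhole geometry and the GRF, which is exactly what the compensation amounts to):

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """ML-EM update x <- x * A^T(y / Ax) / (A^T 1). A more accurate system
    matrix A (e.g., one modeling the collimator GRF) improves the result."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image (A^T 1)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # measured / predicted projections
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(5)
A = rng.random((120, 60))                      # toy system matrix: 120 bins, 60 voxels
x_true = rng.random(60)
y = A @ x_true                                 # noise-free projection data
x_est = mlem(A, y, n_iter=2000)
err = np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true)
print(f"relative error: {err:.3f}")
```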

  19. Calculating Path-Dependent Travel Time Prediction Variance and Covariance for the SALSA3D Global Tomographic P-Velocity Model with a Distributed Parallel Multi-Core Computer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.

    2011-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first-P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential to have a means of calculating high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of half a million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB of storage for one half of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix G (which includes Tikhonov regularization terms) is multiplied by its transpose (G^T G) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^T G)^-1 by assigning blocks to individual processing nodes for matrix decomposition, update, and scaling operations. We first find the Cholesky decomposition of G^T G, which is subsequently inverted. Next, we employ OOC matrix multiply methods to calculate the model covariance matrix from (G^T G)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with an arbitrary pair of ray paths by integrating the model covariance along both ray paths; setting the two paths equal gives the variance for a single path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
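
    The final step, obtaining travel-time variance and covariance from the model covariance matrix, reduces to quadratic forms with ray sensitivity vectors; a toy sketch with a tiny dense model and hypothetical sensitivities (a real SALSA3D-scale covariance would be handled out-of-core):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50                                   # tiny model (SALSA3D has ~5e5 nodes)
L = rng.standard_normal((n, n))
C = L @ L.T / n + 0.01 * np.eye(n)       # SPD model covariance matrix

# Each ray path is summarized by a sensitivity vector g (path length of the
# ray in each model cell); the discrete form of "integrating the model
# covariance along both ray paths" is the quadratic form g1^T C g2.
g1 = rng.random(n)
g2 = rng.random(n)

cov_12 = g1 @ C @ g2                     # travel-time covariance of two paths
var_1 = g1 @ C @ g1                      # setting the paths equal: variance
print(cov_12, var_1)
```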

  20. Tuning stochastic matrix models with hydrologic data to predict the population dynamics of a riverine fish.

    PubMed

    Sakaris, Peter C; Irwin, Elise R

    2010-03-01

    We developed stochastic matrix models to evaluate the effects of hydrologic alteration and variable mortality on the population dynamics of a lotic fish in a regulated river system. Models were applied to a representative lotic fish species, the flathead catfish (Pylodictis olivaris), for which two populations were examined: a native population from a regulated reach of the Coosa River (Alabama, USA) and an introduced population from an unregulated section of the Ocmulgee River (Georgia, USA). Size-classified matrix models were constructed for both populations, and residuals from catch-curve regressions were used as indices of year class strength (i.e., recruitment). A multiple regression model indicated that recruitment of flathead catfish in the Coosa River was positively related to the frequency of spring pulses between 283 and 566 m^3/s. For the Ocmulgee River population, multiple regression models indicated that year class strength was negatively related to mean March discharge and positively related to June low flow. When the Coosa population was modeled to experience five consecutive years of favorable hydrologic conditions during a 50-year projection period, it exhibited a substantial spike in size and increased at an overall 0.2% annual rate. When modeled to experience five years of unfavorable hydrologic conditions, the Coosa population initially exhibited a decrease in size but later stabilized and increased at a 0.4% annual rate following the decline. When the Ocmulgee River population was modeled to experience five years of favorable conditions, it exhibited a substantial spike in size and increased at an overall 0.4% annual rate. After the Ocmulgee population experienced five years of unfavorable conditions, a sharp decline in population size was predicted. However, the population quickly recovered, with population size increasing at a 0.3% annual rate following the decline. 
In general, stochastic population growth in the Ocmulgee River was more erratic and variable than population growth in the Coosa River. We encourage ecologists to develop similar models for other lotic species, particularly in regulated river systems. Successful management of fish populations in regulated systems requires that we are able to predict how hydrology affects recruitment and will ultimately influence the population dynamics of fishes.
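
    The modeling approach can be sketched as a toy stochastic Lefkovitch projection in which a random annual recruitment multiplier, standing in for hydrologically driven year-class strength, scales the fertilities (all parameters below are illustrative, not the paper's flathead catfish estimates):

```python
import numpy as np

rng = np.random.default_rng(7)

def project(n0, years, recruit_mult):
    """Project a 3-stage population forward, redrawing the fertility
    multiplier (year-class strength) each year."""
    n = n0.astype(float)
    sizes = []
    for _ in range(years):
        f = recruit_mult()                        # stochastic recruitment
        A = np.array([[0.0, 1.5 * f, 4.0 * f],    # fertilities, scaled by f
                      [0.3, 0.4,     0.0],        # growth / survival
                      [0.0, 0.5,     0.8]])
        n = A @ n
        sizes.append(n.sum())
    return np.array(sizes)

sizes = project(np.array([100, 50, 20]), 50,
                lambda: rng.lognormal(mean=0.1, sigma=0.3))
print(sizes[-1] / sizes[0])                       # overall growth over the run
```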

  1. Effective correlator for RadioAstron project

    NASA Astrophysics Data System (ADS)

    Sergeev, Sergey

    This paper presents the implementation of a software FX correlator for very long baseline interferometry, adapted for the RadioAstron project. The correlator is implemented for heterogeneous computing systems using graphics accelerators. It is shown that implementing the interferometry task on graphics hardware is highly efficient. The host processor of the heterogeneous computing system forms the data flow for the graphics accelerators, whose number corresponds to the number of frequency channels; for the RadioAstron project there are seven such channels. Each accelerator computes the correlation matrix for all baselines for a single frequency channel. The initial data are converted to floating-point format and corrected with the corresponding delay function, and the entire correlation matrix is computed simultaneously. The calculation of the correlation matrix is performed using the sliding Fourier transform. Thus, thanks to the match between this problem and the architecture of graphics accelerators, a single Kepler-platform processor achieves the same performance on this task as a four-node Intel computing cluster. The task scales successfully not only to a large number of graphics accelerators but also to a large number of nodes with multiple accelerators.
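
    The F and X steps of such a correlator can be sketched in a few lines (delay correction and fringe rotation omitted for brevity; NumPy stands in for the GPU kernels, and the two-station data are synthetic):

```python
import numpy as np

def fx_correlate(signals, nchan=64):
    """Toy FX correlator: channelize each station's time series with an FFT
    ("F"), then average cross-spectra over segments for every baseline ("X")."""
    n_st, n_samp = signals.shape
    segs = signals[:, : n_samp - n_samp % nchan].reshape(n_st, -1, nchan)
    spec = np.fft.fft(segs, axis=2)          # F step: per-segment spectra
    # X step: correlation matrix for all baselines, per frequency channel.
    corr = np.einsum('isf,jsf->ijf', spec, np.conj(spec)) / segs.shape[1]
    return corr

rng = np.random.default_rng(8)
common = rng.standard_normal(4096)           # signal seen by both stations
x = common + 0.1 * rng.standard_normal(4096)
y = common + 0.1 * rng.standard_normal(4096)
corr = fx_correlate(np.stack([x, y]), nchan=64)
print(corr.shape)                            # (2, 2, 64): baselines x channels
```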

  2. Discriminant projective non-negative matrix factorization.

    PubMed

    Guan, Naiyang; Zhang, Xiang; Luo, Zhigang; Tao, Dacheng; Yang, Xuejun

    2013-01-01

    Projective non-negative matrix factorization (PNMF) projects high-dimensional non-negative examples X onto a lower-dimensional subspace spanned by a non-negative basis W and considers W^T X as their coefficients, i.e., X ≈ WW^T X. Since PNMF learns the natural parts-based representation W of X, it has been widely used in many fields such as pattern recognition and computer vision. However, PNMF does not perform well in classification tasks because it completely ignores the label information of the dataset. This paper proposes a Discriminant PNMF method (DPNMF) to overcome this deficiency. In particular, DPNMF applies Fisher's criterion to PNMF to utilize the label information. Like PNMF, DPNMF learns a single non-negative basis matrix and has a lower computational burden than NMF. In contrast to PNMF, DPNMF maximizes the distance between the centers of any two classes of examples while minimizing the distance between any two examples of the same class in the lower-dimensional subspace, and thus has more discriminant power. We develop a multiplicative update rule to solve DPNMF and prove its convergence. Experimental results on four popular face image datasets confirm its effectiveness in comparison with representative NMF and PNMF algorithms.

  3. Development of an Engineered Product Storage Concept for the UREX+1 Combined Transuranic/Lanthanide Product Streams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. Sean M. McDeavitt; Thomas J. Downar; Dr. Temitope A. Taiwo

    2009-03-01

    The U.S. Department of Energy is developing next-generation processing methods to recycle uranium and transuranic (TRU) isotopes from spent nuclear fuel. The objective of the 3-year project described in this report was to develop near-term options for storing TRU oxides isolated through the uranium extraction (UREX+) process. More specifically, a Zircaloy matrix cermet was developed as a storage form for transuranics, with the understanding that the cermet also has the ability to serve as an inert matrix fuel form for TRU burning after intermediate storage. The goals of this research project were: 1) to develop the processing steps required to transform the effluent TRU nitrate solutions and the spent Zircaloy cladding into a zirconium matrix cermet storage form; and 2) to evaluate the impact of the phenomena that govern the durability of the storage form, material processing, and TRU utilization in fast reactor fuel. This report represents a compilation of the results generated under this program. The information is presented as a brief technical narrative in the following sections, with appended papers, presentations, and academic theses providing a detailed review of the project's accomplishments.

  4. Discriminant Projective Non-Negative Matrix Factorization

    PubMed Central

    Guan, Naiyang; Zhang, Xiang; Luo, Zhigang; Tao, Dacheng; Yang, Xuejun

    2013-01-01

    Projective non-negative matrix factorization (PNMF) projects high-dimensional non-negative examples X onto a lower-dimensional subspace spanned by a non-negative basis W and considers W^T X as their coefficients, i.e., X ≈ WW^T X. Since PNMF learns the natural parts-based representation W of X, it has been widely used in many fields such as pattern recognition and computer vision. However, PNMF does not perform well in classification tasks because it completely ignores the label information of the dataset. This paper proposes a Discriminant PNMF method (DPNMF) to overcome this deficiency. In particular, DPNMF applies Fisher's criterion to PNMF to utilize the label information. Like PNMF, DPNMF learns a single non-negative basis matrix and has a lower computational burden than NMF. In contrast to PNMF, DPNMF maximizes the distance between the centers of any two classes of examples while minimizing the distance between any two examples of the same class in the lower-dimensional subspace, and thus has more discriminant power. We develop a multiplicative update rule to solve DPNMF and prove its convergence. Experimental results on four popular face image datasets confirm its effectiveness in comparison with representative NMF and PNMF algorithms. PMID:24376680

  5. QCD Dirac operator at nonzero chemical potential: lattice data and matrix model.

    PubMed

    Akemann, Gernot; Wettig, Tilo

    2004-03-12

    Recently, a non-Hermitian chiral random matrix model was proposed to describe the eigenvalues of the QCD Dirac operator at nonzero chemical potential. This matrix model can be constructed from QCD by mapping it to an equivalent matrix model which has the same symmetries as QCD with chemical potential. Its microscopic spectral correlations are conjectured to be identical to those of the QCD Dirac operator. We investigate this conjecture by comparing large ensembles of Dirac eigenvalues in quenched SU(3) lattice QCD at a nonzero chemical potential to the analytical predictions of the matrix model. Excellent agreement is found in the two regimes of weak and strong non-Hermiticity, for several different lattice volumes.

  6. Construction of energy-stable Galerkin reduced order models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalashnikova, Irina; Barone, Matthew Franklin; Arunajatesan, Srinivasan

    2013-05-01

This report aims to unify several approaches for building stable projection-based reduced order models (ROMs). Attention is focused on linear time-invariant (LTI) systems. The model reduction procedure consists of two steps: the computation of a reduced basis, and the projection of the governing partial differential equations (PDEs) onto this reduced basis. Two kinds of reduced bases are considered: the proper orthogonal decomposition (POD) basis and the balanced truncation basis. The projection step of the model reduction can be done in two ways: via continuous projection or via discrete projection. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of PDEs using continuous projection is proposed. The idea is to apply to the set of PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The resulting ROM will be energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special weighted L2 inner product, termed the "symmetry inner product". Attention is then turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, a weighted L2 inner product termed the "Lyapunov inner product", is derived. The weighting matrix that defines the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. It is shown that a ROM constructed via discrete projection using the Lyapunov inner product will be energy-stable for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons are also made between the symmetry inner product and the Lyapunov inner product. The performance of ROMs constructed using these inner products is evaluated on several benchmark test cases.
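The discrete-projection recipe described in this record can be sketched in a few lines: for a stable LTI system, solve a Lyapunov equation for the weighting matrix P, then Galerkin-project in the P-weighted inner product. The system and reduced basis below are random placeholders, not taken from the report.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, k = 50, 5

# Hypothetical stable LTI system dx/dt = A x (shifted to guarantee stability)
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)

# Lyapunov inner product weight: solve A^T P + P A = -Q with Q > 0
Q = np.eye(n)
P = solve_continuous_lyapunov(A.T, -Q)

# Arbitrary reduced basis (any choice works, per the report's stability result)
V, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Galerkin projection in the P-weighted inner product
Ar = np.linalg.solve(V.T @ P @ V, V.T @ P @ A @ V)

# Energy stability: every ROM eigenvalue lies in the left half-plane
print(np.linalg.eigvals(Ar).real.max() < 0)  # True
```

The stability check holds for any basis V because x'Px is a Lyapunov function for the full system, and the projection preserves its decay.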

  7. Geometric Modeling of Construction Communications with Specified Dynamic Properties

    NASA Astrophysics Data System (ADS)

    Korotkiy, V. A.; Usmanova, E. A.; Khmarova, L. I.

    2017-11-01

Among construction communications, pipelines designed for the organized supply or removal of liquid or loose working media are distinguished by their functional purpose. Such communications should have dynamic properties which allow one to reduce losses to friction and vortex formation. From the point of view of geometric modeling, the given dynamic properties of the designed communication mean a required degree of smoothness of its center line. To model the axial line (flat or spatial), it is proposed to use composite curves consisting of arcs of second-order curves or of their quadratic images. The advantage of the proposed method is that the designer gets the model of a given curve not as a set of coordinates of its points but in the form of a matrix of coefficients of the canonical equations for each arc.
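As a sketch of the coefficient-matrix representation (using a hypothetical ellipse, not one of the paper's pipeline axes), a second-order curve can be stored as the symmetric 3×3 matrix of its canonical equation in homogeneous coordinates:

```python
import numpy as np

# A second-order (conic) arc stored as the symmetric coefficient matrix C of
# its canonical equation p^T C p = 0, with p = (x, y, 1) homogeneous.
# Hypothetical example: the ellipse x^2/4 + y^2 - 1 = 0.
C = np.array([[0.25, 0.0,  0.0],
              [0.0,  1.0,  0.0],
              [0.0,  0.0, -1.0]])

def on_conic(x, y, C, tol=1e-9):
    """Check whether point (x, y) satisfies the conic's canonical equation."""
    p = np.array([x, y, 1.0])
    return abs(p @ C @ p) < tol

print(on_conic(2.0, 0.0, C))  # True: (2, 0) lies on the ellipse
```

A composite center line would then be a list of such matrices, one per arc.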

  8. The Cutting Edge of High-Temperature Composites

    NASA Technical Reports Server (NTRS)

    2006-01-01

NASA's Ultra-Efficient Engine Technology (UEET) program was formed in 1999 at Glenn Research Center to manage an important national propulsion program for the Space Agency. The UEET program's focus is on developing innovative technologies to enable intelligent, environmentally friendly, and clean-burning turbine engines capable of reducing harmful emissions while maintaining high performance and increasing reliability. Seven technology projects exist under the program, with each project working towards specific goals to provide new technology for propulsion. One of these projects, Materials and Structures for High Performance, is concentrating on developing and demonstrating advanced high-temperature materials to enable high-performance, high-efficiency, and environmentally compatible propulsion systems. Materials include ceramic matrix composite (CMC) combustor liners and turbine vanes, disk alloys, turbine airfoil material systems, high-temperature polymer matrix composites, and lightweight materials for static engine structures.

  9. Application of fiber bridging models to fatigue crack growth in unidirectional titanium matrix composites

    NASA Technical Reports Server (NTRS)

    Bakuckas, J. G., Jr.; Johnson, W. S.

    1992-01-01

    Several fiber bridging models were reviewed and applied to study the matrix fatigue crack growth behavior in center notched (0)(sub 8) SCS-6/Ti-15-3 and (0)(sub 4) SCS-6/Ti-6Al-4V laminates. Observations revealed that fatigue damage consisted primarily of matrix cracks and fiber matrix interfacial failure in the (0)(sub 8) SCS-6/Ti-15-3 laminates. Fiber-matrix interface failure included fracture of the brittle reaction zone and cracking between the two carbon rich fiber coatings. Intact fibers in the wake of the matrix cracks reduce the stress intensity factor range. Thus, an applied stress intensity factor range is inappropriate to characterize matrix crack growth behavior. Fiber bridging models were used to determine the matrix stress intensity factor range in titanium metal matrix composites. In these models, the fibers in the wake of the crack are idealized as a closure pressure. An unknown constant frictional shear stress is assumed to act along the debond or slip length of the bridging fibers. The frictional shear stress was used as a curve fitting parameter to available data (crack growth data, crack opening displacement data, and debond length data). Large variations in the frictional shear stress required to fit the experimental data indicate that the fiber bridging models in their present form lack predictive capabilities. However, these models provide an efficient and relatively simple engineering method for conducting parametric studies of the matrix growth behavior based on constituent properties.

  10. PEPFAR support of alcohol-HIV prevention activities in Namibia and Botswana: a framework for investigation, implementation and evaluation.

    PubMed

    Glenshaw, M; Deluca, N; Adams, R; Parry, C; Fritz, K; Du Preez, V; Voetsch, K; Lekone, P; Seth, P; Bachanas, P; Grillo, M; Kresina, T F; Pick, B; Ryan, C; Bock, N

    2016-01-01

    The association between harmful use of alcohol and HIV infection is well documented. To address this dual epidemic, the US President's Emergency Plan for AIDS Relief (PEPFAR) developed and implemented a multi-pronged approach primarily in Namibia and Botswana. We present the approach and preliminary results of the public health investigative and programmatic activities designed, initiated and supported by PEPFAR to combat the harmful use of alcohol and its association as a driver of HIV morbidity and mortality from 2008 to 2013. PEPFAR supported comprehensive alcohol programming using a matrix model approach that combined the socio-ecological framework and the Alcohol Misuse Prevention and Intervention Continuum. This structure enabled seven component objectives: (1) to quantify harmful use of alcohol through rapid assessments; (2) to develop and evaluate alcohol-based interventions; (3) to promote screening programs and alcohol abuse resource services; (4) to support stakeholder networks; (5) to support policy interventions and (6) structural interventions; and (7) to institutionalize universal prevention messages. Targeted PEPFAR support for alcohol activities resulted in several projects to address harmful alcohol use and HIV. Components are graphically conceptualized within the matrix model, demonstrating the intersections between primary, secondary and tertiary prevention activities and individual, interpersonal, community, and societal factors. Key initiative successes included leveraging alcohol harm prevention activities that enabled projects to be piloted in healthcare settings, schools, communities, and alcohol outlets. Primary challenges included the complexity of multi-sectorial programming, varying degrees of political will, and difficulties monitoring outcomes over the short duration of the program.

  11. A creep cavity growth model for creep-fatigue life prediction of a unidirectional W/Cu composite

    NASA Astrophysics Data System (ADS)

    Kim, Young-Suk; Verrilli, Michael J.; Halford, Gary R.

    1992-05-01

    A microstructural model was developed to predict creep-fatigue life in a (0)(sub 4), 9 volume percent tungsten fiber-reinforced copper matrix composite at the temperature of 833 K. The mechanism of failure of the composite is assumed to be governed by the growth of quasi-equilibrium cavities in the copper matrix of the composite, based on the microscopically observed failure mechanisms. The methodology uses a cavity growth model developed for prediction of creep fracture. Instantaneous values of strain rate and stress in the copper matrix during fatigue cycles were calculated and incorporated in the model to predict cyclic life. The stress in the copper matrix was determined by use of a simple two-bar model for the fiber and matrix during cyclic loading. The model successfully predicted the composite creep-fatigue life under tension-tension cyclic loading through the use of this instantaneous matrix stress level. Inclusion of additional mechanisms such as cavity nucleation, grain boundary sliding, and the effect of fibers on matrix-stress level would result in more generalized predictions of creep-fatigue life.

  12. Temperature dependent nonlinear metal matrix laminae behavior

    NASA Technical Reports Server (NTRS)

    Barrett, D. J.; Buesking, K. W.

    1986-01-01

An analytical method is described for computing the nonlinear thermal and mechanical response of laminated plates. The material model focuses upon the behavior of metal matrix materials by relating the nonlinear composite response to plasticity effects in the matrix. The foundation of the analysis is the unidirectional material model, which is used to compute the instantaneous properties of the lamina based upon the properties of the fibers and matrix. The unidirectional model assumes that the fiber properties are constant with temperature and that the matrix can be modelled as a temperature dependent, bilinear, kinematically hardening material. An incremental approach is used to compute average stresses in the fibers and matrix caused by arbitrary mechanical and thermal loads. The layer model is incorporated in an incremental laminated plate theory to compute the nonlinear response of laminated metal matrix composites of general orientation and stacking sequence. The report includes comparisons of the method with other analytical approaches and compares theoretical calculations with measured experimental material behavior. A section is included which describes the limitations of the material model.

  13. A creep cavity growth model for creep-fatigue life prediction of a unidirectional W/Cu composite

    NASA Technical Reports Server (NTRS)

    Kim, Young-Suk; Verrilli, Michael J.; Halford, Gary R.

    1992-01-01

    A microstructural model was developed to predict creep-fatigue life in a (0)(sub 4), 9 volume percent tungsten fiber-reinforced copper matrix composite at the temperature of 833 K. The mechanism of failure of the composite is assumed to be governed by the growth of quasi-equilibrium cavities in the copper matrix of the composite, based on the microscopically observed failure mechanisms. The methodology uses a cavity growth model developed for prediction of creep fracture. Instantaneous values of strain rate and stress in the copper matrix during fatigue cycles were calculated and incorporated in the model to predict cyclic life. The stress in the copper matrix was determined by use of a simple two-bar model for the fiber and matrix during cyclic loading. The model successfully predicted the composite creep-fatigue life under tension-tension cyclic loading through the use of this instantaneous matrix stress level. Inclusion of additional mechanisms such as cavity nucleation, grain boundary sliding, and the effect of fibers on matrix-stress level would result in more generalized predictions of creep-fatigue life.

  14. Design Change Model for Effective Scheduling Change Propagation Paths

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Zhu; Ding, Guo-Fu; Li, Rong; Qin, Sheng-Feng; Yan, Kai-Yin

    2017-09-01

Changes in requirements may increase product development project cost and lead time; it is therefore important to understand how requirement changes propagate in the design of complex product systems and to be able to select the best options to guide design. Most current approaches to design change fail to take the multi-disciplinary coupling relationships and the number of parameters into account in an integrated way. A new design change model is presented to systematically analyze and search change propagation paths. Firstly, a PDS-Behavior-Structure-based design change model is established to describe requirement changes causing design change propagation in the behavior and structure domains. Secondly, a multi-disciplinary oriented behavior matrix is utilized to support change propagation analysis of complex product systems, and the interaction relationships of the matrix elements are used to obtain an initial set of change paths. Finally, a rough set-based propagation space reducing tool is developed to assist in narrowing change propagation paths by computing the importance of the design change parameters. The proposed design change model and its associated tools have been demonstrated on the scheduling change propagation paths of a high-speed train's bogie to show their feasibility and effectiveness. The model not only supports quick response to diversified market requirements, but also helps satisfy customer requirements and reduce product development lead time. It can be applied to a wide range of engineering systems design with improved efficiency.

  15. Modeling for Matrix Multicracking Evolution of Cross-ply Ceramic-Matrix Composites Using Energy Balance Approach

    NASA Astrophysics Data System (ADS)

    Longbiao, Li

    2015-12-01

The matrix multicracking evolution of cross-ply ceramic-matrix composites (CMCs) has been investigated using an energy balance approach. The multicracking of cross-ply CMCs was classified into five modes, i.e., (1) mode 1: transverse multicracking; (2) mode 2: transverse multicracking and matrix multicracking with perfect fiber/matrix interface bonding; (3) mode 3: transverse multicracking and matrix multicracking with fiber/matrix interface debonding; (4) mode 4: matrix multicracking with perfect fiber/matrix interface bonding; and (5) mode 5: matrix multicracking with fiber/matrix interface debonding. The stress distributions of four cracking modes, i.e., modes 1, 2, 3 and 5, are analysed using a shear-lag model. The matrix multicracking evolution of modes 1, 2, 3 and 5 has been determined using the energy balance approach. The effects of ply thickness and fiber volume fraction on the matrix multicracking evolution of cross-ply CMCs have been investigated.

  16. Organizational Models and Mythologies of the American Research University. ASHE 1986 Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Alpert, Daniel

    Features of the matrix model of the research university and myths about the academic enterprise are described, along with serious dissonances in the U.S. university system. The linear model, from which the matrix model evolved, describes the university's structure, perceived mission, and organizational behavior. A matrix model portrays in concise,…

  17. Multi-Length Scale-Enriched Continuum-Level Material Model for Kevlar-Fiber-Reinforced Polymer-Matrix Composites

    DTIC Science & Technology

    2012-08-03

Distribution is unlimited. Report title: Multi-Length Scale-Enriched Continuum-Level Material Model for Kevlar®-Fiber-Reinforced Polymer-Matrix Composites. Keywords: ballistics, composites, Kevlar, material models, microstructural defects. Abstract (truncated): Fiber-reinforced polymer matrix composite materials display quite complex deformation

  18. On the Stark effect in open shell complexes exhibiting partially quenched electronic angular momentum: Infrared laser Stark spectroscopy of OH–C2H2, OH–C2H4, and OH–H2O

    DOE PAGES

    Moradi, Christopher P.; Douberly, Gary E.

    2015-06-22

The Stark effect is considered for polyatomic open shell complexes that exhibit partially quenched electronic angular momentum. Matrix elements of the Stark Hamiltonian represented in a parity conserving Hund's case (a) basis are derived for the most general case, in which the permanent dipole moment has projections on all three inertial axes of the system. Transition intensities are derived, again for the most general case, in which the laser polarization has projections onto axes parallel and perpendicular to the Stark electric field, and the transition dipole moment vector is projected onto all three inertial axes in the molecular frame. As a result, simulations derived from this model are compared to experimental rovibrational Stark spectra of OH–C2H2, OH–C2H4, and OH–H2O complexes formed in helium nanodroplets.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Petrongolo, M; Wang, T

Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of the two basis materials by one order of magnitude without sacrificing spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
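The penalized weighted least-squares step described above can be sketched in one dimension; the inverse-variance weighting mirrors the record's use of the estimated variance-covariance matrix as the penalty weight. All names, sizes, and values below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
truth = np.repeat([1.0, 3.0], n // 2)          # piecewise-constant material image
sigma2 = 0.04 * np.ones(n)                     # estimated per-pixel noise variance
x0 = truth + rng.normal(0.0, np.sqrt(sigma2))  # noisy first-pass decomposition

W = np.diag(1.0 / sigma2)        # inverse variance-covariance as penalty weight
G = np.diff(np.eye(n), axis=0)   # first-difference (smoothness) operator
beta = 5.0                       # regularization strength (illustrative)

# Closed-form minimizer of (x - x0)^T W (x - x0) + beta * ||G x||^2
x = np.linalg.solve(W + beta * G.T @ G, W @ x0)
print(np.std(x - truth) < np.std(x0 - truth))  # denoising reduces the error
```

With a full (non-diagonal) variance-covariance matrix, W would simply be its inverse; the closed form is the same.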

  20. Classification of fresh and frozen-thawed pork muscles using visible and near infrared hyperspectral imaging and textural analysis.

    PubMed

    Pu, Hongbin; Sun, Da-Wen; Ma, Ji; Cheng, Jun-Hu

    2015-01-01

The potential of visible and near infrared hyperspectral imaging was investigated as a rapid and nondestructive technique for classifying fresh and frozen-thawed meats by integrating critical spectral and image features extracted from hyperspectral images in the region of 400-1000 nm. Six feature wavelengths (400, 446, 477, 516, 592 and 686 nm) were identified using uninformative variable elimination and the successive projections algorithm. Image textural features of the principal component images from hyperspectral images were obtained using histogram statistics (HS), the gray level co-occurrence matrix (GLCM) and the gray level-gradient co-occurrence matrix (GLGCM). Using these spectral and textural features, probabilistic neural network (PNN) models for classification of fresh and frozen-thawed pork meats were established. Compared with the models using the optimum wavelengths only, optimum wavelengths with HS image features, and optimum wavelengths with GLCM image features, the model integrating optimum wavelengths with GLGCM features gave the highest classification rates of 93.14% and 90.91% for the calibration and validation sets, respectively. Results indicated that classification accuracy can be improved by combining spectral features with textural features, and that the fusion of critical spectral and textural features has better potential than spectral extraction alone for classifying fresh and frozen-thawed pork meat. Copyright © 2014 Elsevier Ltd. All rights reserved.
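A minimal sketch of one of the textural-feature families above: the gray-level co-occurrence matrix (GLCM), computed here for horizontal neighbor pairs on a tiny made-up image (the paper computes GLCM/GLGCM features on principal-component images of the hyperspectral cube).

```python
import numpy as np

# Gray-level co-occurrence matrix for horizontal neighbor pairs.
def glcm(img, levels):
    P = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[i, j] += 1
    return P / P.sum()  # normalize to joint probabilities

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [0, 1, 2]], dtype=int)
P = glcm(img, levels=3)

# Haralick-style texture descriptors derived from the GLCM
i, j = np.indices(P.shape)
contrast = ((i - j) ** 2 * P).sum()
energy = (P ** 2).sum()
print(round(contrast, 3), round(energy, 3))  # 0.667 0.278
```

Libraries such as scikit-image provide the same computation (with multiple distances and angles) ready-made.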

  1. Crystal Plasticity Model of Reactor Pressure Vessel Embrittlement in GRIZZLY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Pritam; Biner, Suleyman Bulent; Zhang, Yongfeng

    2015-07-01

The integrity of reactor pressure vessels (RPVs) is of utmost importance to ensure safe operation of nuclear reactors under extended lifetime. Microstructure-scale models at various length and time scales, coupled concurrently or through homogenization methods, can play a crucial role in understanding and quantifying irradiation-induced defect production, growth and their influence on mechanical behavior of RPV steels. A multi-scale approach, involving atomistic, meso- and engineering-scale models, is currently being pursued within the GRIZZLY project to understand and quantify irradiation-induced embrittlement of RPV steels. Within this framework, a dislocation-density based crystal plasticity model has been developed in GRIZZLY that captures the effect of irradiation-induced defects on the flow stress behavior and is presented in this report. The present formulation accounts for the interaction between self-interstitial loops and matrix dislocations. The model predictions have been validated with experiments and dislocation dynamics simulation.

  2. The Tetrahedral Zamolodchikov Algebra and the AdS_5 × S^5 S-matrix

    NASA Astrophysics Data System (ADS)

    Mitev, Vladimir; Staudacher, Matthias; Tsuboi, Zengo

    2017-08-01

    The S-matrix of the AdS_5 × S^5 string theory is a tensor product of two centrally extended su(2|2) ⋉ R^2 S-matrices, each of which is related to the R-matrix of the Hubbard model. The R-matrix of the Hubbard model was first found by Shastry, who ingeniously exploited the fact that, for zero coupling, the Hubbard model can be decomposed into two XX models. In this article, we review and clarify this construction from the AdS/CFT perspective and investigate the implications this has for the AdS_5 × S^5 S-matrix.

  3. Modelling the effect of urbanization on the transmission of an infectious disease.

    PubMed

    Zhang, Ping; Atkinson, Peter M

    2008-01-01

    This paper models the impact of urbanization on infectious disease transmission by integrating a CA land use development model, population projection matrix model and CA epidemic model in S-Plus. The innovative feature of this model lies in both its explicit treatment of spatial land use development, demographic changes, infectious disease transmission and their combination in a dynamic, stochastic model. Heuristically-defined transition rules in cellular automata (CA) were used to capture the processes of both land use development with urban sprawl and infectious disease transmission. A population surface model and dwelling distribution surface were used to bridge the gap between urbanization and infectious disease transmission. A case study is presented involving modelling influenza transmission in Southampton, a dynamically evolving city in the UK. The simulation results for Southampton over a 30-year period show that the pattern of the average number of infection cases per day can depend on land use and demographic changes. The modelling framework presents a useful tool that may be of use in planning applications.
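The population projection matrix component of this framework can be sketched with a minimal Leslie matrix; the vital rates and age structure below are invented for illustration, not taken from the Southampton study.

```python
import numpy as np

# Leslie (population projection) matrix with hypothetical vital rates for
# three age classes: top row = fecundities, sub-diagonal = survival rates.
L = np.array([[0.0, 1.2, 0.8],
              [0.6, 0.0, 0.0],
              [0.0, 0.7, 0.0]])

pop = np.array([100.0, 50.0, 20.0])  # initial age distribution
for _ in range(30):                  # project 30 annual time steps
    pop = L @ pop

# Asymptotic growth rate is the dominant eigenvalue of L
lam = np.linalg.eigvals(L).real.max()
print(1.0 < lam < 1.1)  # True: this hypothetical population grows slowly
```

In the paper's coupled model, the projected age distribution would feed the population surface that drives the CA epidemic dynamics.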

  4. Data-Driven Sampling Matrix Boolean Optimization for Energy-Efficient Biomedical Signal Acquisition by Compressive Sensing.

    PubMed

    Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao

    2017-04-01

    Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
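The sampling step itself is just a Boolean matrix-vector multiply. The sketch below uses a random Boolean matrix, the baseline the paper's data-driven optimization improves on, with invented dimensions and signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 64  # signal length, number of measurements (illustrative)

# Random Boolean sampling matrix: 0/1 entries mean acquisition hardware can
# accumulate samples instead of performing full multiplications.
Phi = (rng.random((m, n)) < 0.5).astype(float)

# A sparse test signal (5 nonzero entries)
x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)

# Acquisition: project the signal into the low-dimensional subspace
y = Phi @ x
print(y.shape)  # (64,)
```

The paper's contribution is choosing the 0/1 pattern of Phi from training data rather than at random, so reconstruction quality approaches that of real-valued embeddings at far lower energy cost.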

  5. Constructing service-oriented architecture adoption maturity matrix using Kano model

    NASA Astrophysics Data System (ADS)

    Hamzah, Mohd Hamdi Irwan; Baharom, Fauziah; Mohd, Haslina

    2017-10-01

Commonly, organizations adopt Service-Oriented Architecture (SOA) because it provides flexible reconfiguration and can reduce development time and cost. To guide SOA adoption, industry and academia have constructed SOA maturity models. However, there is limited work on how to construct the matrix in previous SOA maturity models. Therefore, this study provides a method that can be used to construct the matrix in the SOA maturity model. The study adapts the Kano model to construct a cross-evaluation matrix focused on the IT and business benefits of SOA adoption. It finds that the Kano model provides a suitable and appropriate method for constructing the cross-evaluation matrix in the SOA maturity model. The Kano model can also be used to plot, organize and better represent the evaluation dimensions for evaluating SOA adoption.

  6. Stage-structured matrix models for organisms with non-geometric development times

    Treesearch

    Andrew Birt; Richard M. Feldman; David M. Cairns; Robert N. Coulson; Maria Tchakerian; Weimin Xi; James M. Guldin

    2009-01-01

    Matrix models have been used to model population growth of organisms for many decades. They are popular because of both their conceptual simplicity and their computational efficiency. For some types of organisms they are relatively accurate in predicting population growth; however, for others the matrix approach does not adequately model...

  7. ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.

    PubMed

    Lee, Keunbaik; Baek, Changryong; Daniels, Michael J

    2017-11-01

In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: modified Cholesky decomposition for autoregressive (AR) structure and moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix, which we denote as ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
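The modified Cholesky decomposition underlying the AR part can be sketched as follows; the AR(1)-type covariance below is illustrative, and the paper combines this with a moving-average analogue.

```python
import numpy as np

# Modified Cholesky decomposition T @ Sigma @ T.T = D, where T is unit lower
# triangular (its sub-diagonal entries encode generalized autoregressive
# parameters) and D is diagonal (innovation variances).
def modified_cholesky(Sigma):
    L = np.linalg.cholesky(Sigma)                 # Sigma = L L^T
    T = np.diag(np.diag(L)) @ np.linalg.inv(L)    # unit lower triangular
    D = np.diag(np.diag(L) ** 2)
    return T, D

# Illustrative AR(1)-type covariance with lag-1 correlation 0.5 (hypothetical)
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
T, D = modified_cholesky(Sigma)
print(np.allclose(T @ Sigma @ T.T, D))  # True
```

The decomposition guarantees positive definiteness by construction: any unit lower triangular T and positive diagonal D recombine into a valid covariance matrix, which is why it is attractive for unconstrained modeling of the covariance parameters.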

  8. A review of failure models for unidirectional ceramic matrix composites under monotonic loads

    NASA Technical Reports Server (NTRS)

    Tripp, David E.; Hemann, John H.; Gyekenyesi, John P.

    1989-01-01

Ceramic matrix composites offer significant potential for improving the performance of turbine engines. In order to achieve their potential, however, improvements in design methodology are needed. In the past, most components using structural ceramic matrix composites were designed by trial and error, since the emphasis on feasibility demonstration minimized the development of mathematical models. To understand the key parameters controlling response and the mechanics of failure, the development of structural failure models is required. A review of short term failure models with potential for ceramic matrix composite laminates under monotonic loads is presented. Phenomenological, semi-empirical, shear-lag, fracture mechanics, damage mechanics, and statistical models for the fast fracture analysis of continuous fiber unidirectional ceramic matrix composites under monotonic loads are surveyed.

  9. Micromechanical Modeling of Woven Metal Matrix Composites

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Pindera, Marek-Jerzy

    1997-01-01

This report presents the results of an extensive micromechanical modeling effort for woven metal matrix composites. The model is employed to predict the mechanical response of 8-harness (8H) satin weave carbon/copper (C/Cu) composites. Experimental mechanical results for this novel high thermal conductivity material were recently reported by Bednarcyk et al. along with preliminary model results. The micromechanics model developed herein is based on an embedded approach. A micromechanics model for the local (micro-scale) behavior of the woven composite, the original method of cells (Aboudi), is embedded in a global (macro-scale) micromechanics model, the three-dimensional generalized method of cells (GMC-3D) (Aboudi). This approach allows representation of true repeating unit cells for woven metal matrix composites via GMC-3D, and representation of local effects, such as matrix plasticity, yarn porosity, and imperfect fiber-matrix bonding. In addition, the equations of GMC-3D were reformulated to significantly reduce the number of unknown quantities that characterize the deformation fields at the microlevel in order to make possible the analysis of actual microstructures of woven composites. The resulting micromechanical model (WCGMC) provides an intermediate level of geometric representation, versatility, and computational efficiency with respect to previous analytical and numerical models for woven composites, but surpasses all previous modeling work by allowing the mechanical response of a woven metal matrix composite, with an elastoplastic matrix, to be examined for the first time. WCGMC is employed to examine the effects of composite microstructure, porosity, residual stresses, and imperfect fiber-matrix bonding on the predicted mechanical response of 8H satin C/Cu. The previously reported experimental results are summarized, and the model predictions are compared to monotonic and cyclic tensile and shear test data.
By considering appropriate levels of porosity, residual stresses, and imperfect fiber-matrix debonding, reasonably good qualitative and quantitative correlation is achieved between model and experiment.

  10. Pilot project - demonstration of capabilities and benefits of bridge load rating through physical testing : tech transfer summary.

    DOT National Transportation Integrated Search

    2013-08-01

    This project demonstrated the capabilities for load testing bridges in Iowa, developed and presented a webinar to local and state engineers, and produced a spreadsheet and benefit evaluation matrix that others can use to preliminarily assess where br...

  11. In vitro model to study the effects of matrix stiffening on Ca2+ handling and myofilament function in isolated adult rat cardiomyocytes

    PubMed Central

    Najafi, Aref; Fontoura, Dulce; Valent, Erik; Goebel, Max; Kardux, Kim; Falcão‐Pires, Inês; van der Velden, Jolanda

    2017-01-01

Key points: This paper describes a novel model that allows exploration of matrix-induced cardiomyocyte adaptations independent of the passive effect of matrix rigidity on cardiomyocyte function. Detachment of adult cardiomyocytes from the matrix enables the study of matrix effects on cell shortening, Ca2+ handling and myofilament function. Cell shortening and Ca2+ handling are altered in cardiomyocytes cultured for 24 h on a stiff matrix. Matrix stiffness-impaired cardiomyocyte contractility is reversed upon normalization of extracellular stiffness. Matrix stiffness-induced reduction in unloaded shortening is more pronounced in cardiomyocytes isolated from obese ZSF1 rats with heart failure with preserved ejection fraction compared to lean ZSF1 rats. Abstract: Extracellular matrix (ECM) stiffening is a key element of cardiac disease. Increased rigidity of the ECM passively inhibits cardiac contraction, but if and how matrix stiffening also actively alters cardiomyocyte contractility is incompletely understood. In vitro models designed to study cardiomyocyte-matrix interaction lack the possibility to separate passive inhibition by a stiff matrix from active matrix-induced alterations of cardiomyocyte properties. Here we introduce a novel experimental model that allows exploration of cardiomyocyte functional alterations in response to matrix stiffening. Adult rat cardiomyocytes were cultured for 24 h on matrices of tuneable stiffness representing the healthy and the diseased heart and detached from their matrix before functional measurements. We demonstrate that matrix stiffening, independent of passive inhibition, reduces cell shortening and Ca2+ handling but does not alter myofilament-generated force. Additionally, detachment of adult cultured cardiomyocytes allowed the transfer of cells from one matrix to another. This revealed that stiffness-induced cardiomyocyte changes are reversed when matrix stiffness is normalized.
These matrix stiffness‐induced changes in cardiomyocyte function could not be explained by adaptation in the microtubules. Additionally, cardiomyocytes isolated from stiff hearts of the obese ZSF1 rat model of heart failure with preserved ejection fraction show more pronounced reduction in unloaded shortening in response to matrix stiffening. Taken together, we introduce a method that allows evaluation of the influence of ECM properties on cardiomyocyte function separate from the passive inhibitory component of a stiff matrix. As such, it adds an important and physiologically relevant tool to investigate the functional consequences of cardiomyocyte–matrix interactions. PMID:28485491

  12. Matrix viscoplasticity and its shielding by active mechanics in microtissue models: experiments and mathematical modeling

    NASA Astrophysics Data System (ADS)

    Liu, Alan S.; Wang, Hailong; Copeland, Craig R.; Chen, Christopher S.; Shenoy, Vivek B.; Reich, Daniel H.

    2016-09-01

    The biomechanical behavior of tissues under mechanical stimulation is critically important to physiological function. We report a combined experimental and modeling study of bioengineered 3D smooth muscle microtissues that reveals a previously unappreciated interaction between active cell mechanics and the viscoplastic properties of the extracellular matrix. The microtissues’ response to stretch/unstretch actuations, as probed by microcantilever force sensors, was dominated by cellular actomyosin dynamics. However, cell lysis revealed a viscoplastic response of the underlying model collagen/fibrin matrix. A model coupling Hill-type actomyosin dynamics with a plastic perfectly viscoplastic description of the matrix quantitatively accounts for the microtissue dynamics, including notably the cells’ shielding of the matrix plasticity. Stretch measurements of single cells confirmed the active cell dynamics, and were well described by a single-cell version of our model. These results reveal the need for new focus on matrix plasticity and its interactions with active cell mechanics in describing tissue dynamics.
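The matrix side of the model described above can be caricatured in a few lines. The sketch below, with invented constants, runs an elastic/perfectly-viscoplastic element through one stretch/unstretch cycle and shows the residual plastic strain that, per the abstract, active cell mechanics normally shield; the Hill-type actomyosin component is omitted.

```python
# Elastic/perfectly-viscoplastic element under a stretch/unstretch cycle.
# All constants are invented for illustration; this is not the paper's
# fitted model, only the plasticity mechanism it describes.
k = 1.0             # elastic stiffness (arbitrary units)
yield_stress = 0.2  # stress at which plastic flow begins
eta = 5.0           # viscosity governing the plastic flow rate
dt = 0.01

def stretch_cycle():
    # ramp strain 0 -> 0.5 -> 0
    profile = [0.01 * i for i in range(51)] + [0.5 - 0.01 * i for i in range(1, 51)]
    plastic = 0.0
    stress = 0.0
    for eps in profile:
        stress = k * (eps - plastic)
        if abs(stress) > yield_stress:
            # over-stress drives plastic flow at a viscosity-limited rate
            rate = (abs(stress) - yield_stress) / eta
            plastic += dt * rate * (1.0 if stress > 0 else -1.0)
            stress = k * (eps - plastic)
    return stress, plastic

final_stress, residual_plastic = stretch_cycle()
```

After the cycle the element carries a residual plastic strain and a small residual (compressive) stress, which is the signature of matrix plasticity the microtissue experiments probe after cell lysis.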

  13. Matrix viscoplasticity and its shielding by active mechanics in microtissue models: experiments and mathematical modeling

    PubMed Central

    Liu, Alan S.; Wang, Hailong; Copeland, Craig R.; Chen, Christopher S.; Shenoy, Vivek B.; Reich, Daniel H.

    2016-01-01

    The biomechanical behavior of tissues under mechanical stimulation is critically important to physiological function. We report a combined experimental and modeling study of bioengineered 3D smooth muscle microtissues that reveals a previously unappreciated interaction between active cell mechanics and the viscoplastic properties of the extracellular matrix. The microtissues’ response to stretch/unstretch actuations, as probed by microcantilever force sensors, was dominated by cellular actomyosin dynamics. However, cell lysis revealed a viscoplastic response of the underlying model collagen/fibrin matrix. A model coupling Hill-type actomyosin dynamics with a plastic perfectly viscoplastic description of the matrix quantitatively accounts for the microtissue dynamics, including notably the cells’ shielding of the matrix plasticity. Stretch measurements of single cells confirmed the active cell dynamics, and were well described by a single-cell version of our model. These results reveal the need for new focus on matrix plasticity and its interactions with active cell mechanics in describing tissue dynamics. PMID:27671239

  14. A Principle-Attribute Matrix for Environmentally Sustainable Management Education and Its Application: The Case for Change-Oriented Service-Learning Projects

    ERIC Educational Resources Information Center

    Rands, Gordon P.

    2009-01-01

    The environmental threats humanity faces have led businesses to increasingly commit to improve their environmental performance and to increasing attempts to address environmental issues in management education. This article presents a matrix of (a) principles that can underlie and (b) attributes that can be generated by environmentally focused…

  15. Effectiveness of Matrix Organizations at the TACOM LCMC

    DTIC Science & Technology

    2013-04-05

    Scott Baumgartner UNCLASSIFIED 14 responsibility and involvement in decision making, and offers a greater opportunity to display capabilities and...According to Knight, “Matrix structures are said to facilitate high quality and innovative solutions to complex technical problems” (Knight, 1976...discipline expertise to remain innovative (Davis & Lawrence, 1977). Communications The project teams must have a clear understanding of the vision, goals

  16. Incorporating psychological influences in probabilistic cost analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kujawski, Edouard; Alvaro, Mariana; Edwards, William

    2004-01-08

Today's typical probabilistic cost analysis assumes an "ideal" project that is devoid of the human and organizational considerations that heavily influence the success and cost of real-world projects. In the real world "Money Allocated Is Money Spent" (MAIMS principle); cost underruns are rarely available to protect against cost overruns while task overruns are passed on to the total project cost. Realistic cost estimates therefore require a modified probabilistic cost analysis that simultaneously models the cost management strategy including budget allocation. Psychological influences such as overconfidence in assessing uncertainties and dependencies among cost elements and risks are other important considerations that are generally not addressed. It should then be no surprise that actual projects often exceed their initial cost estimates and are delivered late and/or with a reduced scope. This paper presents a practical probabilistic cost analysis model that incorporates recent findings in human behavior and judgment under uncertainty, dependencies among cost elements, the MAIMS principle, and project management practices. Uncertain cost elements are elicited from experts using the direct fractile assessment method and fitted with three-parameter Weibull distributions. The full correlation matrix is specified in terms of two parameters that characterize correlations among cost elements in the same and in different subsystems. The analysis is readily implemented using standard Monte Carlo simulation tools such as @RISK and Crystal Ball®. The analysis of a representative design and engineering project substantiates that today's typical probabilistic cost analysis is likely to severely underestimate project cost for probability of success values of importance to contractors and procuring activities. The proposed approach provides a framework for developing a viable cost management strategy for allocating baseline budgets and contingencies. 
Given the scope and magnitude of the cost-overrun problem, the benefits are likely to be significant.
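The MAIMS principle lends itself to a short Monte Carlo illustration. In the sketch below, task budgets and the location-shifted Weibull cost models are invented, and the paper's expert-elicited fractiles and inter-element correlations are omitted; the point is only that retaining underruns (MAIMS) raises the expected total above the "ideal" roll-up.

```python
import random

# Monte Carlo cost roll-up with and without the MAIMS principle.
# Budgets and three-parameter (location + scale + shape) Weibull cost
# models are invented for illustration.
random.seed(42)

tasks = [
    {"budget": 110.0, "loc": 80.0,  "scale": 30.0, "shape": 2.0},
    {"budget": 260.0, "loc": 200.0, "scale": 70.0, "shape": 1.8},
    {"budget": 65.0,  "loc": 40.0,  "scale": 25.0, "shape": 2.5},
]

def simulate(n_trials=5000):
    sum_ideal = sum_maims = 0.0
    for _ in range(n_trials):
        total_ideal = total_maims = 0.0
        for t in tasks:
            # location-shifted Weibull draw for the actual task cost
            cost = t["loc"] + random.weibullvariate(t["scale"], t["shape"])
            total_ideal += cost                    # underruns offset overruns
            total_maims += max(cost, t["budget"])  # MAIMS: underruns are spent
        sum_ideal += total_ideal
        sum_maims += total_maims
    return sum_ideal / n_trials, sum_maims / n_trials

mean_ideal, mean_maims = simulate()
```

Because each task spends at least its allocated budget, the MAIMS expectation dominates the ideal one trial by trial, which is the mechanism behind the underestimation the paper documents.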

  17. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

We are concerned with estimation of a source term in case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release where 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method assuming unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. 
This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
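The linear inverse setup y = Mx with a ratio-informed prior and a positivity constraint can be sketched compactly. The SRS matrix, source term and regularization weight below are invented, and plain projected gradient descent stands in for the paper's Variational Bayes inference.

```python
# Toy linear inverse problem y = M x: recover a positive source term whose
# prior encodes (approximately known) nuclide ratios. All numbers invented.
M = [[1.0, 0.5, 0.2],
     [0.3, 1.0, 0.4],
     [0.1, 0.2, 1.0]]        # toy SRS matrix: 3 observations, 3 nuclides
x_true = [2.0, 1.0, 0.5]     # source term with nuclide ratios 4 : 2 : 1
y = [sum(M[i][j] * x_true[j] for j in range(3)) for i in range(3)]

x_prior = [4.0, 2.0, 1.0]    # correct ratios, wrong overall magnitude
lam = 0.002                  # fixed prior weight (assumed, not estimated)

# minimize 0.5*||M x - y||^2 + 0.5*lam*||x - x_prior||^2 subject to x >= 0
x = [0.0, 0.0, 0.0]
for _ in range(5000):
    r = [sum(M[i][j] * x[j] for j in range(3)) - y[i] for i in range(3)]
    for j in range(3):
        g = sum(M[i][j] * r[i] for i in range(3)) + lam * (x[j] - x_prior[j])
        x[j] = max(0.0, x[j] - 0.05 * g)   # positivity via projection
```

The projection step plays the role of the truncated Gaussian in the abstract: it keeps every component of the estimated source term non-negative while the data term pulls the ratios toward those implied by the observations.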

  18. System identification of the Large-Angle Magnetic Suspension Test Facility (LAMSTF)

    NASA Technical Reports Server (NTRS)

    Huang, Jen-Kuang

    1993-01-01

The Large-Angle Magnetic Suspension Test Facility (LAMSTF), a laboratory-scale research project to demonstrate the magnetic suspension of objects over wide ranges of attitudes, has been developed. This system represents a scaled model of a planned Large-Gap Magnetic Suspension System (LGMSS). The LAMSTF system consists of a planar array of five copper electromagnets which actively suspend a small cylindrical permanent magnet. The cylinder is a rigid body and can be controlled to move in five independent degrees of freedom. Five position variables are sensed indirectly by using infra-red light-emitting diodes and light-receiving phototransistors. The motion of the suspended cylinder is in general nonlinear and hence only the linear, time-invariant perturbed motion about an equilibrium state is considered. One of the main challenges in this project is the control of the suspended element over a wide range of orientations. An accurate dynamic model plays an essential role in controller design. The analytical model of the LAMSTF system includes highly unstable real poles (about 10 Hz) and low-frequency flexible modes (about 0.16 Hz). Projection filters are proposed to identify the state space model from closed-loop test data in the time domain. A canonical transformation matrix is also derived to transform the identified state space model into physical coordinates. The LAMSTF system is stabilized by using a linear quadratic regulator (LQR) feedback controller. The rate information is obtained by calculating the back difference of the sensed position signals. The reference inputs contain five uncorrelated random signals. This control input and the system response are recorded as input/output data to identify the system directly from the projection filters. The sampling time is 4 ms and the model is fairly accurate in predicting the step responses for different controllers while the analytical model has a deficiency in the pitch axis.
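The role of the back-differenced rate signal in stabilizing an unstable plant can be illustrated on a toy system. This is not the LAMSTF model: the plant constant and feedback gains are invented, and only the 4 ms sampling interval is taken from the abstract.

```python
# Position-only feedback of an unstable plant x'' = a*x + u, with the rate
# term reconstructed by back-differencing the sensed position, as in the
# abstract. Plant constant and gains are invented toy values.
a = 4.0            # unstable open-loop dynamics (poles at +/- 2 rad/s)
dt = 0.004         # 4 ms sampling, as quoted for LAMSTF
k1, k2 = 8.0, 4.0  # hypothetical position and rate feedback gains

x, v = 1.0, 0.0    # initial perturbation
x_prev = x
for _ in range(2500):              # 10 s of simulated time
    v_est = (x - x_prev) / dt      # back-difference rate estimate
    u = -k1 * x - k2 * v_est       # position + estimated-rate feedback
    x_prev = x
    # forward-Euler integration of the plant x'' = a*x + u
    acc = a * x + u
    x += v * dt
    v += acc * dt
```

With these gains the closed loop is approximately x'' = -4x - 4x', so the initial perturbation decays despite the controller never measuring velocity directly.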

  19. State-Space System Realization with Input- and Output-Data Correlation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    1997-01-01

    This paper introduces a general version of the information matrix consisting of the autocorrelation and cross-correlation matrices of the shifted input and output data. Based on the concept of data correlation, a new system realization algorithm is developed to create a model directly from input and output data. The algorithm starts by computing a special type of correlation matrix derived from the information matrix. The special correlation matrix provides information on the system-observability matrix and the state-vector correlation. A system model is then developed from the observability matrix in conjunction with other algebraic manipulations. This approach leads to several different algorithms for computing system matrices for use in representing the system model. The relationship of the new algorithms with other realization algorithms in the time and frequency domains is established with matrix factorization of the information matrix. Several examples are given to illustrate the validity and usefulness of these new algorithms.
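The idea of forming a model from correlations of shifted input and output data can be shown in its simplest, scalar special case: identifying a first-order system from normal equations built out of the data correlations. The paper's general information-matrix algorithm handles full state-space models; the constants here are invented.

```python
import random

# Identify y[k] = a*y[k-1] + b*u[k-1] from correlations of shifted
# input/output sequences (the scalar analogue of the information matrix).
random.seed(0)
a_true, b_true = 0.8, 0.5
u = [random.uniform(-1, 1) for _ in range(2000)]
y = [0.0]
for k in range(1, len(u)):
    y.append(a_true * y[k - 1] + b_true * u[k - 1])

# auto- and cross-correlation sums of the shifted data
Syy  = sum(y[k - 1] * y[k - 1] for k in range(1, len(u)))
Syu  = sum(y[k - 1] * u[k - 1] for k in range(1, len(u)))
Suu  = sum(u[k - 1] * u[k - 1] for k in range(1, len(u)))
Sy1y = sum(y[k] * y[k - 1] for k in range(1, len(u)))
Sy1u = sum(y[k] * u[k - 1] for k in range(1, len(u)))

# solve the 2x2 normal equations for (a, b)
det = Syy * Suu - Syu * Syu
a_hat = (Sy1y * Suu - Sy1u * Syu) / det
b_hat = (Sy1u * Syy - Sy1y * Syu) / det
```

With noise-free data the correlation-based solution recovers the true parameters essentially exactly, which is the scalar shadow of the realization algorithms built from the full information matrix.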

  20. Covariance specification and estimation to improve top-down Green House Gas emission estimates

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.

    2015-12-01

The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of Greenhouse Gas (GHG) emissions as well as their uncertainties in urban domains using a top down inversion method. Top down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions using different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in the prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models to introduce space-time interacting mismatches along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matern covariance class and estimate their parameters with specified mismatches. We find that best-fitted prior covariances are not always best in recovering the truth. 
To achieve accuracy, we perform a sensitivity study to further tune covariance parameters. Finally, we introduce a shrinkage based sample covariance estimation technique for both prior and mismatch covariances. This technique allows us to achieve similar accuracy nonparametrically in a more efficient and automated way.
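Two of the ingredients discussed above are easy to sketch with invented numbers: an exponential-decay spatial prior covariance (the Matern family with smoothness 1/2) and a simple fixed-intensity shrinkage of a covariance toward a scaled identity target. A real shrinkage estimator such as Ledoit-Wolf would choose the intensity from the data.

```python
import math

# (1) Matern nu=1/2 (exponential) spatial covariance over toy grid cells.
sigma2, ell = 2.0, 3.0                  # assumed variance and length scale
pts = [(0, 0), (1, 0), (4, 0), (0, 3)]  # invented grid-cell coordinates

def matern_half(p, q):
    d = math.dist(p, q)
    return sigma2 * math.exp(-d / ell)

prior_cov = [[matern_half(p, q) for q in pts] for p in pts]

# (2) Shrinkage toward mu*I with a fixed intensity delta.
def shrink(S, delta):
    n = len(S)
    mu = sum(S[i][i] for i in range(n)) / n   # scaled-identity target level
    return [[(1 - delta) * S[i][j] + (delta * mu if i == j else 0.0)
             for j in range(n)] for i in range(n)]

shrunk = shrink(prior_cov, 0.2)
```

Shrinkage leaves the (here constant) diagonal untouched while damping the off-diagonal correlations, which is the regularizing effect exploited for both the prior and mismatch covariances.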

  1. Efficient system modeling for a small animal PET scanner with tapered DOI detectors.

    PubMed

    Zhang, Mengxi; Zhou, Jian; Yang, Yongfeng; Rodríguez-Villafuerte, Mercedes; Qi, Jinyi

    2016-01-21

A prototype small animal positron emission tomography (PET) scanner for mouse brain imaging has been developed at UC Davis. The new scanner uses tapered detector arrays with depth of interaction (DOI) measurement. In this paper, we present an efficient system model for the tapered PET scanner using matrix factorization and a virtual scanner geometry. The factored system matrix mainly consists of two components: a sinogram blurring matrix and a geometrical matrix. The geometrical matrix is based on a virtual scanner geometry. The sinogram blurring matrix is estimated by matrix factorization. We investigate the performance of different virtual scanner geometries. Both simulation studies and real data experiments are performed in the fully 3D mode to study the image quality under different system models. The results indicate that the proposed matrix factorization can maintain image quality while substantially reducing the image reconstruction time and system matrix storage cost. The proposed method can also be applied to other PET scanners with DOI measurement.
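The factored system model can be sketched as a product of a sinogram blurring matrix and a geometrical matrix. The tiny dense matrices below are invented; real implementations keep both factors sparse, which is where the storage and speed savings come from.

```python
# Factored forward projection y = B (G x) versus the full matrix P = B G.
# Toy sizes: 3 sinogram bins, 4 voxels; all entries invented.
G = [[1, 0, 0, 1],     # geometrical matrix (virtual scanner geometry)
     [0, 1, 1, 0],
     [1, 1, 0, 0]]
B = [[0.8, 0.1, 0.0],  # sinogram blurring matrix (mixes neighbouring bins)
     [0.1, 0.8, 0.1],
     [0.0, 0.1, 0.8]]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

x = [1.0, 2.0, 0.0, 3.0]       # voxel image
y = matvec(B, matvec(G, x))    # factored projection: two cheap products

# equivalent unfactored projection through the full system matrix P = B @ G
P = [[sum(B[i][k] * G[k][j] for k in range(3)) for j in range(4)]
     for i in range(3)]
y_full = matvec(P, x)
```

The two routes give the same sinogram, but only the factored route avoids ever forming (or storing) the full system matrix P.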

  2. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
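The weighted-binary-matrix-sampling loop at the heart of VISSA can be caricatured as follows: each row of the binary matrix selects a sub-model, and inclusion weights are re-estimated from the best-performing sub-models so that the variable space shrinks. The toy error function and all constants below are invented stand-ins for a real calibration model.

```python
import random

# Weighted binary matrix sampling (WBMS) caricature: weights give each
# variable's inclusion probability; they are updated from the best decile
# of sampled sub-models. The error function is a toy stand-in.
random.seed(1)
n_vars, n_models = 6, 200
informative = {0, 2}   # "true" informative variables (invented)

def error(subset):
    miss = len(informative - subset)   # heavy penalty for missing signal
    noise = len(subset - informative)  # light penalty per noise variable
    return 10 * miss + noise

weights = [0.5] * n_vars
for _ in range(20):
    # each sampled row of the binary matrix defines a sub-model
    rows = [{j for j in range(n_vars) if random.random() < weights[j]}
            for _ in range(n_models)]
    rows.sort(key=error)
    best = rows[: n_models // 10]      # keep the best 10% of sub-models
    weights = [sum(j in s for s in best) / len(best) for j in range(n_vars)]

selected = {j for j, w in enumerate(weights) if w > 0.5}
```

The two VISSA rules show up directly: weights only move toward variables that appear in better sub-models, so the sampled variable space shrinks while its performance improves.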

  3. A contrast between DEMATEL-ANP and ANP methods for six sigma project selection: a case study in healthcare industry.

    PubMed

    Ortíz, Miguel A; Felizzola, Heriberto A; Nieto Isaza, Santiago

    2015-01-01

The project selection process is a crucial step for healthcare organizations at the moment of implementing six sigma programs in both administrative and caring processes. However, six-sigma project selection is often defined as a decision making process with interaction and feedback between criteria, so it is necessary to explore different methods to help healthcare companies determine the six-sigma projects that provide the maximum benefits. This paper describes the application of both ANP (Analytic Network Process) and DEMATEL (Decision Making Trial and Evaluation Laboratory)-ANP in a public medical centre to establish the most suitable six sigma project; finally, these methods were compared to evaluate their performance in the decision making process. ANP and DEMATEL-ANP were used to evaluate 6 six sigma project alternatives under an evaluation model composed of 3 strategies, 4 criteria and 15 sub-criteria. Judgement matrices were completed by the six sigma team, whose participants worked in different departments of the medical centre. The improvement of care opportunity in obstetric outpatients was selected as the most suitable six sigma project, with a score of 0.117 as its contribution to the organization goals. DEMATEL-ANP performed better in the decision making process since it reduced the error probability due to interactions and feedback. ANP and DEMATEL-ANP effectively supported six sigma project selection processes, helping to create a complete framework that guarantees the prioritization of projects that provide maximum benefits to healthcare organizations. As DEMATEL-ANP performed better, it should be used by practitioners involved in decisions related to the implementation of six sigma programs in the healthcare sector, accompanied by the adequate identification of the evaluation criteria that support the decision making model. Thus, this comparative study contributes to choosing more effective approaches in this field. 
Suggestions for further work are also proposed so that these methods can be applied more adequately to six sigma project selection processes in healthcare.
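The DEMATEL step that precedes ANP in the hybrid method can be sketched numerically: from a direct-influence matrix D among criteria, normalize to N and form the total-relation matrix T = N(I - N)^-1, here computed via its convergent power series. The 3x3 influence scores are invented.

```python
# DEMATEL total-relation matrix from an invented 3x3 direct-influence
# matrix D, using the series T = N + N^2 + N^3 + ... (converges because
# the normalisation keeps the spectral radius of N below 1).
D = [[0, 3, 2],
     [1, 0, 3],
     [2, 1, 0]]
n = 3

row_sums = [sum(row) for row in D]
col_sums = [sum(D[i][j] for i in range(n)) for j in range(n)]
s = max(max(row_sums), max(col_sums))           # normalisation constant
N = [[D[i][j] / s for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [row[:] for row in N]
P = [row[:] for row in N]
for _ in range(200):                            # partial sums of the series
    P = matmul(P, N)
    T = [[T[i][j] + P[i][j] for j in range(n)] for i in range(n)]

# prominence (r+c) and relation (r-c) per criterion come from T's margins
r = [sum(T[i][j] for j in range(n)) for i in range(n)]
c = [sum(T[i][j] for i in range(n)) for j in range(n)]
```

The row and column sums of T are what DEMATEL feeds forward: they quantify how strongly each criterion influences, and is influenced by, the others before the ANP weighting is applied.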

  4. A contrast between DEMATEL-ANP and ANP methods for six sigma project selection: a case study in healthcare industry

    PubMed Central

    2015-01-01

Background The project selection process is a crucial step for healthcare organizations at the moment of implementing six sigma programs in both administrative and caring processes. However, six-sigma project selection is often defined as a decision making process with interaction and feedback between criteria, so it is necessary to explore different methods to help healthcare companies determine the six-sigma projects that provide the maximum benefits. This paper describes the application of both ANP (Analytic Network Process) and DEMATEL (Decision Making Trial and Evaluation Laboratory)-ANP in a public medical centre to establish the most suitable six sigma project; finally, these methods were compared to evaluate their performance in the decision making process. Methods ANP and DEMATEL-ANP were used to evaluate 6 six sigma project alternatives under an evaluation model composed of 3 strategies, 4 criteria and 15 sub-criteria. Judgement matrices were completed by the six sigma team, whose participants worked in different departments of the medical centre. Results The improvement of care opportunity in obstetric outpatients was selected as the most suitable six sigma project, with a score of 0.117 as its contribution to the organization goals. DEMATEL-ANP performed better in the decision making process since it reduced the error probability due to interactions and feedback. Conclusions ANP and DEMATEL-ANP effectively supported six sigma project selection processes, helping to create a complete framework that guarantees the prioritization of projects that provide maximum benefits to healthcare organizations. As DEMATEL-ANP performed better, it should be used by practitioners involved in decisions related to the implementation of six sigma programs in the healthcare sector, accompanied by the adequate identification of the evaluation criteria that support the decision making model. 
Thus, this comparative study contributes to choosing more effective approaches in this field. Suggestions for further work are also proposed so that these methods can be applied more adequately to six sigma project selection processes in healthcare. PMID:26391445

  5. In vitro model to study the effects of matrix stiffening on Ca2+ handling and myofilament function in isolated adult rat cardiomyocytes.

    PubMed

    van Deel, Elza D; Najafi, Aref; Fontoura, Dulce; Valent, Erik; Goebel, Max; Kardux, Kim; Falcão-Pires, Inês; van der Velden, Jolanda

    2017-07-15

This paper describes a novel model that allows exploration of matrix-induced cardiomyocyte adaptations independent of the passive effect of matrix rigidity on cardiomyocyte function. Detachment of adult cardiomyocytes from the matrix enables the study of matrix effects on cell shortening, Ca2+ handling and myofilament function. Cell shortening and Ca2+ handling are altered in cardiomyocytes cultured for 24 h on a stiff matrix. Matrix stiffness-impaired cardiomyocyte contractility is reversed upon normalization of extracellular stiffness. Matrix stiffness-induced reduction in unloaded shortening is more pronounced in cardiomyocytes isolated from obese ZSF1 rats with heart failure with preserved ejection fraction compared to lean ZSF1 rats. Extracellular matrix (ECM) stiffening is a key element of cardiac disease. Increased rigidity of the ECM passively inhibits cardiac contraction, but if and how matrix stiffening also actively alters cardiomyocyte contractility is incompletely understood. In vitro models designed to study cardiomyocyte-matrix interaction lack the possibility to separate passive inhibition by a stiff matrix from active matrix-induced alterations of cardiomyocyte properties. Here we introduce a novel experimental model that allows exploration of cardiomyocyte functional alterations in response to matrix stiffening. Adult rat cardiomyocytes were cultured for 24 h on matrices of tuneable stiffness representing the healthy and the diseased heart and detached from their matrix before functional measurements. We demonstrate that matrix stiffening, independent of passive inhibition, reduces cell shortening and Ca2+ handling but does not alter myofilament-generated force. Additionally, detachment of adult cultured cardiomyocytes allowed the transfer of cells from one matrix to another. This revealed that stiffness-induced cardiomyocyte changes are reversed when matrix stiffness is normalized. 
These matrix stiffness-induced changes in cardiomyocyte function could not be explained by adaptation in the microtubules. Additionally, cardiomyocytes isolated from stiff hearts of the obese ZSF1 rat model of heart failure with preserved ejection fraction show more pronounced reduction in unloaded shortening in response to matrix stiffening. Taken together, we introduce a method that allows evaluation of the influence of ECM properties on cardiomyocyte function separate from the passive inhibitory component of a stiff matrix. As such, it adds an important and physiologically relevant tool to investigate the functional consequences of cardiomyocyte-matrix interactions. © 2017 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.

  6. Matrix approach to land carbon cycle modeling: A case study with the Community Land Model.

    PubMed

    Huang, Yuanyuan; Lu, Xingjie; Shi, Zheng; Lawrence, David; Koven, Charles D; Xia, Jianyang; Du, Zhenggang; Kluzek, Erik; Luo, Yiqi

    2018-03-01

The terrestrial carbon (C) cycle has been commonly represented by a series of C balance equations to track C influxes into and effluxes out of individual pools in earth system models (ESMs). This representation matches our understanding of C cycle processes well but makes it difficult to track model behaviors. It is also computationally expensive, limiting the ability to conduct comprehensive parametric sensitivity analyses. To overcome these challenges, we have developed a matrix approach, which reorganizes the C balance equations in the original ESM into one matrix equation without changing any modeled C cycle processes and mechanisms. We applied the matrix approach to the Community Land Model (CLM4.5) with vertically-resolved biogeochemistry. The matrix equation exactly reproduces litter and soil organic carbon (SOC) dynamics of the standard CLM4.5 across different spatial-temporal scales. The matrix approach enables effective diagnosis of system properties such as C residence time and attribution of global change impacts to relevant processes. We illustrated, for example, that the impacts of CO2 fertilization on litter and SOC dynamics can be easily decomposed into the relative contributions from C input, allocation of external C into different C pools, nitrogen regulation, altered soil environmental conditions, and vertical mixing along the soil profile. In addition, the matrix tool can accelerate model spin-up, permit thorough parametric sensitivity tests, enable pool-based data assimilation, and facilitate tracking and benchmarking of model behaviors. Overall, the matrix approach can make a broad range of future modeling activities more efficient and effective. © 2017 John Wiley & Sons Ltd.
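The essence of the matrix reorganization, reduced to two pools with invented rates, looks like this: the pool balance equations collapse to dX/dt = Bu - MX, whose steady state X* = M^-1 Bu and residence times can be read off the matrix directly.

```python
# Two-pool (litter, soil) carbon balance in matrix form dX/dt = B*u - M X.
# Input rate, turnover rates and the litter-to-soil transfer fraction are
# invented; CLM4.5's matrix version has many more pools and a vertical
# dimension.
u = 10.0                  # C input (g C / m2 / yr)
B = [1.0, 0.0]            # all input enters the litter pool
M = [[0.5, 0.0],          # litter turns over at 0.5 / yr
     [-0.2, 0.05]]        # 40% of litter turnover enters soil; soil 0.05 / yr

X = [0.0, 0.0]
dt = 0.1
for _ in range(20000):    # 2000 years of forward-Euler spin-up
    dX0 = B[0] * u - (M[0][0] * X[0] + M[0][1] * X[1])
    dX1 = B[1] * u - (M[1][0] * X[0] + M[1][1] * X[1])
    X = [X[0] + dt * dX0, X[1] + dt * dX1]

# diagnostics fall out of M directly, e.g. per-pool turnover times
residence_times = [1.0 / M[0][0], 1.0 / M[1][1]]
```

Here the analytic steady state is M X* = B u, i.e. X* = (20, 80), and the simulation spins up to exactly that fixed point; with the matrix in hand, such diagnostics need no time-stepping at all.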

  7. PWMScan: a fast tool for scanning entire genomes with a position-specific weight matrix.

    PubMed

    Ambrosini, Giovanna; Groux, Romain; Bucher, Philipp

    2018-03-05

Transcription factors (TFs) regulate gene expression by binding to specific short DNA sequences of 5 to 20 bp to regulate the rate of transcription of genetic information from DNA to messenger RNA. We present PWMScan, a fast web-based tool to scan server-resident genomes for matches to a user-supplied PWM or TF binding site model from a public database. The web server and source code are available at http://ccg.vital-it.ch/pwmscan and https://sourceforge.net/projects/pwmscan, respectively. giovanna.ambrosini@epfl.ch. Supplementary data are available at Bioinformatics online.
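What a PWM scan does can be shown in a few lines: slide the weight matrix over a sequence and keep windows whose summed score clears a cutoff. The 3-position log-odds matrix, sequence and cutoff below are invented; PWMScan itself scans whole genomes with indexed matching.

```python
# Minimal PWM scan: score every window of the sequence against a
# position weight matrix and report hits above a cutoff. All values
# (matrix, sequence, cutoff) are invented for illustration.
pwm = {  # log-odds score per base at each of 3 motif positions ("ACG")
    "A": [2.0, -3.0, -3.0],
    "C": [-3.0, 2.0, -3.0],
    "G": [-3.0, -3.0, 2.0],
    "T": [-3.0, -3.0, -3.0],
}
seq = "TTACGTACGA"
width, cutoff = 3, 5.0

hits = []
for i in range(len(seq) - width + 1):
    score = sum(pwm[seq[i + k]][k] for k in range(width))
    if score >= cutoff:
        hits.append((i, score))
```

On this toy input the two occurrences of the consensus "ACG" (at offsets 2 and 6) are the only windows that pass the cutoff.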

  8. Research Supervision: The Research Management Matrix

    ERIC Educational Resources Information Center

    Maxwell, T. W.; Smyth, Robyn

    2010-01-01

    We briefly make a case for re-conceptualising research project supervision/advising as the consideration of three inter-related areas: the learning and teaching process; developing the student; and producing the research project/outcome as a social practice. We use this as our theoretical base for an heuristic tool, "the research management…

  9. Design Concepts for Cooled Ceramic Matrix Composite Turbine Vanes

    NASA Technical Reports Server (NTRS)

    Boyle, Robert

    2014-01-01

    This project demonstrated that higher temperature capabilities of ceramic matrix composites (CMCs) can be used to reduce emissions and improve fuel consumption in gas turbine engines. The work involved closely coupling aerothermal and structural analyses for the first-stage vane of a high-pressure turbine (HPT). These vanes are actively cooled, typically using film cooling. Ceramic materials have structural and thermal properties different from conventional metals used for the first-stage HPT vane. This project identified vane configurations that satisfy CMC structural strength and life constraints while maintaining vane aerodynamic efficiency and reducing vane cooling to improve engine performance and reduce emissions. The project examined modifications to vane internal configurations to achieve the desired objectives. Thermal and pressure stresses are equally important, and both were analyzed using an ANSYS® structural analysis. Three-dimensional fluid and heat transfer analyses were used to determine vane aerodynamic performance and heat load distributions.

  10. Nanoscale Liquid Jets Shape New Line of Business

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Just as a pistol shrimp stuns its prey by quickly closing its oversized claw to shoot out a shock-inducing, high-velocity jet of water, NanoMatrix, Inc., is sending shockwaves throughout the nanotechnology world with a revolutionary, small-scale fabrication process that uses powerful liquid jets to cut and shape objects. Emanuel Barros, a former project engineer at NASA s Ames Research Center, set out to form the Santa Cruz, California-based NanoMatrix firm and materialize the micro/nano cutting process partially inspired by the water-spewing crustacean. Early on in his 6-year NASA career, Barros led the development of re-flown flight hardware for an award-winning Spacelab project called NeuroLab. This project, the sixteenth and final Spacelab mission, focused on a series of experiments to determine the effects of microgravity on the development of the mammalian nervous system.

  11. Local stresses in metal matrix composites subjected to thermal and mechanical loading

    NASA Technical Reports Server (NTRS)

    Highsmith, Alton L.; Shin, Donghee; Naik, Rajiv A.

    1990-01-01

    An elasticity solution has been used to analyze matrix stresses near the fiber/matrix interface in continuous fiber-reinforced metal-matrix composites, modeling the micromechanics in question in terms of a cylindrical fiber and cylindrical matrix sheath which is embedded in an orthotropic medium representing the composite. The model's predictions for lamina thermal and mechanical properties are applied to a laminate analysis determining ply-level stresses due to thermomechanical loading. A comparison is made between these results, which assume cylindrical symmetry, and the predictions yielded by a FEM model in which the fibers are arranged in a square array.

  12. The extension of the thermal-vacuum test optimization program to multiple flights

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Byrd, J.

    1981-01-01

    The thermal vacuum test optimization model developed to provide an approach to the optimization of a test program based on prediction of flight performance with a single flight option in mind is extended to consider reflight as in space shuttle missions. The concept of 'utility', developed under the name of 'availability', is used to follow performance through the various options encountered when the capabilities of reflight and retrievability of space shuttle are available. Also, a 'lost value' model is modified to produce a measure of the probability of a mission's success, achieving a desired utility using a minimal cost test strategy. The resulting matrix of probabilities and their associated costs provides a means for project management to evaluate various test and reflight strategies.

  13. Comparison Of Models Of Metal-Matrix Composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.; Johnson, W. S.; Naik, R. A.

    1994-01-01

    Report presents comparative review of four mathematical models of micromechanical behaviors of fiber/metal-matrix composite materials. Models differ in various details, all based on properties of fiber and matrix constituent materials, all involve square arrays of fibers continuous and parallel and all assume complete bonding between constituents. Computer programs implementing models used to predict properties and stress-vs.-strain behaviors of unidirectional- and cross-ply laminated composites made of boron fibers in aluminum matrices and silicon carbide fibers in titanium matrices. Stresses in fiber and matrix constituent materials also predicted.

  14. Sudbury project (University of Muenster-Ontario Geological Survey): Sr-Nd in heterolithic breccias and gabbroic dikes

    NASA Technical Reports Server (NTRS)

    Buhl, D.; Deutsch, A.; Lakomy, R.; Brockmeyer, P.; Dressler, B.

    1992-01-01

    One major objective of our Sudbury project was to define the origin and age of the huge breccia units below and above the Sudbury Igneous Complex (SIC). The heterolithic Footwall Breccia (FB) represents a part of the uplifted crater floor. It contains subrounded fragments up to several meters in size and lithic fragments with shock features (greater than 10 GPa) embedded into a fine- to medium-grained matrix. Epsilon(sub Nd)-epsilon(sub Sr) relationships point to almost exclusively parautochthonous precursor lithologies. The different textures of the matrix reflect the metamorphic history of the breccia layer; thermal annealing by the overlying hot impact melt sheet (SIC) at temperatures greater than 1000 C resulted in melting of the fine crushed material, followed by an episode of metasomatic K-feldspar growth and, finally, formation of low-grade minerals such as actinolite and chlorite. Isotope relationships in the Onaping breccias (Gray and Green Member) are much more complex. All attempts to date the breccia formation failed: zircons are entirely derived from country rocks and lack the pronounced Pb loss caused by the heat of the slowly cooling impact melt sheet (SIC). Rb-Sr techniques using either lithic fragments of different shock stages or the thin slab method set time limits for the apparently pervasive alkali mobility in these suevitic breccias. The data array and the intercept in the plots point to a major Rb-Sr fractionation around 1.54 Ga ago. This model age is in the same range as the age obtained for the metasomatic matrix of the FB. Rb-Sr dating of a shock event in impact-related breccias seems to be possible only if their matrix had suffered total melting by the hot melt sheet (FB) or if they contain a high fraction of impact melt (suevitic Onaping breccias), whereas the degree of shock metamorphism in rock or lithic fragments plays a minor role.
In the Sudbury case, however, the impact melt in the suevitic breccias is devitrified and recrystallized, which changed Rb/Sr ratios quite drastically. Therefore, the Onaping breccias give only age limits for alteration and low-grade metamorphism. The Sm-Nd system was not reset during the Sudbury event; clasts as well as the matrix in the FB and in the Onaping breccias show preimpact 'Archean' Nd isotope signatures.

  15. Unified continuum damage model for matrix cracking in composite rotor blades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pollayi, Hemaraju; Harursampath, Dineshkumar

    This paper deals with modeling of the first damage mode, matrix micro-cracking, in helicopter rotor/wind turbine blades and how it affects the overall cross-sectional stiffness. The helicopter/wind turbine rotor system operates in a highly dynamic and unsteady environment, leading to severe vibratory loads in the system. Repeated exposure to this loading condition can induce damage in the composite rotor blades. These rotor/turbine blades are generally made of fiber-reinforced laminated composites and exhibit various competing modes of damage such as matrix micro-cracking, delamination, and fiber breakage. There is a need to study the behavior of the composite rotor system under various key damage modes in composite materials for developing a Structural Health Monitoring (SHM) system. Each blade is modeled as a beam based on geometrically non-linear 3-D elasticity theory. Each blade thus splits into 2-D analyses of cross-sections and non-linear 1-D analyses along the beam reference curves. Two different tools are used here for the complete 3-D analysis: VABS for 2-D cross-sectional analysis and GEBT for 1-D beam analysis. Physically based failure models for the matrix in compression and tension loading are used in the present work. Matrix cracking is detected using two failure criteria, Matrix Failure in Compression and Matrix Failure in Tension, which are based on the recovered field. A strain variable is set which drives the damage variable for matrix cracking, and this damage variable is used to estimate the reduced cross-sectional stiffness. The matrix micro-cracking is modeled in two different approaches: (i) element-wise and (ii) node-wise. The procedure presented in this paper is implemented in VABS as a matrix micro-cracking modeling module. Three examples are presented to investigate the matrix failure model, illustrating the effect of matrix cracking on cross-sectional stiffness under a varying applied cyclic load.

  16. Neutron diffraction measurements and modeling of residual strains in metal matrix composites

    NASA Technical Reports Server (NTRS)

    Saigal, A.; Leisk, G. G.; Hubbard, C. R.; Misture, S. T.; Wang, X. L.

    1996-01-01

    Neutron diffraction measurements at room temperature are used to characterize the residual strains in tungsten fiber-reinforced copper matrix, tungsten fiber-reinforced Kanthal matrix, and diamond particulate-reinforced copper matrix composites. Results of finite element modeling are compared with the neutron diffraction data. In tungsten/Kanthal composites, the fibers are in compression, the matrix is in tension, and the thermal residual strains are a strong function of the volume fraction of fibers. In copper matrix composites, the matrix is in tension and the stresses are independent of the volume fraction of tungsten fibers or diamond particles and the assumed stress free temperature because of the low yield strength of the matrix phase.

  17. A project to establish a skills competency matrix for EU nurses.

    PubMed

    Cowan, David T; Norman, Ian J; Coopamah, Vinoda P

    Enhanced nurse workforce mobility in the European Union (EU) is seen as a remedy to shortages of nurses in some EU countries and a surplus in others. However, knowledge of differences in competence, culture, skill levels and working practices of nursing staff throughout EU countries is not fully documented because currently no tangible method exists to enable comparison. The European Healthcare Training and Accreditation Network (EHTAN) project intends to address this problem by establishing an assessment and evaluation methodology through the compilation of a skills competency matrix. To this end, subsequent to a review of documentation and literature on nursing competence definition and assessment, two versions of a nursing competence self-assessment questionnaire tool have been developed. The final competence matrix will be translated and disseminated for transnational use and it is hoped that this will inform EU and national policies on the training requirements of nurses and nursing mobility and facilitate the promotion of EU-wide recognition of nursing qualifications.

  18. Micro-mechanics modelling of smart materials

    NASA Astrophysics Data System (ADS)

    Shah, Syed Asim Ali

    Metal matrix ceramic-reinforced composites are rapidly becoming strong candidates as structural materials for many high-temperature engineering applications. Metal matrix composites (MMC) combine the ductile properties of the matrix with a brittle reinforcement phase, leading to high stiffness and strength with a reduction in structural weight. The main objective of using a metal matrix composite system is to increase service temperature or improve specific mechanical properties of structural components by replacing existing superalloys. The purpose of the study is to investigate, develop and implement a second-phase reinforcement alloy strengthening empirical model with SiCp-reinforced A359 aluminium alloy composites, focusing on the particle-matrix interface and the overall mechanical properties of the material. To predict the interfacial fracture strength of aluminium in the presence of silicon segregation, an empirical model has been modified. This model considers the interfacial energy caused by segregation of impurities at the interface and uses Griffith crack type arguments to predict the formation energies of impurities at the interface. On this basis, model simulations were conducted at the nano scale, specifically at the interface, and the interfacial strengthening behaviour of the reinforced aluminium alloy system was expressed in terms of elastic modulus. The numerical model shows success in making possible the prediction of trends in segregation and interfacial fracture strength behaviour in SiC particle-reinforced aluminium matrix composites.
Simulation models using various micro-scale modelling techniques were applied to the aluminium alloy matrix composite, strengthened with varying amounts of silicon carbide particulate, to predict the material state at critical points with properties of heat-treated Al-SiC. In this study an algorithm is developed to model a hard ceramic particle in a soft matrix with a clear, distinct interface, and a strain-based (rather than stress-based) relationship is proposed for the strengthening behaviour of the MMC at the interface, completing the numerical modelling of particulate-reinforced metal matrix composites.

  19. Application of mathematical modeling in sustained release delivery systems.

    PubMed

    Grassi, Mario; Grassi, Gabriele

    2014-08-01

    This review, taking the concept of mathematical modeling as its starting point, is aimed at the physical and mathematical description of the most important mechanisms regulating drug delivery from matrix systems. The precise knowledge of the delivery mechanisms allows us to set up powerful mathematical models which, in turn, are essential for the design and optimization of appropriate drug delivery systems. The fundamental mechanisms for drug delivery from matrices are represented by drug diffusion, matrix swelling, matrix erosion, drug dissolution with possible recrystallization (e.g., as in the case of amorphous and nanocrystalline drugs), initial drug distribution inside the matrix, matrix geometry, matrix size distribution (in the case of spherical matrices of different diameter) and osmotic pressure. Depending on matrix characteristics, the above-reported variables may play a different role in drug delivery; thus the mathematical model needs to be built solely on the most relevant mechanisms of the particular matrix considered. Despite the somewhat diffident behavior of the industrial world, in the light of the most recent findings, we believe that mathematical modeling may have a tremendous potential impact in the pharmaceutical field. We do believe that mathematical modeling will be more and more important in the future, especially in the light of the rapid advent of personalized medicine, a novel therapeutic approach intended to treat each single patient instead of the 'average' patient.

  20. SU-D-206-02: Evaluation of Partial Storage of the System Matrix for Cone Beam Computed Tomography Using a GPU Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matenine, D; Cote, G; Mascolo-Fortin, J

    2016-06-15

    Purpose: Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersections between the photons' trajectories and the object, also called ray-tracing or system matrix computation. This work evaluates different ways to store the system matrix, aiming to reconstruct dense image grids in reasonable time. Methods: We propose an optimized implementation of Siddon's algorithm using graphics processing units (GPUs) with a novel data storage scheme. The algorithm computes a part of the system matrix on demand, typically for one projection angle. The proposed method was enhanced with accelerating options: storage of larger subsets of the system matrix, systematic reuse of data via geometric symmetries, an arithmetic-rich parallel code and code configuration via machine learning. It was tested on geometries mimicking a cone beam CT acquisition of a human head. To realistically assess the execution time, the ray-tracing routines were integrated into a regularized Poisson-based reconstruction algorithm. The proposed scheme was also compared to a different approach, where the system matrix is fully pre-computed and loaded at reconstruction time. Results: Fast ray-tracing of realistic acquisition geometries, which often lack spatial symmetry properties, was enabled via the proposed method. Ray-tracing interleaved with projection and backprojection operations required significant additional time. In most cases, ray-tracing was shown to use about 66% of the total reconstruction time. In absolute terms, tracing times varied from 3.6 s to 7.5 min, depending on the problem size. The presence of geometrical symmetries allowed for non-negligible ray-tracing and reconstruction time reduction. Arithmetic-rich parallel code and machine learning permitted a modest reconstruction time reduction, on the order of 1%.
Conclusion: Partial system matrix storage permitted the reconstruction of higher 3D image grid sizes and larger projection datasets at the cost of additional time, when compared to the fully pre-computed approach. This work was supported in part by the Fonds de recherche du Québec - Nature et technologies (FRQ-NT). The authors acknowledge partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council of Canada (Grant No. 432290).
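The record above centers on Siddon's ray-tracing algorithm for computing system-matrix entries. As a rough illustration of the idea (a minimal 2-D sketch under simplifying assumptions, not the GPU implementation evaluated in the abstract), the intersection length of one ray with each pixel of a grid can be obtained from the sorted parametric crossings of the grid lines:

```python
import math

def siddon_2d(p0, p1, nx, ny, dx=1.0, dy=1.0):
    """Return {(ix, iy): intersection_length} for a ray from p0 to p1
    through an nx-by-ny grid of dx-by-dy pixels with origin at (0, 0)."""
    (x0, y0), (x1, y1) = p0, p1
    alphas = {0.0, 1.0}  # parametric positions of ray entry/exit and plane crossings
    if x1 != x0:
        for i in range(nx + 1):
            a = (i * dx - x0) / (x1 - x0)
            if 0.0 < a < 1.0:
                alphas.add(a)
    if y1 != y0:
        for j in range(ny + 1):
            a = (j * dy - y0) / (y1 - y0)
            if 0.0 < a < 1.0:
                alphas.add(a)
    alphas = sorted(alphas)
    length = math.hypot(x1 - x0, y1 - y0)
    weights = {}
    for a_in, a_out in zip(alphas[:-1], alphas[1:]):
        a_mid = 0.5 * (a_in + a_out)          # midpoint identifies the pixel
        ix = int((x0 + a_mid * (x1 - x0)) / dx)
        iy = int((y0 + a_mid * (y1 - y0)) / dy)
        if 0 <= ix < nx and 0 <= iy < ny:
            weights[(ix, iy)] = weights.get((ix, iy), 0.0) + (a_out - a_in) * length
    return weights

# diagonal ray across a 2x2 grid touches pixels (0,0) and (1,1)
w = siddon_2d((0.0, 0.0), (2.0, 2.0), nx=2, ny=2)
```

Each nonzero weight is one entry of a system-matrix row; computing rows on demand like this, rather than storing them all, is the trade-off the abstract evaluates.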

  1. Integrated fringe projection 3D scanning system for large-scale metrology based on laser tracker

    NASA Astrophysics Data System (ADS)

    Du, Hui; Chen, Xiaobo; Zhou, Dan; Guo, Gen; Xi, Juntong

    2017-10-01

    Large-scale components are widespread in the advanced manufacturing industry, and 3D profilometry plays a pivotal role in their quality control. This paper proposes a flexible, robust large-scale 3D scanning system that integrates a robot with a binocular structured-light scanner and a laser tracker. The measurement principle and construction of the integrated system are introduced, and a mathematical model is established for global data fusion. Subsequently, a flexible and robust method and mechanism are introduced for establishing the end coordinate system. Based on this method, a virtual robot noumenon is constructed for hand-eye calibration, and the transformation matrix between the end coordinate system and the world coordinate system is solved. A validation experiment was implemented to verify the proposed algorithms. First, the hand-eye transformation matrix was solved. Then a car body rear was measured 16 times to verify the global data fusion algorithm, and the 3D shape of the rear was reconstructed successfully.
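The hand-eye calibration described above chains rigid-body transforms between the scanner, the robot end frame and the tracker's world frame. A minimal sketch of that composition with 4×4 homogeneous matrices (frame names and numeric values are illustrative, not taken from the paper):

```python
def mat4_mul(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0, 0, 0, 1]]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity rotation for simplicity

# pose of the robot end frame in the world frame (e.g. from the laser tracker)
T_world_end = transform(I3, [1.0, 2.0, 3.0])
# scanner pose in the end frame (the hand-eye calibration result)
T_end_cam = transform(I3, [0.1, 0.0, 0.0])

# chaining the transforms maps scanner data into the world frame
T_world_cam = mat4_mul(T_world_end, T_end_cam)
```

With rotations included, the same two-matrix product carries every scan into the common world frame for global data fusion.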

  2. Fast matrix treatment of 3-D radiative transfer in vegetation canopies: SPARTACUS-Vegetation 1.1

    NASA Astrophysics Data System (ADS)

    Hogan, Robin J.; Quaife, Tristan; Braghiere, Renato

    2018-01-01

    A fast scheme is described to compute the 3-D interaction of solar radiation with vegetation canopies. The canopy is split in the horizontal plane into one clear region and one or more vegetated regions, and the two-stream equations are used for each, but with additional terms representing lateral exchange of radiation between regions that are proportional to the area of the interface between them. The resulting coupled set of ordinary differential equations is solved using the matrix-exponential method. The scheme is compared to solar Monte Carlo calculations for idealized scenes from the RAMI4PILPS intercomparison project, for open forest canopies and shrublands both with and without snow on the ground. Agreement is good in both the visible and infrared: for the cases compared, the root-mean-squared difference in reflectance, transmittance and canopy absorptance is 0.020, 0.038 and 0.033, respectively. The technique has potential application to weather and climate modelling.

  3. Time Dependent Solution for the He I Line Ratio Electron Temperature and Density Diagnostic in TEXTOR and DIII-D

    NASA Astrophysics Data System (ADS)

    Munoz Burgos, J. M.; Schmitz, O.; Unterberg, E. A.; Loch, S. D.; Balance, C. P.

    2010-11-01

    We developed a time-dependent solution for the He I line ratio diagnostic. The stationary solution is applied for L-mode at TEXTOR. The radial range is typically limited to a region near the separatrix due to metastable effects and the atomic data used. We overcome this problem by applying a time-dependent solution and thus avoid unphysical results. We use a new R-Matrix with Pseudostates and Convergent Close-Coupling electron-impact excitation and ionization atomic data set in the Collisional Radiative Model (CRM). We include contributions from higher Rydberg states in the CRM by means of the projection matrix. By applying this solution to the region near the wall and the stationary solution near the separatrix, we triple the radial range of the current diagnostic. We explore the possibility of extending this approach to H-mode plasmas in DIII-D by estimating line emission profiles from electron temperature and density Thomson scattering data.

  4. Nonlinear Penalized Estimation of True Q-Matrix in Cognitive Diagnostic Models

    ERIC Educational Resources Information Center

    Xiang, Rui

    2013-01-01

    A key issue of cognitive diagnostic models (CDMs) is the correct identification of Q-matrix which indicates the relationship between attributes and test items. Previous CDMs typically assumed a known Q-matrix provided by domain experts such as those who developed the questions. However, misspecifications of Q-matrix had been discovered in the past…

  5. Assessing Fit of Item Response Models Using the Information Matrix Test

    ERIC Educational Resources Information Center

    Ranger, Jochen; Kuhn, Jorg-Tobias

    2012-01-01

    The information matrix can equivalently be determined via the expectation of the Hessian matrix or the expectation of the outer product of the score vector. The identity of these two matrices, however, is only valid in case of a correctly specified model. Therefore, differences between the two versions of the observed information matrix indicate…
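The equality described above is easy to illustrate numerically: for a correctly specified Bernoulli model evaluated at the MLE, the Hessian-based and outer-product-of-scores versions of the information coincide. A minimal sketch (an illustration of the identity, not the paper's fit test):

```python
def info_hessian(xs, p):
    """Average negative Hessian of the Bernoulli log-likelihood at p."""
    return sum(x / p**2 + (1 - x) / (1 - p)**2 for x in xs) / len(xs)

def info_opg(xs, p):
    """Average squared score (outer product of gradients) at p."""
    return sum((x / p - (1 - x) / (1 - p))**2 for x in xs) / len(xs)

xs = [1, 0, 1, 1, 0, 1, 0, 1]   # illustrative binary responses
p_hat = sum(xs) / len(xs)       # MLE of the success probability

h = info_hessian(xs, p_hat)
g = info_opg(xs, p_hat)
# at the MLE both equal 1 / (p_hat * (1 - p_hat)); under misspecification they diverge
```

The fit test sketched in the abstract exploits exactly this: a large gap between the two versions signals a misspecified model.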

  6. Development and Validation of a Shear Punch Test Fixture

    DTIC Science & Technology

    2013-08-01

    Metal matrix composites (MMC) manufactured by friction stir processing (FSP) are being developed as part of a Technology Investment Fund (TIF) project, with a team of government departments and academics developing an FSP-based procedure to create surface metal matrix composites in aluminum alloys for potential application in light armoured vehicles.

  7. A systematic approach for locating optimum sites

    Treesearch

    Angel Ramos; Isabel Otero

    1979-01-01

    The basic information collected for landscape planning studies may be given the form of a "s x m" matrix, where s is the number of landscape units and m the number of data gathered for each unit. The problem of finding the optimum location for a given project is translated in the problem of ranking the series of vectors in the matrix which represent landscape...

  8. Automation in Photogrammetry,

    DTIC Science & Technology

    1980-07-25

    Topics include the digital terrain matrix (DTM) and digital planimetric data, combined and integrated into so-called "data bases"; rectification by projection with mechanical inversors to maintain the Scheimpflug condition, where some automation has been achieved with computer control to determine rectifier settings; and orthophoto production from DTM data that is not necessarily collected from the same photography as that from which the orthophoto is being produced.

  9. Statistical Analysis of Q-matrix Based Diagnostic Classification Models

    PubMed Central

    Chen, Yunxiao; Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang

    2014-01-01

    Diagnostic classification models have recently gained prominence in educational assessment, psychiatric evaluation, and many other disciplines. Central to the model specification is the so-called Q-matrix that provides a qualitative specification of the item-attribute relationship. In this paper, we develop theories on the identifiability for the Q-matrix under the DINA and the DINO models. We further propose an estimation procedure for the Q-matrix through the regularized maximum likelihood. The applicability of this procedure is not limited to the DINA or the DINO model and it can be applied to essentially all Q-matrix based diagnostic classification models. Simulation studies are conducted to illustrate its performance. Furthermore, two case studies are presented. The first case is a data set on fraction subtraction (educational application) and the second case is a subsample of the National Epidemiological Survey on Alcohol and Related Conditions concerning the social anxiety disorder (psychiatric application). PMID:26294801
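For context, the DINA model referenced above is conjunctive: an examinee answers item j correctly with probability 1 − s_j (one minus the slip parameter) if they master every attribute that row q_j of the Q-matrix requires, and with the guessing probability g_j otherwise. A minimal sketch (parameter values are illustrative):

```python
def dina_prob(alpha, q, slip, guess):
    """P(correct response) under the DINA model.

    alpha: examinee attribute-mastery vector (0/1 entries)
    q:     Q-matrix row for the item (0/1 entries)
    """
    # eta = 1 only if every required attribute is mastered (conjunctive rule)
    eta = all(a >= r for a, r in zip(alpha, q))
    return 1.0 - slip if eta else guess

# item requires attributes 1 and 2; examinee masters both -> 1 - slip
p_master = dina_prob([1, 1, 0], [1, 1, 0], slip=0.1, guess=0.2)
# missing one required attribute collapses to the guessing rate
p_nonmaster = dina_prob([1, 0, 0], [1, 1, 0], slip=0.1, guess=0.2)
```

The DINO model replaces the conjunctive `all` with a disjunctive rule; the regularized likelihood in the abstract estimates the q rows themselves rather than assuming them known.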

  10. The Cauchy Two-Matrix Model, C-Toda Lattice and CKP Hierarchy

    NASA Astrophysics Data System (ADS)

    Li, Chunxia; Li, Shi-Hao

    2018-06-01

    This paper studies the Cauchy two-matrix model and its corresponding integrable hierarchy with the help of orthogonal polynomial theory and Toda-type equations. Starting from the symmetric reduction of Cauchy biorthogonal polynomials, we derive the Toda equation of CKP type (or the C-Toda lattice) as well as its Lax pair by introducing time flows. Then, matrix integral solutions to the C-Toda lattice are extended to give solutions to the CKP hierarchy, which reveals that the time-dependent partition function of the Cauchy two-matrix model is nothing but the τ-function of the CKP hierarchy. Finally, the connection between the Cauchy two-matrix model and the Bures ensemble is established from the point of view of integrable systems.

  11. Mathematical model of water transport in Bacon and alkaline matrix-type hydrogen-oxygen fuel cells

    NASA Technical Reports Server (NTRS)

    Prokopius, P. R.; Easter, R. W.

    1972-01-01

    Based on general mass continuity and diffusive transport equations, a mathematical model was developed that simulates the transport of water in Bacon and alkaline-matrix fuel cells. The derived model was validated by using it to analytically reproduce various Bacon and matrix-cell experimental water transport transients.
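The mass-continuity and diffusive-transport equations mentioned above take a simple discrete form; a minimal explicit finite-difference sketch with zero-flux boundaries (a generic illustration of such transport models, not the Bacon/matrix-cell model itself) conserves total water mass while smoothing the concentration profile:

```python
def diffuse(c, D, dx, dt, steps):
    """Explicit finite-difference diffusion with zero-flux (reflective) boundaries."""
    r = D * dt / dx**2  # must be <= 0.5 for stability of the explicit scheme
    for _ in range(steps):
        nxt = []
        for i in range(len(c)):
            left = c[i - 1] if i > 0 else c[i]          # mirror at the boundary
            right = c[i + 1] if i < len(c) - 1 else c[i]
            nxt.append(c[i] + r * (left - 2.0 * c[i] + right))
        c = nxt
    return c

# unit mass of water initially concentrated in the middle cell spreads out
c = diffuse([0.0, 0.0, 1.0, 0.0, 0.0], D=1.0, dx=1.0, dt=0.2, steps=50)
```

Zero-flux boundaries make the scheme mass-conserving, which is the discrete analogue of the mass-continuity constraint the model is built on.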

  12. Take the Red Pill: A New Matrix of Literacy

    ERIC Educational Resources Information Center

    Brabazon, Tara

    2011-01-01

    Using "The Matrix" film series as an inspiration, aspiration and model, this article integrates horizontal and vertical models of literacy. My goal is to create a new matrix for media literacy, aligning the best of analogue depth models for meaning making with the rapid scrolling, clicking and moving through the read-write web. To…

  13. Numerical modelling of transdermal delivery from matrix systems: parametric study and experimental validation with silicone matrices.

    PubMed

    Snorradóttir, Bergthóra S; Jónsdóttir, Fjóla; Sigurdsson, Sven Th; Másson, Már

    2014-08-01

    A model is presented for transdermal drug delivery from single-layered silicone matrix systems. The work is based on our previous results that, in particular, extend the well-known Higuchi model. Recently, we have introduced a numerical transient model describing matrix systems where the drug dissolution can be non-instantaneous. Furthermore, our model can describe complex interactions within a multi-layered matrix and the matrix to skin boundary. The power of the modelling approach presented here is further illustrated by allowing the possibility of a donor solution. The model is validated by a comparison with experimental data, as well as validating the parameter values against each other, using various configurations with donor solution, silicone matrix and skin. Our results show that the model is a good approximation to real multi-layered delivery systems. The model offers the ability of comparing drug release for ibuprofen and diclofenac, which cannot be analysed by the Higuchi model because the dissolution in the latter case turns out to be limited. The experiments and numerical model outlined in this study could also be adjusted to more general formulations, which enhances the utility of the numerical model as a design tool for the development of drug-loaded matrices for trans-membrane and transdermal delivery. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
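The Higuchi model that this work extends predicts square-root-of-time release, Q(t) = sqrt(D (2 C0 − Cs) Cs t), for the cumulative amount released per unit area from a matrix whose drug loading C0 exceeds the solubility Cs. A minimal sketch (parameter values are illustrative, not from the silicone-matrix experiments):

```python
import math

def higuchi_release(t, D, c0, cs):
    """Cumulative drug released per unit area under the classical Higuchi model.

    Valid for c0 >> cs, instantaneous dissolution and a pseudo-steady state --
    exactly the assumptions the abstract's numerical model relaxes.
    """
    return math.sqrt(D * (2.0 * c0 - cs) * cs * t)

# release scales with sqrt(t): quadrupling the time doubles the released amount
q1 = higuchi_release(1.0, D=1e-6, c0=50.0, cs=1.0)
q4 = higuchi_release(4.0, D=1e-6, c0=50.0, cs=1.0)
```

When dissolution is rate-limiting (as reported for diclofenac), this closed form breaks down, which is why the abstract's transient numerical model is needed.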

  14. Design of monocular head-mounted displays for increased indoor firefighting safety and efficiency

    NASA Astrophysics Data System (ADS)

    Wilson, Joel; Steingart, Dan; Romero, Russell; Reynolds, Jessica; Mellers, Eric; Redfern, Andrew; Lim, Lloyd; Watts, William; Patton, Colin; Baker, Jessica; Wright, Paul

    2005-05-01

    Four monocular Head-Mounted Display (HMD) prototypes from the Fire Information and Rescue Equipment (FIRE) project at UC Berkeley are presented. The FIRE project aims to give firefighters a system of information technology tools for safer and more efficient firefighting in large buildings. The paper begins by describing the FIRE project and its use of a custom wireless sensor network (WSN) called SmokeNet for personnel tracking. The project aims to address urban/industrial firefighting procedures in need of improvement. Two "user-needs" studies with the Chicago and Berkeley Fire Departments are briefly presented. The FIRE project's initial HMD prototype designs are then discussed with regard to feedback from the user-needs studies. These prototypes are evaluated in their potential costs and benefits to firefighters and found to need improvement. Next, some currently available commercial HMDs are reviewed and compared in their cost, performance, and potential for use by firefighters. Feedback from the Berkeley Fire Department user-needs study, in which the initial prototypes were demonstrated, is compiled into a concept selection matrix for the next prototypes. This matrix is used to evaluate a variety of HMDs, including some of the commercial units presented, and to select the best design options. Finally, the current prototypes of the two best design options are presented and discussed.

  15. Low-rank matrix fitting based on subspace perturbation analysis with applications to structure from motion.

    PubMed

    Jia, Hongjun; Martinez, Aleix M

    2009-05-01

    The task of finding a low-rank (r) matrix that best fits an original data matrix of higher rank is a recurring problem in science and engineering. The problem becomes especially difficult when the original data matrix has some missing entries and contains an unknown additive noise term in the remaining elements. The former problem can be solved by concatenating a set of r-column matrices that share a common single r-dimensional solution space. Unfortunately, the number of possible submatrices is generally very large and, hence, the results obtained with one set of r-column matrices will generally be different from that captured by a different set. Ideally, we would like to find that solution that is least affected by noise. This requires that we determine which of the r-column matrices (i.e., which of the original feature points) are less influenced by the unknown noise term. This paper presents a criterion to successfully carry out such a selection. Our key result is to formally prove that the more distinct the r vectors of the r-column matrices are, the less they are swayed by noise. This key result is then combined with the use of a noise model to derive an upper bound for the effect that noise and occlusions have on each of the r-column matrices. It is shown how this criterion can be effectively used to recover the noise-free matrix of rank r. Finally, we derive the affine and projective structure-from-motion (SFM) algorithms using the proposed criterion. Extensive validation on synthetic and real data sets shows the superiority of the proposed approach over the state of the art.
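As background for the low-rank fitting problem above, the best rank-1 approximation of a complete, noise-free matrix is its dominant singular triple, which power iteration recovers. A minimal sketch (the textbook building block, not the subspace-perturbation selection criterion the paper proposes):

```python
import math

def rank1_fit(A, iters=100):
    """Dominant singular triple (s, u, v) of A via alternating power iteration."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    s = 0.0
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]  # u <- A v
        nu = math.sqrt(sum(x * x for x in u))
        u = [x / nu for x in u]
        w = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]  # w <- A^T u
        s = math.sqrt(sum(x * x for x in w))
        v = [x / s for x in w]
    return s, u, v

# best rank-1 fit of diag(2, 1) keeps the larger singular direction
s, u, v = rank1_fit([[2.0, 0.0], [0.0, 1.0]])
approx = [[s * u[i] * v[j] for j in range(2)] for i in range(2)]
```

With missing entries and noise, this simple picture fails, which is exactly the regime the abstract's submatrix-selection criterion addresses.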

  16. Snapshot retinal imaging Mueller matrix polarimeter

    NASA Astrophysics Data System (ADS)

    Wang, Yifan; Kudenov, Michael; Kashani, Amir; Schwiegerling, Jim; Escuti, Michael

    2015-09-01

    Early diagnosis of glaucoma, a leading cause of visual impairment, is critical for successful treatment. Imaging polarimetry has been shown to have advantages in early detection of structural changes in the retina. Here, we theoretically and experimentally present a snapshot Mueller matrix polarimeter fundus camera, which has the potential to record the polarization-altering characteristics of the retina with a single snapshot. It is made by incorporating polarization gratings into a fundus camera design. Complete Mueller matrix data sets can be obtained by analyzing the polarization fringes projected onto the image plane. In this paper, we describe the experimental implementation of the snapshot retinal imaging Mueller matrix polarimeter (SRIMMP), highlight issues related to calibration, and provide preliminary images acquired from the camera.

  17. Modeling extracellular matrix degradation balance with proteinase/transglutaminase cycle.

    PubMed

    Larreta-Garde, Veronique; Berry, Hugues

    2002-07-07

    Extracellular matrix mass balance is implicated in many physiological and pathological events, such as metastasis dissemination. Widely studied, its destructive part is mainly catalysed by extracellular proteinases. Conversely, the properties of the constructive part are less obvious, cellular neo-synthesis being usually considered as its only element. In this paper, we introduce the action of transglutaminase in a mathematical model for extracellular matrix remodeling. This extracellular enzyme, catalysing intermolecular protein cross-linking, is considered here as a reverse proteinase as far as the extracellular matrix physical state is concerned. The model is based on a proteinase/transglutaminase cycle interconverting insoluble matrix and soluble proteolysis fragments, with regulation of cellular proteinase expression by the fragments. Under "closed" (batch) conditions, i.e. neglecting matrix influx and fragment efflux from the system, the model is bistable, with reversible hysteresis. Extracellular matrix protein concentration abruptly switches from low to high levels when transglutaminase activity exceeds a threshold value. Proteinase concentration usually follows the reverse complementary kinetics, but can become apparently uncoupled from extracellular matrix concentration for some parameter values. When matrix production by the cells and fragment degradation are taken into account, the dynamics change to sustained oscillations because of the emergence of a stable limit cycle. Transitions out of and into oscillation areas are controlled by the model parameters. Biological interpretation indicates that these oscillations could represent the normal homeostatic situation, whereas the other exhibited dynamics can be related to pathologies such as tumor invasion or fibrosis. These results allow us to discuss the insights that the model could contribute to the comprehension of these complex biological events.

  18. Estimation of Covariance Matrix on Bi-Response Longitudinal Data Analysis with Penalized Spline Regression

    NASA Astrophysics Data System (ADS)

    Islamiyati, A.; Fatmawati; Chamidah, N.

    2018-03-01

    The correlation assumption in bi-response longitudinal data concerns the measurements both between the subjects of observation and between the responses. This induces autocorrelation of the errors, which can be handled by means of a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. Penalized splines involve knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with the covariance matrix gives a smaller error value than the model without the covariance matrix.
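
    A minimal sketch of the weighted penalized spline idea, under assumed notation (a truncated-linear basis, a ridge penalty on the knot coefficients only, and a supplied weight matrix W standing in for the inverse error covariance), not the authors' exact formulation:

```python
import numpy as np

def penalized_spline_fit(x, y, knots, lam, W=None):
    # Truncated-linear spline basis: [1, x, (x - k)_+ for each knot].
    X = np.column_stack([np.ones_like(x), x] +
                        [np.clip(x - k, 0.0, None) for k in knots])
    # Ridge-type penalty on the knot coefficients only.
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))
    # W plays the role of the inverse error covariance; identity recovers
    # the unweighted fit.
    W = np.eye(len(y)) if W is None else W
    # Weighted penalized least squares:
    #   beta = (X' W X + lam D)^{-1} X' W y
    beta = np.linalg.solve(X.T @ W @ X + lam * D, X.T @ W @ y)
    return X @ beta
```

    Passing an estimated inverse covariance matrix as `W` is what turns this into the weighted fit the abstract compares against the unweighted one.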

  19. Time-dependent deformation of titanium metal matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.; Bahei-El-din, Y. A.; Mirdamadi, M.

    1995-01-01

    A three-dimensional finite element program called VISCOPAC was developed and used to conduct a micromechanics analysis of titanium metal matrix composites. The VISCOPAC program uses a modified Eisenberg-Yen thermo-viscoplastic constitutive model to predict matrix behavior under thermomechanical fatigue loading. The analysis incorporated temperature-dependent elastic properties in the fiber and temperature-dependent viscoplastic properties in the matrix. The material model was described and the necessary material constants were determined experimentally. Fiber-matrix interfacial behavior was analyzed using a discrete fiber-matrix model. The thermal residual stresses due to the fabrication cycle were predicted with a failed interface. The failed interface resulted in lower thermal residual stresses in the matrix and fiber. Stresses due to a uniform transverse load were calculated at two temperatures: room temperature and an elevated temperature of 650 C. At both temperatures, a large stress concentration was calculated when the interface had failed. The results indicate the importance of accurately accounting for fiber-matrix interface failure and the need for a micromechanics-based analytical technique to understand and predict the behavior of titanium metal matrix composites.

  20. Faces of matrix models

    NASA Astrophysics Data System (ADS)

    Morozov, A.

    2012-08-01

    Partition functions of eigenvalue matrix models possess a number of very different descriptions: as matrix integrals, as solutions to linear and nonlinear equations, as τ-functions of integrable hierarchies and as special-geometry prepotentials, as result of the action of W-operators and of various recursions on elementary input data, as gluing of certain elementary building blocks. All this explains the central role of such matrix models in modern mathematical physics: they provide the basic "special functions" to express the answers and relations between them, and they serve as a dream model of what one should try to achieve in any other field.

  1. NLTE steady-state response matrix method.

    NASA Astrophysics Data System (ADS)

    Faussurier, G.; More, R. M.

    2000-05-01

    A connection between atomic kinetics and non-equilibrium thermodynamics has been recently established by using a collisional-radiative model modified to include line absorption. The calculated net emission can be expressed as a non-local thermodynamic equilibrium (NLTE) symmetric response matrix. In the paper, this connection is extended to both the average-atom model and Busquet's model (RAdiative-Dependent IOnization Model, RADIOM). The main properties of the response matrix remain valid. The RADIOM source function found in the literature leads to a diagonal response matrix, stressing the absence of any frequency redistribution among the frequency groups at this order of calculation.

  2. Matrix approach to uncertainty assessment and reduction for modeling terrestrial carbon cycle

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Xia, J.; Ahlström, A.; Zhou, S.; Huang, Y.; Shi, Z.; Wang, Y.; Du, Z.; Lu, X.

    2017-12-01

    Terrestrial ecosystems absorb approximately 30% of the anthropogenic carbon dioxide emissions. This estimate has been deduced indirectly: combining analyses of atmospheric carbon dioxide concentrations with ocean observations to infer the net terrestrial carbon flux. In contrast, when knowledge about the terrestrial carbon cycle is integrated into different terrestrial carbon models, those models make widely different predictions. To improve the terrestrial carbon models, we have recently developed a matrix approach to uncertainty assessment and reduction. Specifically, the terrestrial carbon cycle has been commonly represented by a series of carbon balance equations to track carbon influxes into and effluxes out of individual pools in earth system models. This representation matches our understanding of carbon cycle processes well and can be reorganized into one matrix equation without changing any modeled carbon cycle processes and mechanisms. We have developed matrix equations of several global land C cycle models, including CLM3.5, 4.0 and 4.5, CABLE, LPJ-GUESS, and ORCHIDEE. Indeed, the matrix equation is generic and can be applied to other land carbon models. This matrix approach offers a suite of new diagnostic tools, such as the 3-dimensional (3-D) parameter space, traceability analysis, and variance decomposition, for uncertainty analysis. For example, predictions of carbon dynamics with complex land models can be placed in a 3-D parameter space (carbon input, residence time, and storage potential) as a common metric to measure how model predictions differ. These differences can be traced to their sources by decomposing model predictions into a hierarchy of traceable components. Then, variance decomposition can help attribute the spread in predictions among multiple models to precisely identify sources of uncertainty. The highly uncertain components can be constrained by data, as the matrix equation makes data assimilation computationally possible. We will illustrate various applications of this matrix approach to uncertainty assessment and reduction for terrestrial carbon cycle models.
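
    The reorganization of pool balance equations into one matrix equation can be sketched with a toy pool model. All pool names, turnover rates, and transfer fractions below are illustrative assumptions, not values from any of the models listed:

```python
import numpy as np

# Hypothetical 3-pool system (e.g. litter, fast soil, slow soil carbon);
# rates and transfer fractions are illustrative only.
K = np.diag([0.5, 0.1, 0.01])      # turnover rates of each pool (1/yr)
A = np.array([[-1.0,  0.0,  0.0],  # A[i, j] (i != j): fraction of pool j's
              [ 0.4, -1.0,  0.0],  # outflow transferred to pool i
              [0.05,  0.2, -1.0]])
b = np.array([1.0, 0.0, 0.0])      # allocation of external carbon input
u = 10.0                           # carbon input flux (e.g. NPP)

# Matrix form of the pool balance equations: dX/dt = b*u + A @ K @ X.
# Steady-state storage and ecosystem residence time follow directly:
X_star = -np.linalg.solve(A @ K, b * u)
residence_time = X_star.sum() / u
```

    Two of the "common metric" axes mentioned above (carbon input and residence time) fall straight out of this form, which is one reason the matrix representation makes model intercomparison tractable.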

  3. Exact solution of corner-modified banded block-Toeplitz eigensystems

    NASA Astrophysics Data System (ADS)

    Cobanera, Emilio; Alase, Abhijeet; Ortiz, Gerardo; Viola, Lorenza

    2017-05-01

    Motivated by the challenge of seeking a rigorous foundation for the bulk-boundary correspondence for free fermions, we introduce an algorithm for determining exactly the spectrum and a generalized-eigenvector basis of a class of banded block quasi-Toeplitz matrices that we call corner-modified. Corner modifications of otherwise arbitrary banded block-Toeplitz matrices capture the effect of boundary conditions and the associated breakdown of translational invariance. Our algorithm leverages the interplay between a non-standard, projector-based method of kernel determination (physically, a bulk-boundary separation) and families of linear representations of the algebra of matrix Laurent polynomials. Thanks to the fact that these representations act on infinite-dimensional carrier spaces in which translation symmetry is restored, it becomes possible to determine the eigensystem of an auxiliary projected block-Laurent matrix. This results in an analytic eigenvector Ansatz, independent of the system size, which we prove is guaranteed to contain the full solution of the original finite-dimensional problem. The actual solution is then obtained by imposing compatibility with a boundary matrix, whose shape is also independent of system size. As an application, we show analytically that eigenvectors of short-ranged fermionic tight-binding models may display power-law corrections to exponential behavior, and demonstrate the phenomenon for the paradigmatic Majorana chain of Kitaev.

  4. A Matlab toolkit for three-dimensional electrical impedance tomography: a contribution to the Electrical Impedance and Diffuse Optical Reconstruction Software project

    NASA Astrophysics Data System (ADS)

    Polydorides, Nick; Lionheart, William R. B.

    2002-12-01

    The objective of the Electrical Impedance and Diffuse Optical Reconstruction Software project is to develop freely available software that can be used to reconstruct electrical or optical material properties from boundary measurements. Nonlinear and ill-posed problems such as electrical impedance and optical tomography are typically approached using a finite element model for the forward calculations and a regularized nonlinear solver for obtaining a unique and stable inverse solution. Most of the commercially available finite element programs are unsuitable for solving these problems because of their conventional, inefficient way of calculating the Jacobian and their lack of accurate electrode modelling. A complete package for the two-dimensional EIT problem was officially released by Vauhkonen et al in the second half of 2000. However, most industrial and medical electrical imaging problems are fundamentally three-dimensional. To assist the development we have developed and released a free toolkit of Matlab routines which can be employed to solve the forward and inverse EIT problems in three dimensions based on the complete electrode model, along with some basic visualization utilities, in the hope that it will stimulate further development. We also include a derivation of the formula for the Jacobian (or sensitivity) matrix based on the complete electrode model.

  5. Continuous fiber ceramic matrix composites for heat engine components

    NASA Technical Reports Server (NTRS)

    Tripp, David E.

    1988-01-01

    High strength at elevated temperatures, low density, resistance to wear, and abundance of nonstrategic raw materials make structural ceramics attractive for advanced heat engine applications. Unfortunately, ceramics have a low fracture toughness and fail catastrophically because of overload, impact, and contact stresses. Ceramic matrix composites provide the means to achieve improved fracture toughness while retaining desirable characteristics, such as high strength and low density. Materials scientists and engineers are trying to develop the ideal fibers and matrices to achieve the optimum ceramic matrix composite properties. A need exists for the development of failure models for the design of ceramic matrix composite heat engine components. Phenomenological failure models are currently the most frequently used in industry, but they are deterministic and do not adequately describe ceramic matrix composite behavior. Semi-empirical models were proposed, which relate the failure of notched composite laminates to the stress a characteristic distance away from the notch. Shear lag models describe composite failure modes at the micromechanics level. The enhanced matrix cracking stress occurs at the same applied stress level predicted by the two models of steady state cracking. Finally, statistical models take into consideration the distribution in composite failure strength. The intent is to develop these models into computer algorithms for the failure analysis of ceramic matrix composites under monotonically increasing loads. The algorithms will be included in a postprocessor to general purpose finite element programs.

  6. Correction of projective distortion in long-image-sequence mosaics without prior information

    NASA Astrophysics Data System (ADS)

    Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie

    2010-04-01

    Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. 
The proposed method is shown to be effective and suitable for real-time implementation.
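
    A sketch of the scale-reset step, under the assumption that the single affine scale factor is taken as the geometric mean of the singular values of the 2x2 linear part (the paper's exact choice is not specified here, so this is one plausible reading):

```python
import numpy as np

def reset_affine_scale(A):
    # A is a 2x3 affine transform; its 2x2 linear part carries rotation,
    # shear, and scale. The singular values of that part give the scaling
    # along principal directions; their geometric mean is a natural single
    # "scale factor" (close to 1 for adjacent frames, per the abstract).
    L = A[:2, :2]
    s = float(np.sqrt(np.prod(np.linalg.svd(L, compute_uv=False))))
    A_fixed = A.copy()
    A_fixed[:2, :2] = L / s            # scale reset to 1; pose preserved
    return A_fixed, s
```

    Applying this to each frame-to-mosaic transform keeps the transformed image size essentially unchanged, at the cost of the small matching error the abstract acknowledges.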

  7. Targeting Learning Needs in an Australian Aid Project in Thailand.

    ERIC Educational Resources Information Center

    Kinder, Rex; Karawanan, Chaisak

    1996-01-01

    The Thailand Land Titling Project includes a training and development component aimed at long-term sustainability. A training target matrix was developed to identify the knowledge, skills, experience, and performance standards required and needs for training at various levels. Six broad and flexible career paths allow for logical succession,…

  8. An Overview of Three PCDC Projects.

    ERIC Educational Resources Information Center

    Horbaly, Marilyn; And Others

    This report provides in matrix form a comprehensive overview of three Parent Child Development Centers (PCDC) projects located in Birmingham, Houston, and New Orleans. The report is divided into five sections. In Section I, the introduction, a brief description is given of the study's purpose. Section II provides demographic data from each of the…

  9. Non-Finite Complements in Russian, Serbian/Croatian, and Macedonian

    ERIC Educational Resources Information Center

    Kim, Bo Ra

    2010-01-01

    This study investigates the coherence properties of non-finite complements in Russian, Serbian/Croatian, and Macedonian. I demonstrate that Slavic non-finite complements do not project a uniform syntactic structure. The maximal projection of non-finite complements is not fixed but depends on the selectional properties of the matrix verb. I present…

  10. A projection-free method for representing plane-wave DFT results in an atom-centered basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunnington, Benjamin D.; Schmidt, J. R., E-mail: schmidt@chem.wisc.edu

    2015-09-14

    Plane wave density functional theory (DFT) is a powerful tool for gaining accurate, atomic level insight into bulk and surface structures. Yet, the delocalized nature of the plane wave basis set hinders the application of many powerful post-computation analysis approaches, many of which rely on localized atom-centered basis sets. Traditionally, this gap has been bridged via projection-based techniques from a plane wave to atom-centered basis. We instead propose an alternative projection-free approach utilizing direct calculation of matrix elements of the converged plane wave DFT Hamiltonian in an atom-centered basis. This projection-free approach yields a number of compelling advantages, including strict orthonormality of the resulting bands without artificial band mixing and access to the Hamiltonian matrix elements, while faithfully preserving the underlying DFT band structure. The resulting atomic orbital representation of the Kohn-Sham wavefunction and Hamiltonian provides a gateway to a wide variety of analysis approaches. We demonstrate the utility of the approach for a diverse set of chemical systems and example analysis approaches.

  11. QRAP: A numerical code for projected (Q)uasiparticle (RA)ndom (P)hase approximation

    NASA Astrophysics Data System (ADS)

    Samana, A. R.; Krmpotić, F.; Bertulani, C. A.

    2010-06-01

    A computer code for the quasiparticle random phase approximation (QRPA) and projected quasiparticle random phase approximation (PQRPA) models of nuclear structure is explained in detail. The residual interaction is approximated by a simple δ-force. An important application of the code consists in evaluating nuclear matrix elements involved in neutrino-nucleus reactions. As an example, cross sections for 56Fe and 12C are calculated and the code output is explained. The application to other nuclei and the description of other nuclear and weak decay processes are also discussed.

    Program summary.
    Title of program: QRAP (Quasiparticle RAndom Phase approximation).
    Computers: the code has been created on a PC, but also runs on UNIX or LINUX machines.
    Operating systems: WINDOWS or UNIX.
    Program language used: Fortran-77.
    Memory required to execute with typical data: 16 MB of RAM and 2 MB of hard disk space.
    No. of lines in distributed program, including test data, etc.: ~8000.
    No. of bytes in distributed program, including test data, etc.: ~256 kB.
    Distribution format: tar.gz.
    Nature of physical problem: the program calculates neutrino- and antineutrino-nucleus cross sections as a function of the incident neutrino energy, and muon capture rates, using the QRPA or PQRPA as nuclear structure models.
    Method of solution: the QRPA, or PQRPA, equations are solved in a self-consistent way for even-even nuclei. The nuclear matrix elements for the neutrino-nucleus interaction are treated as the beta inverse reaction of odd-odd nuclei as a function of the transfer momentum.
    Typical running time: ≈ 5 min on a 3 GHz processor for Data set 1.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhardt, A. F.; Smith, P. M.

    This project was a collaborative effort between the University of California, Lawrence Livermore National Laboratory (LLNL) and FlexICs, Inc. to develop thin film transistor (TFT) electronics for active matrix displays.

  13. The Significance of Quality Assurance within Model Intercomparison Projects at the World Data Centre for Climate (WDCC)

    NASA Astrophysics Data System (ADS)

    Toussaint, F.; Hoeck, H.; Stockhause, M.; Lautenschlager, M.

    2014-12-01

    The classical goals of a quality assessment system in the data life cycle are (1) to encourage data creators to improve their quality assessment procedures to reach the next quality level and (2) to enable data consumers to decide whether a dataset has a quality that is sufficient for usage in the target application, i.e. to appraise the data usability for their own purpose. As the data volumes of projects and the interdisciplinarity of data usage grow, the need for homogeneous structure and standardised notation of data and metadata increases. This third aspect is especially valid for the data repositories, as they manage data through machine agents. Checks for homogeneity and consistency in early parts of the workflow thus become essential to cope with today's data volumes. Selected parts of the workflow in the model intercomparison project CMIP5, the archival of the data for the interdisciplinary user community of the IPCC-DDC AR5, and the associated quality checks are reviewed. We compare data and metadata checks and relate different types of checks to their positions in the data life cycle. The project's data citation approach is included in the discussion, with focus on the time necessary to comply with the project's requirements for formal data citations and the demand for the availability of such citations. In order to make the quality assessments of different projects comparable, WDCC developed a generic Quality Assessment System. Based on the self-assessment approach of a maturity matrix, an objective and uniform quality level system for all data at WDCC is derived, consisting of five maturity quality levels.

  14. Concurrent engineering research center

    NASA Technical Reports Server (NTRS)

    Callahan, John R.

    1995-01-01

    The projects undertaken by The Concurrent Engineering Research Center (CERC) at West Virginia University are reported and summarized. CERC's participation in the Department of Defense's Defense Advanced Research Project relating to technology needed to improve the product development process is described, particularly in the area of advanced weapon systems. The efforts committed to improving collaboration among the diverse and distributed health care providers are reported, along with the research activities for NASA in Independent Software Verification and Validation. CERC also takes part in the electronic respirator certification initiated by The National Institute for Occupational Safety and Health, as well as in the efforts to find a solution to the problem of producing environment-friendly end-products for product developers worldwide. The 3M Fiber Metal Matrix Composite Model Factory Program is discussed. CERC technologies, facilities, and personnel-related issues are described, along with its library and technical services and recent publications.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blume-Kohout, Robin J; Scholten, Travis L.

    Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods typically rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) should not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.
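
    For contrast, the classical unconstrained setting the abstract refers to can be sketched with a one-parameter Gaussian mean model. The chi-square threshold below is the standard Wilks recipe, which the paper argues breaks down under the positivity constraint ρ ≥ 0:

```python
import numpy as np

def loglik(x, mu):
    # Log-likelihood of i.i.d. N(mu, 1) data, up to an additive constant.
    return -0.5 * np.sum((x - mu) ** 2)

def wilks_test(x, chi2_quantile=3.841):
    # Classical (unconstrained) Wilks setup: twice the loglikelihood ratio
    # between the fitted one-parameter model and the null model (mu = 0)
    # is compared against a chi-square quantile with 1 degree of freedom
    # (3.841 is the 95% point of chi^2_1).
    llr = 2.0 * (loglik(x, x.mean()) - loglik(x, 0.0))
    return llr, llr > chi2_quantile
```

    When the parameter sits on a boundary, as with the positivity constraint in state tomography, the chi-square null distribution assumed here is no longer valid, which is exactly the gap the metric-projected LAN result addresses.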

  16. Puerto Rico water resources planning model program description

    USGS Publications Warehouse

    Moody, D.W.; Maddock, Thomas; Karlinger, M.R.; Lloyd, J.J.

    1973-01-01

    Because the use of the Mathematical Programming System - Extended (MPSX) to solve large linear and mixed integer programs requires the preparation of many input data cards, a matrix generator program that produces the MPSX input data from a much more limited set of data may expedite the use of the mixed integer programming optimization technique. The Model Definition and Control Program (MODCOP) is intended to assist a planner in preparing MPSX input data for the Puerto Rico Water Resources Planning Model. The model utilizes a mixed-integer mathematical program to identify a minimum present cost set of water resources projects (diversions, reservoirs, ground-water fields, desalinization plants, water treatment plants, and inter-basin transfers of water) which will meet a set of future water demands and to determine their sequence of construction. While MODCOP was specifically written to generate MPSX input data for the planning model described in this report, the program can be easily modified to reflect changes in the model's mathematical structure.

  17. Inverse modeling of the terrestrial carbon flux in China with flux covariance among inverted regions

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jiang, F.; Chen, J. M.; Ju, W.; Wang, H.

    2011-12-01

    Quantitative understanding of the role of the ocean and terrestrial biosphere in the global carbon cycle, and of their response and feedback to climate change, is required for the future projection of the global climate. China has the largest amount of anthropogenic CO2 emissions, diverse terrestrial ecosystems and an unprecedented rate of urbanization. Thus, information on spatial and temporal distributions of the terrestrial carbon flux in China is of great importance in understanding the global carbon cycle. We developed a nested inversion with a focus on China. Based on the 22 TransCom regions for the globe, we divide China and its neighboring countries into 17 regions, making 39 regions in total for the globe. A Bayesian synthesis inversion is made to estimate the terrestrial carbon flux based on GlobalView CO2 data. In the inversion, GEOS-Chem is used as the transport model to develop the transport matrix. A terrestrial ecosystem model named BEPS is used to produce the prior surface flux to constrain the inversion. However, the sparseness of available observation stations in Asia poses a challenge to the inversion for the 17 small regions. To obtain additional constraint on the inversion, a prior flux covariance matrix is constructed using the BEPS model through analyzing the correlation in the net carbon flux among regions under variable climate conditions. The use of the covariance among different regions in the inversion effectively extends the information content of CO2 observations to more regions. The carbon fluxes over the 39 land and ocean regions are inverted for the period from 2004 to 2009. In order to investigate the impact of introducing the covariance matrix with non-zero off-diagonal values to the inversion, the inverted terrestrial carbon flux over China is evaluated against ChinaFlux eddy-covariance observations after applying an upscaling methodology.
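
    The Bayesian synthesis update itself can be sketched in generic notation (not the abstract's own symbols): observations c = G f + noise, a prior flux f ~ N(f_prior, B), and observation-error covariance R. The toy test shows how an off-diagonal prior covariance lets a single observation constrain an unobserved region, which is the mechanism the abstract exploits:

```python
import numpy as np

def bayesian_inversion(G, c_obs, f_prior, B, R):
    # Standard Bayesian synthesis inversion (generic textbook form):
    #   f_post = f_prior + K (c_obs - G f_prior),
    #   K = B G' (G B G' + R)^{-1}
    K = B @ G.T @ np.linalg.inv(G @ B @ G.T + R)
    f_post = f_prior + K @ (c_obs - G @ f_prior)
    B_post = B - K @ G @ B              # posterior flux covariance
    return f_post, B_post
```

    With a diagonal prior covariance, an observation that only "sees" region 0 leaves region 1 untouched; with strongly correlated priors, the same observation also pulls region 1 toward the data.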

  18. Brief announcement: Hypergraph partitioning for parallel sparse matrix-matrix multiplication

    DOE PAGES

    Ballard, Grey; Druinsky, Alex; Knight, Nicholas; ...

    2015-01-01

    The performance of parallel algorithms for sparse matrix-matrix multiplication is typically determined by the amount of interprocessor communication performed, which in turn depends on the nonzero structure of the input matrices. In this paper, we characterize the communication cost of a sparse matrix-matrix multiplication algorithm in terms of the size of a cut of an associated hypergraph that encodes the computation for a given input nonzero structure. Obtaining an optimal algorithm corresponds to solving a hypergraph partitioning problem. Furthermore, our hypergraph model generalizes several existing models for sparse matrix-vector multiplication, and we can leverage hypergraph partitioners developed for that computation to improve application-specific algorithms for multiplying sparse matrices.
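
    The cut metric underlying such hypergraph models can be sketched for the simpler sparse matrix-vector case that the paper generalizes. The column-net construction below is the standard one; the input data in the test are hypothetical:

```python
def comm_volume(nonzeros, row_part):
    # Column-net hypergraph model for row-parallel sparse y = A @ x:
    # column j is a hyperedge connecting every row i with A[i, j] != 0.
    # The owner of x_j must send it to each other part appearing in that
    # edge, so total communication volume is the "(lambda - 1)" cut of
    # the partition: sum over columns of (parts touched - 1).
    cols = {}
    for i, j in nonzeros:
        cols.setdefault(j, set()).add(row_part[i])
    return sum(len(parts) - 1 for parts in cols.values())
```

    Minimizing this quantity over balanced row assignments is exactly the hypergraph partitioning problem handed to partitioners such as those the paper proposes to reuse.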

  19. Forecasting extinction risk with nonstationary matrix models.

    PubMed

    Gotelli, Nicholas J; Ellison, Aaron M

    2006-02-01

    Matrix population growth models are standard tools for forecasting population change and for managing rare species, but they are less useful for predicting extinction risk in the face of changing environmental conditions. Deterministic models provide point estimates of lambda, the finite rate of increase, as well as measures of matrix sensitivity and elasticity. Stationary matrix models can be used to estimate extinction risk in a variable environment, but they assume that the matrix elements are randomly sampled from a stationary (i.e., non-changing) distribution. Here we outline a method for using nonstationary matrix models to construct realistic forecasts of population fluctuation in changing environments. Our method requires three pieces of data: (1) field estimates of transition matrix elements, (2) experimental data on the demographic responses of populations to altered environmental conditions, and (3) forecasting data on environmental drivers. These three pieces of data are combined to generate a series of sequential transition matrices that emulate a pattern of long-term change in environmental drivers. Realistic estimates of population persistence and extinction risk can be derived from stochastic permutations of such a model. We illustrate the steps of this analysis with data from two populations of Sarracenia purpurea growing in northern New England. Sarracenia purpurea is a perennial carnivorous plant that is potentially at risk of local extinction because of increased nitrogen deposition. Long-term monitoring records or models of environmental change can be used to generate time series of driver variables under different scenarios of changing environments. Both manipulative and natural experiments can be used to construct a linking function that describes how matrix parameters change as a function of the environmental driver. 
This synthetic modeling approach provides quantitative estimates of extinction probability that have an explicit mechanistic basis.
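
    The three ingredients listed above (matrix estimates, an experimentally derived linking function, and a driver forecast) can be sketched as follows. The linking function and all coefficients are hypothetical illustrations, not values from the Sarracenia study:

```python
import numpy as np

rng = np.random.default_rng(0)

def transition_matrix(driver):
    # Hypothetical linking function for a 2-stage (juvenile/adult) model:
    # fecundity declines as the environmental driver (e.g. nitrogen
    # deposition) rises. Coefficients are illustrative only.
    fecundity = max(0.0, 2.0 - 0.5 * driver)
    return np.array([[0.1, fecundity],
                     [0.3, 0.8]])

def extinction_risk(drivers, n0, threshold=1.0, n_reps=500):
    # Fraction of stochastic permutations of the driver time series that
    # push total population below a quasi-extinction threshold.
    extinct = 0
    for _ in range(n_reps):
        n = np.asarray(n0, dtype=float)
        for d in rng.permutation(drivers):
            n = transition_matrix(d) @ n
        extinct += n.sum() < threshold
    return extinct / n_reps
```

    Feeding in driver series generated under different environmental-change scenarios then yields the scenario-specific extinction probabilities the abstract describes.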

  20. Advanced High-Temperature Engine Materials Technology Progresses

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The objective of the Advanced High Temperature Engine Materials Technology Program (HITEMP) is to generate technology for advanced materials and structural analysis that will increase fuel economy, improve reliability, extend life, and reduce operating costs for 21st century civil propulsion systems. The primary focus is on fan and compressor materials (polymer-matrix composites--PMC's), compressor and turbine materials (superalloys, and metal-matrix and intermetallic-matrix composites--MMC's and IMC's) and turbine materials (ceramic-matrix composites--CMC's). These advanced materials are being developed by in-house researchers and on grants and contracts. NASA considers this program to be a focused materials and structures research effort that builds on our base research programs and supports component-development projects. HITEMP is coordinated with the Advanced Subsonic Technology (AST) Program and the Department of Defense/NASA Integrated High-Performance Turbine Engine Technology (IHPTET) Program. Advanced materials and structures technologies from HITEMP may be used in these future applications. Recent technical accomplishments have not only improved the state of the art but have wide-ranging applications to industry. A high-temperature thin-film strain gage was developed to measure both dynamic and static strain up to 1100 C (2000 F). The gage's unique feature is that it is minimally intrusive. This technology, which received a 1995 R&D 100 Award, has been transferred to AlliedSignal Engines, General Electric Company, and Ford Motor Company. Analytical models developed at the NASA Lewis Research Center were used to study Textron Specialty Materials' manufacturing process for titanium-matrix composite rings. Implementation of our recommendations on tooling and processing conditions resulted in the production of defect-free rings.
In the Lincoln Composites/AlliedSignal/Lewis cooperative program, a composite compressor case is being manufactured with a Lewis-developed matrix, VCAP. The compressor case, which will reduce weight by 30 percent and costs by 50 percent, is scheduled to be engine tested in the near future.

  1. Large-N and Bethe Ansatz

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav

    We describe an integrable model, related to the Gaudin magnet, and its relation to the matrix model of Brézin, Itzykson, Parisi and Zuber. The relation is based on the Bethe ansatz for the integrable model and its interpretation in terms of orthogonal polynomials and a saddle-point approximation. The large-N limit of the matrix model corresponds to the thermodynamic limit of the integrable system. In this limit, the (functional) Bethe ansatz coincides with the generating function for correlators of the matrix model.

  2. The QUELCE Method: Using Change Drivers to Estimate Program Costs

    DTIC Science & Technology

    2016-08-01

    QUELCE computes a distribution of program costs based on Monte Carlo analysis of program cost drivers, assessed via analyses of dependency structure... possible scenarios. These include: a dependency structure matrix to understand the interaction of change drivers for a specific project; a... performed by the SEI or by company analysts. From the workshop results, analysts create a dependency structure matrix (DSM) of the change drivers.
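The snippet above is fragmentary, but the core mechanism it names, a Monte Carlo roll-up of uncertain change drivers into a program cost distribution, can be sketched. All driver names, activation probabilities, distributions, and the one-level dependency propagation below are invented for illustration; the actual QUELCE workflow (expert workshops, full DSM scenario analysis) is far richer.

```python
import random

# Hedged sketch of the QUELCE-style idea only: Monte Carlo roll-up of program
# cost from uncertain change drivers.  All numbers here are invented.
random.seed(42)

drivers = {  # (low, high, mode) cost impact in $M, per random.triangular
    "requirements_change": (1.0, 5.0, 2.0),
    "staff_turnover":      (0.5, 3.0, 1.0),
    "tech_obsolescence":   (0.2, 2.0, 0.8),
}
# toy dependency structure: an active driver also activates its dependents
activates = {"requirements_change": ["staff_turnover"]}

def one_scenario():
    active = {d for d in drivers if random.random() < 0.5}
    for d in list(active):             # propagate one level of dependencies
        active.update(activates.get(d, []))
    return sum(random.triangular(*drivers[d]) for d in active)

costs = sorted(one_scenario() for _ in range(10000))
p50, p80 = costs[len(costs) // 2], costs[int(len(costs) * 0.8)]
print(round(p50, 2), round(p80, 2))    # median and 80th-percentile cost
```

The percentiles of the sorted scenario costs are the kind of cost distribution summary the method reports.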

  3. Interactive display system having a matrix optical detector

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard

    2007-01-23

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. An image beam is projected across the inlet face laterally and transversely for display on the outlet face. An optical detector including a matrix of detector elements is optically aligned with the inlet face for detecting a corresponding lateral and transverse position of an inbound light spot on the outlet face.

  4. Comparison of Damage Models for Predicting the Non-Linear Response of Laminates Under Matrix Dominated Loading Conditions

    NASA Technical Reports Server (NTRS)

    Schuecker, Clara; Davila, Carlos G.; Rose, Cheryl A.

    2010-01-01

    Five models for matrix damage in fiber reinforced laminates are evaluated for matrix-dominated loading conditions under plane stress and are compared both qualitatively and quantitatively. The emphasis of this study is on a comparison of the response of embedded plies subjected to a homogeneous stress state. Three of the models are specifically designed for modeling the non-linear response due to distributed matrix cracking under homogeneous loading, and also account for non-linear (shear) behavior prior to the onset of cracking. The remaining two models are localized damage models intended for predicting local failure at stress concentrations. The modeling approaches of distributed vs. localized cracking as well as the different formulations of damage initiation and damage progression are compared and discussed.

  5. Matrix approaches to assess terrestrial nitrogen scheme in CLM4.5

    NASA Astrophysics Data System (ADS)

    Du, Z.

    2017-12-01

    Terrestrial carbon (C) and nitrogen (N) cycles have been commonly represented by a series of balance equations to track their influxes into and effluxes out of individual pools in earth system models (ESMs). This representation matches our understanding of C and N cycle processes well but makes it difficult to track model behaviors. To overcome these challenges, we developed a matrix approach, which reorganizes the series of terrestrial C and N balance equations in the CLM4.5 into two matrix equations based on original representation of C and N cycle processes and mechanisms. The matrix approach would consequently help improve the comparability of models and data, evaluate impacts of additional model components, facilitate benchmark analyses, model intercomparisons, and data-model fusion, and improve model predictive power.
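The matrix form described above can be illustrated with a toy three-pool carbon balance (all pool sizes, turnover rates, and transfer fractions below are invented, not CLM4.5 values): the individual pool balance equations collapse into one equation dX/dt = B u(t) + A K X, which is then stepped forward in time.

```python
# Minimal sketch of the matrix form of pool balance equations.
# X: pool sizes; u: external influx; B: allocation of u to pools;
# K: diagonal turnover rates; A: transfer matrix whose off-diagonal A[i][j]
# is the fraction of pool j's outflux entering pool i, with -1 on the
# diagonal for each pool's own loss term.  Values are illustrative only.
X = [100.0, 50.0, 200.0]          # e.g. litter, microbial, soil C pools
B = [0.7, 0.3, 0.0]               # allocation of input to pools
u = 10.0                          # external influx
K = [0.5, 0.2, 0.01]              # turnover rate of each pool (1/yr)
A = [[-1.0, 0.3, 0.0],
     [0.4, -1.0, 0.05],
     [0.2, 0.2, -1.0]]

dt = 0.1
for _ in range(1000):             # forward-Euler integration over 100 yr
    KX = [K[i] * X[i] for i in range(3)]
    dX = [B[i] * u + sum(A[i][j] * KX[j] for j in range(3)) for i in range(3)]
    X = [X[i] + dt * dX[i] for i in range(3)]
print([round(x, 2) for x in X])
```

Because every flux appears as an entry of B, K, or A, model behaviors (pool residence times, steady states) can be diagnosed directly from the matrices rather than by tracing individual balance equations.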

  6. Robust characterization of small grating boxes using rotating stage Mueller matrix polarimeter

    NASA Astrophysics Data System (ADS)

    Foldyna, M.; De Martino, A.; Licitra, C.; Foucher, J.

    2010-03-01

    In this paper we demonstrate the robustness of Mueller matrix polarimetry used in a multiple-azimuth configuration. We first demonstrate the efficiency of the method for the characterization of small-pitch gratings filling 250 μm wide square boxes. We used a Mueller matrix polarimeter installed directly in the clean room, whose motorized rotating stage allows access to arbitrary conical grating configurations. The projected beam spot size could be reduced to 60x25 μm, but for the measurements reported here this size was 100x100 μm. The optimal values of the parameters of a trapezoidal profile model, acquired for each azimuthal angle separately using a non-linear least-squares minimization algorithm, are shown for a typical grating. Further statistical analysis of the azimuth-dependent dimensional parameters provided realistic estimates of the confidence interval, giving direct information about the accuracy of the results. The mean values and the standard deviations were calculated for 21 different grating boxes, comprising in total 399 measured spectra and fits. The results for all boxes are summarized in a table which compares the optical method to the 3D-AFM. The essential conclusion of our work is that the 3D-AFM values always fall into the confidence intervals provided by the optical method, which means that we have successfully estimated the accuracy of our results without using direct comparison with another, non-optical, method. Moreover, this approach may provide a way to improve the accuracy of grating profile modeling by minimizing the standard deviations evaluated from multiple-azimuth results.

  7. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhary, Kenny; Najm, Habib N.

    One of the most widely-used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation of the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., minimizes the mean square error. In practice, this orthogonal transformation is determined by performing a singular value decomposition (SVD) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE, and it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model of the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
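The Bayesian machinery is beyond a short example, but the deterministic step it builds on, estimating principal components from the sample covariance matrix, can be sketched; in two dimensions the eigendecomposition of the covariance is closed-form. The synthetic data below are invented for illustration.

```python
import math, random

# Sketch of the deterministic PCA/KLE starting point: estimate the sample
# covariance of a 2-D random field and take its leading eigenvector.
random.seed(0)

# synthetic correlated samples: y = 0.8*x + noise
data = []
for _ in range(500):
    x = random.gauss(0, 1.0)
    data.append((x, 0.8 * x + random.gauss(0, 0.3)))

n = len(data)
mx = sum(p[0] for p in data) / n
my = sum(p[1] for p in data) / n
cxx = sum((p[0] - mx) ** 2 for p in data) / (n - 1)
cyy = sum((p[1] - my) ** 2 for p in data) / (n - 1)
cxy = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)

# closed-form eigenvalues of the symmetric 2x2 matrix [[cxx, cxy], [cxy, cyy]]
tr, det = cxx + cyy, cxx * cyy - cxy ** 2
disc = math.sqrt(tr * tr / 4 - det)
lam1, lam2 = tr / 2 + disc, tr / 2 - disc       # lam1 >= lam2
v1 = (cxy, lam1 - cxx)                           # eigenvector for lam1
norm = math.hypot(*v1)
v1 = (v1[0] / norm, v1[1] / norm)
explained = lam1 / (lam1 + lam2)                 # variance captured by PC 1
print(round(explained, 3), tuple(round(c, 3) for c in v1))
```

The paper's point is that v1 computed this way is itself uncertain at small sample sizes, which the matrix Bingham posterior quantifies.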

  8. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE PAGES

    Chowdhary, Kenny; Najm, Habib N.

    2016-04-13

    One of the most widely-used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation of the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., minimizes the mean square error. In practice, this orthogonal transformation is determined by performing a singular value decomposition (SVD) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE, and it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model of the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.

  9. An Open-Access Modeled Passenger Flow Matrix for the Global Air Network in 2010

    PubMed Central

    Huang, Zhuojie; Wu, Xiao; Garcia, Andres J.; Fik, Timothy J.; Tatem, Andrew J.

    2013-01-01

    The expanding global air network provides rapid and wide-reaching connections accelerating both domestic and international travel. To understand human movement patterns on the network and their socioeconomic, environmental and epidemiological implications, information on passenger flow is required. However, comprehensive data on global passenger flow remain difficult and expensive to obtain, prompting researchers to rely on scheduled flight seat capacity data or simple models of flow. This study describes the construction of an open-access modeled passenger flow matrix for all airports with a host city-population of more than 100,000 and within two transfers of air travel from various publicly available air travel datasets. Data on network characteristics, city population, and local area GDP amongst others are utilized as covariates in a spatial interaction framework to predict the air transportation flows between airports. Training datasets based on information from various transportation organizations in the United States, Canada and the European Union were assembled. A log-linear model controlling the random effects on origin, destination and the airport hierarchy was then built to predict passenger flows on the network, and compared to the results produced using previously published models. Validation analyses showed that the model presented here produced improved predictive power and accuracy compared to previously published models, yielding the highest successful prediction rate at the global scale. Based on this model, passenger flows between 1,491 airports on 644,406 unique routes were estimated in the prediction dataset. The airport node characteristics and estimated passenger flows are freely available as part of the Vector-Borne Disease Airline Importation Risk (VBD-Air) project at: www.vbd-air.com/data. PMID:23691194

  10. An open-access modeled passenger flow matrix for the global air network in 2010.

    PubMed

    Huang, Zhuojie; Wu, Xiao; Garcia, Andres J; Fik, Timothy J; Tatem, Andrew J

    2013-01-01

    The expanding global air network provides rapid and wide-reaching connections accelerating both domestic and international travel. To understand human movement patterns on the network and their socioeconomic, environmental and epidemiological implications, information on passenger flow is required. However, comprehensive data on global passenger flow remain difficult and expensive to obtain, prompting researchers to rely on scheduled flight seat capacity data or simple models of flow. This study describes the construction of an open-access modeled passenger flow matrix for all airports with a host city-population of more than 100,000 and within two transfers of air travel from various publicly available air travel datasets. Data on network characteristics, city population, and local area GDP amongst others are utilized as covariates in a spatial interaction framework to predict the air transportation flows between airports. Training datasets based on information from various transportation organizations in the United States, Canada and the European Union were assembled. A log-linear model controlling the random effects on origin, destination and the airport hierarchy was then built to predict passenger flows on the network, and compared to the results produced using previously published models. Validation analyses showed that the model presented here produced improved predictive power and accuracy compared to previously published models, yielding the highest successful prediction rate at the global scale. Based on this model, passenger flows between 1,491 airports on 644,406 unique routes were estimated in the prediction dataset. The airport node characteristics and estimated passenger flows are freely available as part of the Vector-Borne Disease Airline Importation Risk (VBD-Air) project at: www.vbd-air.com/data.
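The spatial-interaction idea in the two records above can be sketched as an ordinary least-squares fit of a log-linear (gravity-style) model. Everything below is invented for illustration (synthetic covariates, known coefficients recovered from noisy data); the paper's model additionally includes random effects for origin, destination, and airport hierarchy.

```python
import random

# Sketch: log(flow) = b0 + b1*log(origin pop) + b2*log(dest pop) + b3*log(dist)
# fitted by solving the normal equations of ordinary least squares.
random.seed(1)

def solve(A, b):                       # Gaussian elimination, partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

true = [2.0, 0.8, 0.7, -1.1]           # intercept, origin, destination, distance
rows, ys = [], []
for _ in range(400):
    pi, pj = random.uniform(11, 16), random.uniform(11, 16)   # log populations
    d = random.uniform(4, 9)                                   # log distance
    y = true[0] + true[1] * pi + true[2] * pj + true[3] * d + random.gauss(0, 0.2)
    rows.append([1.0, pi, pj, d])
    ys.append(y)

# normal equations X'X beta = X'y
XtX = [[sum(r[a] * r[b] for r in rows) for b in range(4)] for a in range(4)]
Xty = [sum(rows[i][a] * ys[i] for i in range(len(ys))) for a in range(4)]
beta = solve(XtX, Xty)
print([round(b, 2) for b in beta])     # should be close to `true`
```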

  11. Evaluating Process Improvement Courses of Action Through Modeling and Simulation

    DTIC Science & Technology

    2017-09-16

    changes to a process is time consuming and has the potential to overlook stochastic effects. By modeling a process as a Numerical Design Structure Matrix...

  12. Space applications of Automation, Robotics and Machine Intelligence Systems (ARAMIS). Volume 2: Space projects overview

    NASA Technical Reports Server (NTRS)

    Miller, R. H.; Minsky, M. L.; Smith, D. B. S.

    1982-01-01

    Applications of automation, robotics, and machine intelligence systems (ARAMIS) to space activities, and their related ground support functions are studied so that informed decisions can be made on which aspects of ARAMIS to develop. The space project breakdowns, which are used to identify tasks ('functional elements'), are described. The study method concentrates on the production of a matrix relating space project tasks to pieces of ARAMIS.

  13. Ordering Unstructured Meshes for Sparse Matrix Computations on Leading Parallel Systems

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Li, Xiaoye; Heber, Gerd; Biswas, Rupak

    2000-01-01

    The ability of computers to solve hitherto intractable problems and simulate complex processes using mathematical models makes them an indispensable part of modern science and engineering. Computer simulations of large-scale realistic applications usually require solving a set of non-linear partial differential equations (PDEs) over a finite region. For example, one thrust area in the DOE Grand Challenge projects is to design future accelerators such as the Spallation Neutron Source (SNS). Our colleagues at SLAC need to model complex RFQ cavities with large aspect ratios. Unstructured grids are currently used to resolve the small features in a large computational domain; dynamic mesh adaptation will be added in the future for additional efficiency. The PDEs for electromagnetics are discretized by the finite element method (FEM), which leads to a generalized eigenvalue problem Kx = λMx, where K and M are the stiffness and mass matrices, and are very sparse. In a typical cavity model, the number of degrees of freedom is about one million. For such large eigenproblems, direct solution techniques quickly reach the memory limits. Instead, the most widely-used methods are Krylov subspace methods, such as Lanczos or Jacobi-Davidson. In all the Krylov-based algorithms, sparse matrix-vector multiplication (SPMV) must be performed repeatedly. Therefore, the efficiency of SPMV usually determines the eigensolver speed. SPMV is also one of the most heavily used kernels in large-scale numerical simulations.
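As a concrete illustration of the kernel the abstract singles out, here is a minimal compressed sparse row (CSR) SPMV in plain Python, plus the repeated-SPMV loop that is the heart of power and Krylov iterations. The 4x4 matrix is invented for illustration; real cavity models have on the order of a million unknowns.

```python
# CSR stores only the nonzeros of the matrix
# [[4, 0, 1, 0],
#  [0, 3, 0, 0],
#  [1, 0, 5, 2],
#  [0, 0, 2, 6]]
vals   = [4.0, 1.0, 3.0, 1.0, 5.0, 2.0, 2.0, 6.0]  # nonzero values, row-major
cols   = [0,   2,   1,   0,   2,   3,   2,   3]    # column index of each value
rowptr = [0, 2, 3, 6, 8]            # row i spans vals[rowptr[i]:rowptr[i+1]]

def spmv(vals, cols, rowptr, x):
    y = []
    for i in range(len(rowptr) - 1):
        y.append(sum(vals[k] * x[cols[k]] for k in range(rowptr[i], rowptr[i + 1])))
    return y

print(spmv(vals, cols, rowptr, [1.0, 2.0, 3.0, 4.0]))  # -> [7.0, 6.0, 24.0, 30.0]

# repeated SPMV is the core of power/Krylov iterations: this power iteration
# approximates the dominant eigenvalue using only the SPMV kernel
v = [1.0, 1.0, 1.0, 1.0]
for _ in range(100):
    w = spmv(vals, cols, rowptr, v)
    nrm = max(abs(c) for c in w)
    v = [c / nrm for c in w]
print(round(nrm, 3))
```

Because the iteration touches the matrix only through `spmv`, the speed of that one kernel bounds the speed of the whole eigensolver, which is exactly why the paper focuses on mesh orderings that improve SPMV locality.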

  14. Path statistics, memory, and coarse-graining of continuous-time random walks on networks

    PubMed Central

    Kion-Crosby, Willow; Morozov, Alexandre V.

    2015-01-01

    Continuous-time random walks (CTRWs) on discrete state spaces, ranging from regular lattices to complex networks, are ubiquitous across physics, chemistry, and biology. Models with coarse-grained states (for example, those employed in studies of molecular kinetics) or spatial disorder can give rise to memory and non-exponential distributions of waiting times and first-passage statistics. However, existing methods for analyzing CTRWs on complex energy landscapes do not address these effects. Here we use statistical mechanics of the nonequilibrium path ensemble to characterize first-passage CTRWs on networks with arbitrary connectivity, energy landscape, and waiting time distributions. Our approach can be applied to calculating higher moments (beyond the mean) of path length, time, and action, as well as statistics of any conservative or non-conservative force along a path. For homogeneous networks, we derive exact relations between length and time moments, quantifying the validity of approximating a continuous-time process with its discrete-time projection. For more general models, we obtain recursion relations, reminiscent of transfer matrix and exact enumeration techniques, to efficiently calculate path statistics numerically. We have implemented our algorithm in PathMAN (Path Matrix Algorithm for Networks), a Python script that users can apply to their model of choice. We demonstrate the algorithm on a few representative examples which underscore the importance of non-exponential distributions, memory, and coarse-graining in CTRWs. PMID:26646868
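The first-passage machinery above reduces, in its simplest mean-value form, to a linear system: the mean first-passage time (MFPT) from node i to a target satisfies t_i = tau_i + sum_j P[i][j] t_j for transient i, with t_target = 0. The toy network below (an unbiased walk on the path 0-1-2-3 with unit mean waiting times) is invented for illustration and is not PathMAN itself, which handles higher moments and general waiting-time distributions.

```python
# MFPT to node 3 on a 4-node path: solve (I - P_restricted) t = tau.
P = [[0.0, 1.0, 0.0, 0.0],      # node 0 reflects to node 1
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.0, 1.0]]      # target node 3 is absorbing
tau = [1.0, 1.0, 1.0]           # mean waiting times at transient nodes 0,1,2

def solve(A, b):                # Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# (I - P) restricted to the transient nodes {0, 1, 2}
A = [[(1.0 if i == j else 0.0) - P[i][j] for j in range(3)] for i in range(3)]
t = solve(A, tau)
print([round(v, 3) for v in t])   # -> [9.0, 8.0, 5.0]
```

For this path graph the known closed form is t_i = 9 - i^2, which the solve reproduces.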

  15. Atomic approximation to the projection on electronic states in the Douglas-Kroll-Hess approach to the relativistic Kohn-Sham method.

    PubMed

    Matveev, Alexei V; Rösch, Notker

    2008-06-28

    We suggest an approximate relativistic model for economical all-electron calculations on molecular systems that exploits an atomic ansatz for the relativistic projection transformation. With such a choice, the projection transformation matrix is by definition both transferable and independent of the geometry. The formulation is flexible with regard to the level at which the projection transformation is approximated; we employ the free-particle Foldy-Wouthuysen and the second-order Douglas-Kroll-Hess variants. The (atomic) infinite-order decoupling scheme shows little effect on structural parameters in scalar-relativistic calculations; also, the use of a screened nuclear potential in the definition of the projection transformation shows hardly any effect in the context of the present work. Applications to structural and energetic parameters of various systems (the diatomics AuH, AuCl, and Au2, two structural isomers of Ir4, and the uranyl dication UO2(2+) solvated by 3-6 water ligands) show that the atomic approximation to the conventional second-order Douglas-Kroll-Hess projection (ADKH) transformation yields highly accurate results at substantial computational savings, in particular, when calculating energy derivatives of larger systems. The size-dependence of the intrinsic error of the ADKH method in extended systems of heavy elements is analyzed for the atomization energies of Pdn clusters (n

  16. Population dynamics of the Concho water snake in rivers and reservoirs

    USGS Publications Warehouse

    Whiting, M.J.; Dixon, J.R.; Greene, B.D.; Mueller, J.M.; Thornton, O.W.; Hatfield, J.S.; Nichols, J.D.; Hines, J.E.

    2008-01-01

    The Concho Water Snake (Nerodia harteri paucimaculata) is confined to the Concho–Colorado River valley of central Texas, thereby occupying one of the smallest geographic ranges of any North American snake. In 1986, N. h. paucimaculata was designated as a federally threatened species, in large part because of reservoir projects that were perceived to adversely affect the amount of habitat available to the snake. During a ten-year period (1987–1996), we conducted capture–recapture field studies to assess dynamics of five subpopulations of snakes in both natural (river) and man-made (reservoir) habitats. Because of differential sampling of subpopulations, we present separate results for all five subpopulations combined (including large reservoirs) and three of the five subpopulations (excluding large reservoirs). We used multistate capture–recapture models to deal with stochastic transitions between pre-reproductive and reproductive size classes and to allow for the possibility of different survival and capture probabilities for the two classes. We also estimated both the finite rate of increase (λ) for a deterministic, stage-based, female-only matrix model using the average litter size, and the average rate of adult population change, λ̂, which describes changes in numbers of adult snakes, using a direct capture–recapture approach to estimation. Average annual adult survival was about 0.23 and similar for males and females. Average annual survival for subadults was about 0.14. The parameter estimates from the stage-based projection matrix analysis all yielded asymptotic values of λ < 1, suggesting populations that are not viable. However, the direct estimates of average adult λ for the three subpopulations excluding major reservoirs were λ̂ = 1.26 (SE = 0.18) and λ̂ = 0.99 (SE = 0.79), based on two different models. Thus, the direct estimation approach did not provide strong evidence of population declines of the riverine subpopulations, but the estimates are characterized by substantial uncertainty.
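The stage-based λ calculation in this record can be sketched with a two-stage female-only projection matrix A = [[0, f], [s_sub, s_ad]]. The survival rates below echo the estimates quoted above (about 0.14 subadult, 0.23 adult), but the fecundity value is invented for illustration; for a 2x2 matrix the dominant eigenvalue follows in closed form from the trace and determinant.

```python
import math

# Two-stage projection matrix A = [[0, f], [s_sub, s_ad]]:
# juveniles are produced by adults (f), survive to adulthood (s_sub),
# and adults survive year to year (s_ad).
f, s_sub, s_ad = 4.0, 0.14, 0.23     # f is illustrative, survivals from text
tr = 0.0 + s_ad                      # trace of A
det = 0.0 * s_ad - f * s_sub         # determinant of A
lam = tr / 2 + math.sqrt(tr * tr / 4 - det)   # dominant eigenvalue

# stable stage distribution: right eigenvector (f, lam), normalized
w = [f / (f + lam), lam / (f + lam)]
print(round(lam, 3), [round(x, 3) for x in w])   # lam < 1: projected decline
```

With these rates λ is about 0.87, matching the record's qualitative finding that the asymptotic matrix analysis projects decline even though direct λ̂ estimates were more equivocal.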

  17. Frequency-dependent population dynamics: effect of sex ratio and mating system on the elasticity of population growth rate.

    PubMed

    Haridas, C V; Eager, Eric Alan; Rebarber, Richard; Tenhumberg, Brigitte

    2014-11-01

    When vital rates depend on population structure (e.g., the relative frequencies of males and females), an important question is how the long-term population growth rate λ responds to changes in those rates. For instance, the availability of mates may depend on the sex ratio of the population, and hence reproductive rates can be frequency-dependent. In such cases, a change in any vital rate alters the population structure, which, in turn, affects the frequency-dependent rates. We show that the elasticity of λ to a rate is the sum of (i) the effect of the linear change in the rate and (ii) the effect of nonlinear changes in frequency-dependent rates. The first component is always positive and is the classical elasticity in density-independent models, obtained directly from the population projection matrix. The second component can be positive or negative and is absent in density-independent models. We explicitly express each component of the elasticity as a function of vital rates, eigenvalues and eigenvectors of the population projection matrix. We apply this result to a two-sex model, where male and female fertilities depend on the adult sex ratio α (the ratio of females to males) and the mating system (e.g., polygyny) through a harmonic mating function. We show that the nonlinear component of elasticity to a survival rate is negligible only when the average number of mates (per male) is close to α. In a strictly monogamous species, elasticity to female survival is larger than elasticity to male survival when α < 1 (fewer females). In a polygynous species, elasticity to female survival can be larger than that of male survival even when the sex ratio is female-biased. Our results show how demography and mating system together determine the response to selection on sex-specific vital rates. Copyright © 2014 Elsevier Inc. All rights reserved.
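The first, always-positive component named above is the classical density-independent elasticity, e_ij = (a_ij / λ) v_i w_j / ⟨v, w⟩, computed from the projection matrix's dominant eigenvalue and its left (v, reproductive values) and right (w, stable stage) eigenvectors. The 2x2 vital rates below are invented for illustration; the frequency-dependent correction the paper derives would be added on top of this.

```python
import math

# Classical elasticities for A = [[0, f], [s, sa]]; for this 2x2 form the
# dominant eigenvalue and both eigenvectors are available in closed form:
# lambda solves lam^2 - sa*lam - f*s = 0, w ~ (f, lam), v ~ (s, lam).
f, s, sa = 1.5, 0.5, 0.8             # illustrative vital rates
A = [[0.0, f], [s, sa]]
lam = sa / 2 + math.sqrt(sa * sa / 4 + f * s)
w = [f, lam]                         # right eigenvector: stable stage structure
v = [s, lam]                         # left eigenvector: reproductive values
vw = v[0] * w[0] + v[1] * w[1]

E = [[A[i][j] * v[i] * w[j] / (lam * vw) for j in range(2)] for i in range(2)]
print([[round(e, 3) for e in row] for row in E])
```

A standard check, used in the test below, is that the elasticities of a projection matrix sum to one; note also that the elasticities of f and s coincide, a classical loop-analysis identity.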

  18. Cobimaximal lepton mixing from soft symmetry breaking

    NASA Astrophysics Data System (ADS)

    Grimus, W.; Lavoura, L.

    2017-11-01

    Cobimaximal lepton mixing, i.e., θ23 = 45° and δ = ±90° in the lepton mixing matrix V, arises as a consequence of SV = V*P, where S is the permutation matrix that interchanges the second and third rows of V and P is a diagonal matrix of phase factors. We prove that any such V may be written in the form V = URP, where U is any predefined unitary matrix satisfying SU = U*, R is an orthogonal, i.e. real, matrix, and P is a diagonal matrix satisfying P² = P. Using this theorem, we demonstrate the equivalence of two ways of constructing models for cobimaximal mixing: one that uses a standard CP symmetry and another that uses a CP symmetry including μ-τ interchange. We also present two simple seesaw models to illustrate this equivalence; these models have, in addition to the CP symmetry, flavour symmetries broken softly by the Majorana mass terms of the right-handed neutrino singlets. Since each of the two models needs four scalar doublets, we investigate how to accommodate the Standard Model Higgs particle in them.
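The construction can be checked numerically: pick any unitary U with SU = U* (below, one invented example whose second and third rows are complex conjugates) and any real orthogonal R, and form V = UR. Then SV = (SU)R = U*R = V*, so the second and third rows of V have equal magnitudes entry by entry, which is exactly the cobimaximal pattern θ23 = 45°, δ = ±90°. The specific matrices are illustrative, not from the paper.

```python
import math

s = 1 / math.sqrt(2)
U = [[1, 0, 0],
     [0, s, -1j * s],
     [0, s,  1j * s]]     # rows 2 and 3 are complex conjugates, so SU = U*

th = 0.6                  # arbitrary rotation angle for the real factor R
c, sn = math.cos(th), math.sin(th)
R = [[c, sn, 0], [-sn, c, 0], [0, 0, 1]]   # real orthogonal

# V = U R (3x3 complex matrix product)
V = [[sum(U[i][k] * R[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]

row_mu  = [abs(V[1][j]) for j in range(3)]
row_tau = [abs(V[2][j]) for j in range(3)]
print([round(a, 3) for a in row_mu] == [round(a, 3) for a in row_tau])  # True
```

Equal |V_μi| and |V_τi| for all columns i is the μ-τ reflection property that forces maximal atmospheric mixing and maximal CP phase.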

  19. Tuning stochastic matrix models with hydrologic data to predict the population dynamics of a riverine fish

    USGS Publications Warehouse

    Sakaris, P.C.; Irwin, E.R.

    2010-01-01

    We developed stochastic matrix models to evaluate the effects of hydrologic alteration and variable mortality on the population dynamics of a lotic fish in a regulated river system. Models were applied to a representative lotic fish species, the flathead catfish (Pylodictis olivaris), for which two populations were examined: a native population from a regulated reach of the Coosa River (Alabama, USA) and an introduced population from an unregulated section of the Ocmulgee River (Georgia, USA). Size-classified matrix models were constructed for both populations, and residuals from catch-curve regressions were used as indices of year class strength (i.e., recruitment). A multiple regression model indicated that recruitment of flathead catfish in the Coosa River was positively related to the frequency of spring pulses between 283 and 566 m3/s. For the Ocmulgee River population, multiple regression models indicated that year class strength was negatively related to mean March discharge and positively related to June low flow. When the Coosa population was modeled to experience five consecutive years of favorable hydrologic conditions during a 50-year projection period, it exhibited a substantial spike in size and increased at an overall 0.2% annual rate. When modeled to experience five years of unfavorable hydrologic conditions, the Coosa population initially exhibited a decrease in size but later stabilized and increased at a 0.4% annual rate following the decline. When the Ocmulgee River population was modeled to experience five years of favorable conditions, it exhibited a substantial spike in size and increased at an overall 0.4% annual rate. After the Ocmulgee population experienced five years of unfavorable conditions, a sharp decline in population size was predicted. However, the population quickly recovered, with population size increasing at a 0.3% annual rate following the decline. In general, stochastic population growth in the Ocmulgee River was more erratic and variable than population growth in the Coosa River. We encourage ecologists to develop similar models for other lotic species, particularly in regulated river systems. Successful management of fish populations in regulated systems requires that we are able to predict how hydrology affects recruitment and will ultimately influence the population dynamics of fishes. © 2010 by the Ecological Society of America.
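The hydrology-tuned stochastic projection can be sketched as follows. All parameters are invented, not the flathead catfish estimates: each year a hydrologic draw scales fecundity by a recruitment multiplier, the stage vector is projected forward, and the stochastic growth rate is the average log growth of total population size.

```python
import math, random

# Two-stage stochastic projection: juveniles and adults, with fecundity
# scaled each year by a hydrology-driven recruitment multiplier.
random.seed(7)
s, sa, f = 0.3, 0.7, 1.0            # juvenile survival, adult survival, fecundity
n = [100.0, 20.0]                   # initial juveniles, adults
logs = []
for _ in range(500):
    good = random.random() < 0.3    # favorable-flow year with probability 0.3
    mult = 1.8 if good else 0.7     # year-class-strength multiplier
    new = [mult * f * n[1], s * n[0] + sa * n[1]]
    logs.append(math.log(sum(new) / sum(n)))
    n = new

lam_s = math.exp(sum(logs) / len(logs))   # stochastic growth rate
print(round(lam_s, 3))
```

Re-running this with multipliers estimated from flow records (the catch-curve residual regressions in the paper) is what lets such models forecast population response to specific hydrologic regimes.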

  20. A finite element code for modelling tracer transport in a non-isothermal two-phase flow system for CO2 geological storage characterization

    NASA Astrophysics Data System (ADS)

    Tong, F.; Niemi, A. P.; Yang, Z.; Fagerlund, F.; Licha, T.; Sauter, M.

    2011-12-01

    This paper presents a new finite element method (FEM) code for modeling tracer transport in a non-isothermal two-phase flow system. The main intended application is simulation of the movement of so-called novel tracers for the purpose of characterization of geologically stored CO2 and its phase partitioning and migration in deep saline formations. The governing equations are based on the conservation of mass and energy. Among the phenomena accounted for are liquid-phase flow, gas flow, heat transport and the movement of the novel tracers. The movement of tracers includes diffusion and the advection associated with the gas and liquid flow. The temperature, gas pressure, suction, concentration of tracer in the liquid phase and concentration of tracer in the gas phase are chosen as the five primary variables. Parameters such as the density, viscosity and thermal expansion coefficient are expressed in terms of the primary variables. The governing equations are discretized in space using the Galerkin finite element formulation, and in time by a one-dimensional finite difference scheme. This leads to an ill-conditioned FEM equation that has many small entries along the diagonal of the non-symmetric coefficient matrix. In order to deal with this non-symmetric, ill-conditioned matrix equation, special techniques are introduced. Firstly, only the nonzero elements of the matrix are stored. Secondly, direct solution of the whole large matrix is avoided. Thirdly, a strategy is used to maintain a diversity of solution methods in the calculation process. Additionally, an efficient adaptive mesh technique is included in the code in order to track the wetting front. The code has been validated against several classical analytical solutions, and will be applied to simulating the CO2 injection experiment to be carried out at the Heletz site, Israel, as part of the EU FP7 project MUSTANG.

  1. Analysis of Interferometric Synthetic Aperture Radar Phase Data at Brady Hot Springs, Nevada, USA Using Prior Information

    NASA Astrophysics Data System (ADS)

    Reinisch, E. C.; Ali, S. T.; Cardiff, M. A.; Morency, C.; Kreemer, C.; Feigl, K. L.; Team, P.

    2016-12-01

    Time-dependent deformation has been observed at Brady Hot Springs using interferometric synthetic aperture radar (InSAR) [Ali et al. 2016, http://dx.doi.org/10.1016/j.geothermics.2016.01.008]. Our goal is to evaluate multiple competing hypotheses to explain the observed deformation at Brady. To do so requires statistical tests that account for uncertainty. Graph theory is useful for such an analysis of InSAR data [Reinisch, et al. 2016, http://dx.doi.org/10.1007/s00190-016-0934-5]. In particular, the normalized edge Laplacian matrix calculated from the edge-vertex incidence matrix of the graph of the pair-wise data set represents its correlation and leads to a full data covariance matrix in the weighted least squares problem. This formulation also leads to the covariance matrix of the epoch-wise measurements, representing their relative uncertainties. While the formulation in terms of incidence graphs applies to any quantity derived from pair-wise differences, the modulo-2π ambiguity of wrapped phase renders the problem non-linear. The conventional practice is to unwrap InSAR phase before modeling, which can introduce mistakes without increasing the corresponding measurement uncertainty. To address this issue, we are applying Bayesian inference. To build the likelihood, we use three different observables: (a) wrapped phase [e.g., Feigl and Thurber 2009, http://dx.doi.org/10.1111/j.1365-246X.2008.03881.x]; (b) range gradients, as defined by Ali and Feigl [2012, http://dx.doi.org/10.1029/2012GC004112]; and (c) unwrapped phase, i.e. range change in mm, which we validate using GPS data. We apply our method to InSAR data taken over Brady Hot Springs geothermal field in Nevada as part of a project entitled "Poroelastic Tomography by Adjoint Inverse Modeling of Data from Seismology, Geodesy, and Hydrology" (PoroTomo) [ http://geoscience.wisc.edu/feigl/porotomo].
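The epoch-wise least-squares step described above can be sketched on synthetic numbers (this deliberately ignores the modulo-2π wrapped-phase ambiguity that motivates the paper's Bayesian treatment): each interferometric pair measures a difference d = m_j - m_i of epoch-wise motion, stacking the pairs gives an edge-vertex incidence matrix G, and fixing the first epoch at zero makes the least-squares problem G m = d solvable.

```python
# Synthetic, exactly consistent pairwise measurements (i, j, d = m_j - m_i)
pairs = [(0, 1, 3.0), (1, 2, 2.0), (0, 2, 5.0), (2, 3, 1.0), (1, 3, 3.0)]
n_epochs = 4

def solve(A, b):                       # Gaussian elimination, partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# incidence matrix over the free epochs 1..3 (epoch 0 pinned to zero)
G, d = [], []
for i, j, dij in pairs:
    row = [0.0] * (n_epochs - 1)
    if i > 0:
        row[i - 1] = -1.0
    if j > 0:
        row[j - 1] = 1.0
    G.append(row)
    d.append(dij)

# normal equations G'G m = G'd; G'G is the (reduced) graph Laplacian
GtG = [[sum(g[a] * g[b] for g in G) for b in range(3)] for a in range(3)]
Gtd = [sum(G[k][a] * d[k] for k in range(len(d))) for a in range(3)]
m = [0.0] + solve(GtG, Gtd)
print([round(v, 3) for v in m])   # -> [0.0, 3.0, 5.0, 6.0]
```

G'G here is exactly the graph Laplacian of the pair network, which is why graph connectivity (and the edge Laplacian mentioned in the record) governs whether and how well the epoch-wise motions are determined.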

  2. Yielding physically-interpretable emulators - A Sparse PCA approach

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.

    2015-12-01

    Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common way to approximate high-fidelity process-based models with lower-order dynamic emulators. With POD, dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model to project the entire set of input and state variables of that model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally complex and can hardly be given a physically meaningful interpretation, since each basis function is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less of the snapshot variance, the presence of only a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insight into the main process dynamics.
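    The dense-versus-sparse contrast can be sketched with scikit-learn's PCA and SparsePCA on synthetic two-driver 'snapshots'. The data, block loadings, and penalty `alpha=1.0` below are illustrative choices, not the DYRESM-CAEDYM setup.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
# Synthetic 'snapshots': two independent latent drivers, each loading on
# its own block of three state variables (a stand-in for model outputs).
t = rng.normal(size=(200, 2))
X = np.hstack([np.outer(t[:, 0], [1.0, 0.9, 0.8]),
               np.outer(t[:, 1], [1.0, 0.7, 0.5])])
X += 0.05 * rng.normal(size=X.shape)

pca = PCA(n_components=2).fit(X)
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)

# A dense POD/PCA basis loads on every variable; the sparse basis zeroes
# out variables unrelated to each mode, which aids interpretation.
dense_nonzeros = np.count_nonzero(pca.components_)
sparse_zeros = int((spca.components_ == 0).sum())
```

    The point of the comparison is that `spca.components_` contains exact zeros, so each retained basis function can be read as involving only a subset of the state variables.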

  3. Mapping past, present, and future climatic suitability for invasive Aedes aegypti and Aedes albopictus in the United States: a process-based modeling approach using CMIP5 downscaled climate scenarios

    NASA Astrophysics Data System (ADS)

    Donnelly, M. A. P.; Marcantonio, M.; Melton, F. S.; Barker, C. M.

    2016-12-01

    The ongoing spread of the mosquitoes, Aedes aegypti and Aedes albopictus, in the continental United States leaves new areas at risk for local transmission of dengue, chikungunya, and Zika viruses. All three viruses have caused major disease outbreaks in the Americas with infected travelers returning regularly to the U.S. The expanding range of these mosquitoes raises questions about whether recent spread has been enabled by climate change or other anthropogenic influences. In this analysis, we used downscaled climate scenarios from the NASA Earth Exchange Global Daily Downscaled Projections (NEX GDDP) dataset to model Ae. aegypti and Ae. albopictus population growth rates across the United States. We used a stage-structured matrix population model to understand past and present climatic suitability for these vectors, and to project future suitability under CMIP5 climate change scenarios. Our results indicate that much of the southern U.S. is suitable for both Ae. aegypti and Ae. albopictus year-round. In addition, a large proportion of the U.S. is seasonally suitable for mosquito population growth, creating the potential for periodic incursions into new areas. Changes in climatic suitability in recent decades for Ae. aegypti and Ae. albopictus have occurred already in many regions of the U.S., and model projections of future climate suggest that climate change will continue to reshape the range of Ae. aegypti and Ae. albopictus in the U.S., and potentially the risk of the viruses they transmit.
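    The stage-structured projection step rests on the dominant eigenvalue of the population matrix, which gives the asymptotic growth rate. A minimal sketch with a made-up three-stage matrix follows; the stages and rates are hypothetical, not the paper's fitted, climate-driven values.

```python
import numpy as np

# Hypothetical 3-stage mosquito projection matrix (egg, larva, adult):
# stage transitions on the subdiagonal, adult fecundity in the top row.
A = np.array([[0.0, 0.0, 50.0],   # eggs laid per adult per time step
              [0.4, 0.0,  0.0],   # egg -> larva survival
              [0.0, 0.2,  0.3]])  # larva -> adult survival; adult survival

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam = eigvals[k].real             # asymptotic growth rate (Perron root)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # stable stage distribution
```

    A location/time is "climatically suitable" in this framing when the parameterized matrix yields `lam > 1`, i.e. the population can grow.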

  4. Mapping Past, Present, and Future Climatic Suitability for Invasive Aedes Aegypti and Aedes Albopictus in the United States: A Process-Based Modeling Approach Using CMIP5 Downscaled Climate Scenarios

    NASA Technical Reports Server (NTRS)

    Donnelly, Marisa Anne Pella; Marcantonio, Matteo; Melton, Forrest S.; Barker, Christopher M.

    2016-01-01

    The ongoing spread of the mosquitoes, Aedes aegypti and Aedes albopictus, in the continental United States leaves new areas at risk for local transmission of dengue, chikungunya, and Zika viruses. All three viruses have caused major disease outbreaks in the Americas with infected travelers returning regularly to the U.S. The expanding range of these mosquitoes raises questions about whether recent spread has been enabled by climate change or other anthropogenic influences. In this analysis, we used downscaled climate scenarios from the NASA Earth Exchange Global Daily Downscaled Projections (NEX GDDP) dataset to model Ae. aegypti and Ae. albopictus population growth rates across the United States. We used a stage-structured matrix population model to understand past and present climatic suitability for these vectors, and to project future suitability under CMIP5 climate change scenarios. Our results indicate that much of the southern U.S. is suitable for both Ae. aegypti and Ae. albopictus year-round. In addition, a large proportion of the U.S. is seasonally suitable for mosquito population growth, creating the potential for periodic incursions into new areas. Changes in climatic suitability in recent decades for Ae. aegypti and Ae. albopictus have occurred already in many regions of the U.S., and model projections of future climate suggest that climate change will continue to reshape the range of Ae. aegypti and Ae. albopictus in the U.S., and potentially the risk of the viruses they transmit.

  5. Ranking of small scale proposals for water system repair using the Rapid Impact Assessment Matrix (RIAM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shakib-Manesh, T.E.; Hirvonen, K.O.; Jalava, K.J.

    2014-11-15

    Environmental impacts of small scale projects are often assessed poorly, or not assessed at all. This paper examines the usability of the Rapid Impact Assessment Matrix (RIAM) as a tool to prioritize project proposals for small scale water restoration projects in relation to the proposals' potential to improve the environment. The RIAM scoring system was used to assess and rank the proposals based on their environmental impacts, the costs of the projects to repair the harmful impacts, and the size of the human population living around the sites. A four-member assessment group (the expert panel) gave the RIAM scores to the proposals. The assumed impacts of the studied projects on the Eastern Finland water systems were divided into ecological and social impacts. The more detailed assessment categories of the ecological impacts in this study were impacts on landscape, natural state, and limnology. The social impact categories were impacts on recreational use of the area, fishing, industry, population, and economy. These impacts were scored according to their geographical and social significance, their magnitude of change, their character, permanence, reversibility, and cumulativeness. The RIAM method proved to be an appropriate and recommendable method for the small-scale assessment and prioritization of project proposals. If the assessments are well documented, RIAM can be a method for easy assessment and comparison of various kinds of projects. In the studied project proposals there were no big surprises in the results: the best ranks were received by the projects that were assumed to return watersheds toward their original state.
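    The RIAM scoring described above follows the standard formulation ES = (A1 × A2) × (B1 + B2 + B3), where the A-group scores importance and magnitude and the B-group scores permanence, reversibility, and cumulativeness. A minimal sketch, with component scores chosen purely for illustration:

```python
# Standard RIAM environmental score: the A-group criteria multiply
# (A1 = importance of condition, A2 = magnitude of change) and the
# B-group criteria add (B1 = permanence, B2 = reversibility,
# B3 = cumulativeness); ES = (A1 * A2) * (B1 + B2 + B3).
def riam_es(a1, a2, b1, b2, b3):
    return (a1 * a2) * (b1 + b2 + b3)

# Hypothetical proposal: regional importance (2), positive improvement
# (+2), permanent (3), irreversible (3), cumulative (3).
score = riam_es(2, 2, 3, 3, 3)   # 36

# Ranking a set of proposals then reduces to sorting by ES.
proposals = {"dredging": riam_es(2, 2, 3, 3, 3),
             "signage": riam_es(1, 1, 1, 1, 1)}
ranked = sorted(proposals, key=proposals.get, reverse=True)
```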

  6. Study of Interpolated Timing Recovery Phase-Locked Loop with Linearly Constrained Adaptive Prefilter for Higher-Density Optical Disc

    NASA Astrophysics Data System (ADS)

    Kajiwara, Yoshiyuki; Shiraishi, Junya; Kobayashi, Shoei; Yamagami, Tamotsu

    2009-03-01

    A digital phase-locked loop (PLL) with a linearly constrained adaptive filter (LCAF) has been studied for higher-linear-density optical discs. LCAF has been implemented before an interpolated timing recovery (ITR) PLL unit in order to improve the quality of phase error calculation by using an adaptively equalized partial response (PR) signal. Coefficient update of an asynchronous sampled adaptive FIR filter with a least-mean-square (LMS) algorithm has been constrained by a projection matrix in order to suppress the phase shift of the tap coefficients of the adaptive filter. We have developed projection matrices that are suitable for Blu-ray disc (BD) drive systems by numerical simulation. Results have shown the properties of the projection matrices. Then, we have designed the read channel system of the ITR PLL with an LCAF model on the FPGA board for experiments. Results have shown that the LCAF improves the tilt margins of 30 gigabytes (GB) recordable BD (BD-R) and 33 GB BD read-only memory (BD-ROM) with a sufficient LMS adaptation stability.
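    The projection-constrained coefficient update can be sketched with a Frost-type constrained LMS step, in which each gradient step is projected back onto the constraint set so the linear constraint is preserved exactly. The toy constraint (tying the outer taps) and the signals below are illustrative, not the BD channel design.

```python
import numpy as np

# Frost-type linearly constrained LMS: maintain C w = g at every step by
# projecting the update with P = I - C^T (C C^T)^-1 C and offset f.
rng = np.random.default_rng(1)
n_taps, mu = 5, 0.01
C = np.array([[1.0, 0.0, 0.0, 0.0, -1.0]])   # toy constraint: w[0] == w[4]
g = np.array([0.0])
CCt_inv = np.linalg.inv(C @ C.T)
P = np.eye(n_taps) - C.T @ CCt_inv @ C       # projection onto null(C)
f = C.T @ CCt_inv @ g                        # particular solution of C w = g

w = f.copy()
target = np.array([0.2, 0.3, 0.5, 0.3, 0.2])  # desired (symmetric) response
for _ in range(500):
    x = rng.normal(size=n_taps)               # input vector in the delay line
    d = x @ target
    e = d - w @ x
    w = P @ (w + mu * e * x) + f              # constrained LMS step
```

    Because the step is projected before the offset is re-added, the constraint holds to machine precision throughout adaptation, which is the mechanism the LCAF uses to suppress tap-phase drift.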

  7. On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models

    NASA Astrophysics Data System (ADS)

    Khorunzhiy, O.

    2008-08-01

    Regarding the adjacency matrices of n-vertex graphs and the related graph Laplacians, we introduce two families of discrete matrix models, both constructed with the help of the Erdős-Rényi ensemble of random graphs. The corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered as discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of the logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.

  8. TH-EF-207A-03: Photon Counting Implementation Challenges Using An Electron Multiplying Charged-Coupled Device Based Micro-CT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podgorsak, A; Bednarek, D; Rudin, S

    2016-06-15

    Purpose: To successfully implement and operate a photon counting scheme on an electron multiplying charged-coupled device (EMCCD) based micro-CT system. Methods: We built an EMCCD based micro-CT system and implemented a photon counting scheme. EMCCD detectors use avalanche transfer registries to multiply the input signal far above the readout noise floor. Due to intrinsic differences in the pixel array, using a global threshold for photon counting is not optimal. To address this shortcoming, we generated a threshold array based on sixty dark fields (no x-ray exposure). We calculated an average matrix and a variance matrix of the dark field sequence. The average matrix was used for the offset correction while the variance matrix was used to set individual pixel thresholds for the photon counting scheme. Three hundred photon counting frames were added for each projection and 360 projections were acquired for each object. The system was used to scan various objects followed by reconstruction using an FDK algorithm. Results: Examination of the projection images and reconstructed slices of the objects indicated clear interior detail free of beam hardening artifacts. This suggests successful implementation of the photon counting scheme on our EMCCD based micro-CT system. Conclusion: This work indicates that it is possible to implement and operate a photon counting scheme on an EMCCD based micro-CT system, suggesting that these devices might be able to operate at very low x-ray exposures in a photon counting mode. Such devices could have future implications in clinical CT protocols. NIH Grant R01EB002873; Toshiba Medical Systems Corp.
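    The dark-field calibration can be sketched as per-pixel offset and threshold arrays built from the mean and variance of the dark frames. The frame size, the k-sigma threshold rule, and the simulated event below are assumptions for illustration, not the authors' detector parameters.

```python
import numpy as np

# Per-pixel calibration from dark fields: offset = mean over frames,
# threshold = k standard deviations of that pixel's dark noise.
rng = np.random.default_rng(2)
ny, nx, n_dark, k = 8, 8, 60, 5.0

dark = rng.normal(loc=100.0, scale=2.0, size=(n_dark, ny, nx))
offset = dark.mean(axis=0)                 # per-pixel offset correction
thresh = k * np.sqrt(dark.var(axis=0))     # per-pixel counting threshold

# One exposure frame with a single simulated photon event at (3, 4).
frame = rng.normal(loc=100.0, scale=2.0, size=(ny, nx))
frame[3, 4] += 50.0
counts = (frame - offset) > thresh         # boolean photon-count map
```

    Summing such boolean maps over many frames per projection gives the photon-counted projection image, with each pixel judged against its own noise statistics rather than a global threshold.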

  9. Three-dimensional finite element modeling of pericellular matrix and cell mechanics in the nucleus pulposus of the intervertebral disk based on in situ morphology.

    PubMed

    Cao, Li; Guilak, Farshid; Setton, Lori A

    2011-02-01

    Nucleus pulposus (NP) cells of the intervertebral disk (IVD) have unique morphological characteristics and biologic responses to mechanical stimuli that may regulate maintenance and health of the IVD. NP cells reside as single cells, or as paired or multiple cells, in a contiguous pericellular matrix (PCM), whose structure and properties may significantly influence cell and extracellular matrix mechanics. In this study, a computational model was developed to predict the stress-strain, fluid pressure and flow fields for cells and their surrounding PCM in the NP using three-dimensional (3D) finite element models based on the in situ morphology of cell-PCM regions of the mature rat NP, measured using confocal microscopy. Three-dimensional geometries of the extracellular matrix and representative cell-matrix units were used to construct 3D finite element models of the structures as isotropic and biphasic materials. In response to compressive strain of the extracellular matrix, NP cells and PCM regions were predicted to experience volumetric strains that were 1.9-3.7 and 1.4-2.1 times greater than the extracellular matrix, respectively. Volumetric and deviatoric strain concentrations were generally found at the cell/PCM interface, while von Mises stress concentrations were associated with the PCM/extracellular matrix interface. Cell-matrix units containing greater cell numbers were associated with higher peak cell strains and lower rates of fluid pressurization upon loading. These studies provide new model predictions for micromechanics of NP cells that can contribute to an understanding of mechanotransduction in the IVD and its changes with aging and degeneration.

  10. Numerical simulation of elasto-plastic deformation of composites: evolution of stress microfields and implications for homogenization models

    NASA Astrophysics Data System (ADS)

    González, C.; Segurado, J.; LLorca, J.

    2004-07-01

    The deformation of a composite made up of a random and homogeneous dispersion of elastic spheres in an elasto-plastic matrix was simulated by the finite element analysis of three-dimensional multiparticle cubic cells with periodic boundary conditions. "Exact" results (to a few percent) in tension and shear were determined by averaging 12 stress-strain curves obtained from cells containing 30 spheres, and they were compared with the predictions of secant homogenization models. In addition, the numerical simulations supplied detailed information of the stress microfields, which was used to ascertain the accuracy and the limitations of the homogenization models to include the nonlinear deformation of the matrix. It was found that secant approximations based on the volume-averaged second-order moment of the matrix stress tensor, combined with a highly accurate linear homogenization model, provided excellent predictions of the composite response when the matrix strain hardening rate was high. This was not the case, however, in composites which exhibited marked plastic strain localization in the matrix. The analysis of the evolution of the matrix stresses revealed that better predictions of the composite behavior can be obtained with new homogenization models which capture the essential differences in the stress carried by the elastic and plastic regions in the matrix at the onset of plastic deformation.

  11. A Deep Stochastic Model for Detecting Community in Complex Networks

    NASA Astrophysics Data System (ADS)

    Fu, Jingcheng; Wu, Jianliang

    2017-01-01

    Discovering community structures is an important step toward understanding the structure and dynamics of real-world networks in social science, biology and technology. In this paper, we develop a deep stochastic model based on non-negative matrix factorization to identify communities, in which there are two sets of parameters. One is the community membership matrix, whose elements in a row give the probabilities that the given node belongs to each of the given number of communities; the other is the community-community connection matrix, whose element in the i-th row and j-th column is the probability that there is an edge between a randomly chosen node from the i-th community and a randomly chosen node from the j-th community. The parameters can be estimated by an efficient updating rule whose convergence is guaranteed. The community-community connection matrix in our model is more precise than that in traditional non-negative matrix factorization methods. Furthermore, the method called symmetric nonnegative matrix factorization is a special case of our model. Finally, experiments on both synthetic and real-world network data demonstrate that our algorithm is highly effective in detecting communities.
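    The two-matrix structure above can be sketched as the nonnegative tri-factorization A ≈ U B Uᵀ, with U the node-community membership matrix and B the community-community connection matrix. The multiplicative updates below are the standard tri-factorization rules (Ding-style, with a square-root damping), not necessarily the paper's exact deep scheme, and the two-clique network is a toy.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy network: two 5-node cliques joined by a single bridge edge.
A = np.zeros((10, 10))
A[:5, :5] = 1.0
A[5:, 5:] = 1.0
np.fill_diagonal(A, 0.0)
A[4, 5] = A[5, 4] = 1.0

k, eps = 2, 1e-9
U = rng.random((10, k)) + 0.1     # membership matrix (nonnegative)
B = rng.random((k, k)) + 0.1      # community-community matrix
B = (B + B.T) / 2.0

err0 = np.linalg.norm(A - U @ B @ U.T)
for _ in range(1000):
    # Multiplicative updates: ratio of the negative to positive parts of
    # the gradient of ||A - U B U^T||^2, damped by a square root.
    U *= np.sqrt((A @ U @ B) / (U @ B @ U.T @ U @ B + eps))
    B *= np.sqrt((U.T @ A @ U) / (U.T @ U @ B @ U.T @ U + eps))
err1 = np.linalg.norm(A - U @ B @ U.T)

labels = U.argmax(axis=1)         # hard community assignment per node
```

    On this toy graph the reconstruction error drops sharply and the row-wise argmax of U recovers the two cliques (bridge nodes aside).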

  12. Multiscale Modeling of Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Mital, Subodh K.; Pineda, Evan J.; Arnold, Steven M.

    2015-01-01

    Results of multiscale modeling simulations of the nonlinear response of SiC/SiC ceramic matrix composites are reported, wherein the microstructure of the ceramic matrix is captured. This microscale architecture, which contains free Si material as well as the SiC ceramic, is responsible for residual stresses that play an important role in the subsequent thermo-mechanical behavior of the SiC/SiC composite. Using the novel Multiscale Generalized Method of Cells recursive micromechanics theory, the microstructure of the matrix, as well as that of the composite (fiber and matrix), can be captured.

  13. The Effects of Q-Matrix Design on Classification Accuracy in the Log-Linear Cognitive Diagnosis Model.

    PubMed

    Madison, Matthew J; Bradshaw, Laine P

    2015-06-01

    Diagnostic classification models are psychometric models that aim to classify examinees according to their mastery or non-mastery of specified latent characteristics. These models are well-suited for providing diagnostic feedback on educational assessments because of their practical efficiency and increased reliability when compared with other multidimensional measurement models. A priori specifications of which latent characteristics or attributes are measured by each item are a core element of the diagnostic assessment design. This item-attribute alignment, expressed in a Q-matrix, precedes and supports any inference resulting from the application of the diagnostic classification model. This study investigates the effects of Q-matrix design on classification accuracy for the log-linear cognitive diagnosis model. Results indicate that classification accuracy, reliability, and convergence rates improve when the Q-matrix contains isolated information from each measured attribute.

  14. FluorMODgui V3.0: A graphic user interface for the spectral simulation of leaf and canopy chlorophyll fluorescence

    NASA Astrophysics Data System (ADS)

    Zarco-Tejada, P. J.; Miller, J. R.; Pedrós, R.; Verhoef, W.; Berger, M.

    2006-06-01

    The FluorMODgui Graphic User Interface (GUI) software package developed within the frame of the FluorMOD project "Development of a Vegetation Fluorescence Canopy Model" is presented in this manuscript. The FluorMOD project was launched in 2002 by the European Space Agency (ESA) to advance the science of vegetation fluorescence simulation through the development and integration of leaf and canopy fluorescence models based on physical methods. The design of airborne or space missions dedicated to the measurement of solar-induced chlorophyll fluorescence using remote-sensing instruments requires physical methods for quantitative feasibility analysis and sensor specification studies. The FluorMODgui model developed as part of this project is designed to simulate the effects of chlorophyll fluorescence at leaf and canopy levels using atmospheric inputs, running the leaf model, FluorMODleaf, and the canopy model, FluorSAIL, independently, through a coupling scheme, and by a multiple iteration protocol to simulate changes in the viewing geometry and atmospheric characteristics. Inputs for the FluorMODleaf model are the number of leaf layers, chlorophyll a+b content, water equivalent thickness, dry matter content, fluorescence quantum efficiency, temperature, species type, and stoichiometry. Inputs for the FluorSAIL canopy model are a MODTRAN-4 6-parameter spectrum or measured direct horizontal irradiance and diffuse irradiance spectra, a soil reflectance spectrum, leaf reflectance and transmittance spectra and an excitation-fluorescence response matrix in upward and downward directions (all from FluorMODleaf), two PAR-dependent coefficients for the fluorescence response to light level, relative azimuth angle and viewing zenith angle, canopy leaf area index, leaf inclination distribution function, and a hot spot parameter.
Outputs available in the 400-1000 nm spectral range from the graphical user interface, FluorMODgui, are the leaf spectral reflectance and transmittance, and the canopy reflectance, with and without fluorescence effects. In addition, solar and sky irradiance on the ground, radiance with and without fluorescence on the ground, and top-of-atmosphere (TOA) radiances for bare soil and surroundings same as target are also produced. The models and documentation regarding the FluorMOD project can be downloaded at http://www.ias.csic.es/fluormod.

  15. Quantitative (31)P NMR spectroscopy and (1)H MRI measurements of bone mineral and matrix density differentiate metabolic bone diseases in rat models.

    PubMed

    Cao, Haihui; Nazarian, Ara; Ackerman, Jerome L; Snyder, Brian D; Rosenberg, Andrew E; Nazarian, Rosalynn M; Hrovat, Mirko I; Dai, Guangping; Mintzopoulos, Dionyssios; Wu, Yaotang

    2010-06-01

    In this study, bone mineral density (BMD) of normal (CON), ovariectomized (OVX), and partially nephrectomized (NFR) rats was measured by (31)P NMR spectroscopy; bone matrix density was measured by (1)H water- and fat-suppressed projection imaging (WASPI); and the extent of bone mineralization (EBM) was obtained by the ratio of BMD/bone matrix density. The capability of these MR methods to distinguish the bone composition of the CON, OVX, and NFR groups was evaluated against chemical analysis (gravimetry). For cortical bone specimens, BMD of the CON and OVX groups was not significantly different; BMD of the NFR group was 22.1% (by (31)P NMR) and 17.5% (by gravimetry) lower than CON. For trabecular bone specimens, BMD of the OVX group was 40.5% (by (31)P NMR) and 24.6% (by gravimetry) lower than CON; BMD of the NFR group was 26.8% (by (31)P NMR) and 21.5% (by gravimetry) lower than CON. No significant change of cortical bone matrix density between CON and OVX was observed by WASPI or gravimetry; NFR cortical bone matrix density was 10.3% (by WASPI) and 13.9% (by gravimetry) lower than CON. OVX trabecular bone matrix density was 38.0% (by WASPI) and 30.8% (by gravimetry) lower than CON, while no significant change in NFR trabecular bone matrix density was observed by either method. The EBMs of OVX cortical and trabecular specimens were slightly higher than CON but not significantly different from CON. Importantly, EBMs of NFR cortical and trabecular specimens were 12.4% and 26.3% lower than CON by (31)P NMR/WASPI, respectively, and 4.0% and 11.9% lower by gravimetry. Histopathology showed evidence of osteoporosis in the OVX group and severe secondary hyperparathyroidism (renal osteodystrophy) in the NFR group. These results demonstrate that the combined (31)P NMR/WASPI method is capable of discerning the difference in EBM between animals with osteoporosis and those with impaired bone mineralization. Copyright 2010 Elsevier Inc. All rights reserved.

  16. Modeling the Monotonic and Cyclic Tensile Stress-Strain Behavior of 2D and 2.5D Woven C/SiC Ceramic-Matrix Composites

    NASA Astrophysics Data System (ADS)

    Li, L. B.

    2018-05-01

    The deformation of 2D and 2.5D woven C/SiC ceramic-matrix composites (CMCs) under monotonic and cyclic loading has been investigated. Statistical matrix multicracking and fiber failure models and the fracture mechanics interface debonding approach are used to determine the spacing of matrix cracks, the debonded length of the interface, and the fraction of broken fibers. The effects of fiber volume fraction and fiber Weibull modulus on the damage evolution in the composites and on their tensile stress-strain curves are analyzed. When matrix multicracking and fiber/matrix interface debonding occur, the slippage of fibers relative to the matrix in the debonded interface region of the 0° warp yarns is the main reason for the emergence of stress-strain hysteresis loops in 2D and 2.5D woven CMCs. A model of these loops is developed, and hysteresis loops for the composites under cyclic loading/unloading are predicted.

  17. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors

    NASA Astrophysics Data System (ADS)

    Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay

    2017-11-01

    Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α , the appropriate FRCG model has the effective range d =b2/N =α2/N , for large N matrix dimensionality. As d increases, there is a transition from Poisson to classical random matrix statistics.

  18. Modeling the Tensile Behavior of Cross-Ply C/SiC Ceramic-Matrix Composites

    NASA Astrophysics Data System (ADS)

    Li, L. B.; Song, Y. D.; Sun, Y. C.

    2015-07-01

    The tensile behavior of cross-ply C/SiC ceramic-matrix composites (CMCs) at room temperature has been investigated. Under tensile loading, the damage evolution process was observed with an optical microscope. A micromechanical approach was developed to predict the tensile stress-strain curve, which considers the damage mechanisms of transverse multicracking, matrix multicracking, fiber/matrix interface debonding, and fiber fracture. The shear-lag model was used to describe the microstress field of the damaged composite. By combining the shear-lag model with different damage models, the tensile stress-strain curve of cross-ply CMCs corresponding to each damage stage was modeled. The predicted tensile stress-strain curves of cross-ply C/SiC composites agreed with experimental data.

  19. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors.

    PubMed

    Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay

    2017-11-01

    Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d=b^{2}/N=α^{2}/N, for large N matrix dimensionality. As d increases, there is a transition from Poisson to classical random matrix statistics.

  20. Symmetry Transition Preserving Chirality in QCD: A Versatile Random Matrix Model

    NASA Astrophysics Data System (ADS)

    Kanazawa, Takuya; Kieburg, Mario

    2018-06-01

    We consider a random matrix model which interpolates between the chiral Gaussian unitary ensemble and the Gaussian unitary ensemble while preserving chiral symmetry. This ensemble describes flavor symmetry breaking for staggered fermions in 3D QCD as well as in 4D QCD at high temperature or in 3D QCD at a finite isospin chemical potential. Our model is an Osborn-type two-matrix model which is equivalent to the elliptic ensemble but we consider the singular value statistics rather than the complex eigenvalue statistics. We report on exact results for the partition function and the microscopic level density of the Dirac operator in the ɛ regime of QCD. We compare these analytical results with Monte Carlo simulations of the matrix model.

  1. A Perron-Frobenius theory for block matrices associated to a multiplex network

    NASA Astrophysics Data System (ADS)

    Romance, Miguel; Solá, Luis; Flores, Julio; García, Esther; García del Amo, Alejandro; Criado, Regino

    2015-03-01

    The uniqueness of the Perron vector of a nonnegative block matrix associated to a multiplex network is discussed. The conclusions come from the relationships between the irreducibility of some nonnegative block matrix associated to a multiplex network and the irreducibility of the matrices corresponding to each layer, as well as the irreducibility of the adjacency matrix of the projection network. In addition, the computation of that Perron vector in terms of the Perron vectors of the blocks is also addressed. Finally, we present the precise relations that allow one to express the Perron eigenvector of the multiplex network in terms of the Perron eigenvectors of its layers.
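    The Perron vector of such a block (supra-adjacency) matrix can be computed by power iteration, which converges to a strictly positive eigenvector when the matrix is irreducible. The two-layer multiplex and identity inter-layer coupling below are illustrative choices.

```python
import numpy as np

# Supra-adjacency block matrix of a toy 2-layer multiplex: layer
# adjacency matrices on the diagonal, identity coupling off-diagonal.
A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # layer 1 (triangle)
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # layer 2 (path)
c = 1.0                                                        # coupling strength
I = np.eye(3)
S = np.block([[A1, c * I], [c * I, A2]])

# Power iteration for the Perron (dominant) eigenpair.
v = np.ones(6)
for _ in range(500):
    v = S @ v
    v /= np.linalg.norm(v)
lam = v @ S @ v        # Perron eigenvalue via the Rayleigh quotient
```

    Because the supra-graph is connected, S is irreducible and the iteration yields a strictly positive vector, the unique Perron vector whose layer-wise blocks the paper relates to the layers' own Perron vectors.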

  2. A robust method of computing finite difference coefficients based on Vandermonde matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Yijie; Gao, Jinghuai; Peng, Jigen; Han, Weimin

    2018-05-01

    When the finite difference (FD) method is employed to simulate wave propagation, a high-order FD scheme is preferred in order to achieve better accuracy. However, if the order of the FD scheme is high enough, the coefficient matrix of the formula for calculating the finite difference coefficients is close to singular. In this case, when the FD coefficients are computed with the matrix inverse operator of MATLAB, inaccurate results can be produced. To overcome this problem, we suggest an algorithm based on the Vandermonde matrix. After a specified mathematical transformation, the coefficient matrix is transformed into a Vandermonde matrix. The FD coefficients of the high-order FD method can then be computed by a dedicated Vandermonde algorithm, which avoids inverting the near-singular matrix. The dispersion analysis and numerical results for a homogeneous elastic model and a geophysical model of an oil and gas reservoir demonstrate that the Vandermonde-based algorithm has better accuracy than the matrix inverse operator of MATLAB.
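    The system in question can be written down directly: the FD weights w for the m-th derivative on stencil offsets s_i satisfy the Vandermonde conditions sum_i w_i s_i^k = k! if k = m and 0 otherwise. A plain dense solve is sketched below for a small stencil; the paper's point is that a dedicated Vandermonde algorithm (e.g. a Björck-Pereyra-type recursion) should replace the generic solve/inverse when high order makes the system nearly singular.

```python
import numpy as np
from math import factorial

def fd_coefficients(offsets, m):
    """FD weights for the m-th derivative on the given stencil offsets.

    Solves the (transposed) Vandermonde system V w = b with
    V[k, i] = s_i**k and b[k] = k! * (k == m); the derivative is then
    approximated by (w . f) / h**m on a grid of spacing h.
    """
    s = np.asarray(offsets, dtype=float)
    V = np.vander(s, increasing=True).T     # V[k, i] = s_i ** k
    b = np.zeros(len(s))
    b[m] = factorial(m)
    return np.linalg.solve(V, b)

# Classic 4th-order central first-derivative stencil on [-2, -1, 0, 1, 2].
w = fd_coefficients([-2, -1, 0, 1, 2], m=1)   # [1/12, -2/3, 0, 2/3, -1/12]
```

    For this small, well-conditioned stencil the dense solve is fine; the conditioning problem the abstract describes appears as the stencil grows, which is where the structured Vandermonde solver pays off.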

  3. A nonequilibrium model for reactive contaminant transport through fractured porous media: Model development and semianalytical solution

    NASA Astrophysics Data System (ADS)

    Joshi, Nitin; Ojha, C. S. P.; Sharma, P. K.

    2012-10-01

    In this study, a conceptual model that accounts for the effects of nonequilibrium contaminant transport in fractured porous media is developed. The present model accounts for both physical and sorption nonequilibrium. An analytical solution was developed in the Laplace domain and then numerically inverted to obtain the solute concentration in the fracture-matrix system. The semianalytical solution developed here can incorporate both semi-infinite and finite fracture-matrix extent. In addition, the model can account for flexible boundary conditions and a nonzero initial condition in the fracture-matrix system. The present semianalytical solution was validated against existing analytical solutions for the fracture-matrix system. In order to differentiate between the various sorption/transport mechanisms, different cases of sorption and mass transfer were analyzed by comparing breakthrough curves and temporal moments. It was found that significant differences in the signatures of sorption and mass transfer exist. The applicability of the developed model was evaluated by simulating published experimental data on calcium and strontium transport in a single fracture. The present model simulated the experimental data reasonably well in comparison to a model based on the equilibrium sorption assumption in the fracture-matrix system and a multirate mass transfer model.

  4. Balancing Chemical Reactions With Matrix Methods and Computer Assistance. Applications of Linear Algebra to Chemistry. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 339.

    ERIC Educational Resources Information Center

    Grimaldi, Ralph P.

    This material was developed to provide an application of matrix mathematics in chemistry, and to show the concepts of linear independence and dependence in vector spaces of dimensions greater than three in a concrete setting. The techniques presented are not intended to be considered as replacements for such chemical methods as oxidation-reduction…

  5. Practical recipes for the model order reduction, dynamical simulation and compressive sampling of large-scale open quantum systems

    NASA Astrophysics Data System (ADS)

    Sidles, John A.; Garbini, Joseph L.; Harrell, Lee E.; Hero, Alfred O.; Jacky, Jonathan P.; Malcomb, Joseph R.; Norman, Anthony G.; Williamson, Austin M.

    2009-06-01

    Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kähler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kählerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kähler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candès-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.

  6. On the Restricted Toda and c-KdV Flows of Neumann Type

    NASA Astrophysics Data System (ADS)

    Zhou, RuGuang; Qiao, ZhiJun

    2000-09-01

    It is proven that on a symplectic submanifold the restricted c-KdV flow is exactly the interpolating Hamiltonian flow of an invariant of the restricted Toda flow, which is an integrable symplectic map of Neumann type. They share a common Lax matrix, dynamical r-matrix and system of involutive conserved integrals. Furthermore, the procedure of separation of variables is considered for the restricted c-KdV flow of Neumann type. This project was supported by the Chinese National Basic Research Project "Nonlinear Science" and the Doctoral Programme Foundation of Institution of High Education of China. The first author also thanks the National Natural Science Foundation of China (19801031) and the "Qinglan Project" of Jiangsu Province of China; the second author also thanks the Alexander von Humboldt Fellowships, Deutschland, the Special Grant of Excellent Ph.D. Thesis of China, the Science & Technology Foundation (Youth Talent Foundation) and the Science Research Foundation of the Education Committee of Liaoning Province of China.

  7. Analysis on Patterns of Globally Coupled Phase Oscillators with Attractive and Repulsive Interactions

    NASA Astrophysics Data System (ADS)

    Wang, Peng-Fei; Ruan, Xiao-Dong; Xu, Zhong-Bin; Fu, Xin

    2015-11-01

    The Hong-Strogatz (HS) model of globally coupled phase oscillators with attractive and repulsive interactions reflects the fact that each individual (oscillator) has its own attitude (attractive or repulsive) to the same environment (mean field). Previous studies on the HS model focused mainly on the stable states on the Ott-Antonsen (OA) manifold. In this paper, the eigenvalues of the Jacobian matrix at each fixed point of the HS model are explicitly derived, with the aim of understanding the local dynamics around each fixed point. Phase transitions are described in terms of the relative population and coupling strength. In addition, the dynamics off the OA manifold is studied. Supported by the National Basic Research Program of China under Grant No. 2015CB057301, the Applied Research Project of Public Welfare Technology of Zhejiang Province under Grant No. 201SC31109 and the China Postdoctoral Science Foundation under Grant No. 2014M560483

  8. Dissipative stability analysis and control of two-dimensional Fornasini-Marchesini local state-space model

    NASA Astrophysics Data System (ADS)

    Wang, Lanning; Chen, Weimin; Li, Lizhen

    2017-06-01

    This paper is concerned with the problems of dissipative stability analysis and control of the two-dimensional (2-D) Fornasini-Marchesini local state-space (FM LSS) model. Based on the characteristics of the system model, a novel definition of 2-D FM LSS (Q, S, R)-α-dissipativity is given first, and then a sufficient condition in terms of linear matrix inequality (LMI) is proposed to guarantee the asymptotical stability and 2-D (Q, S, R)-α-dissipativity of the systems. As its special cases, 2-D passivity performance and 2-D H∞ performance are also discussed. Furthermore, by use of this dissipative stability condition and projection lemma technique, 2-D (Q, S, R)-α-dissipative state-feedback control problem is solved as well. Finally, a numerical example is given to illustrate the effectiveness of the proposed method.

  9. On the role of hydrogel structure and degradation in controlling the transport of cell-secreted matrix molecules for engineered cartilage.

    PubMed

    Dhote, Valentin; Skaalure, Stacey; Akalp, Umut; Roberts, Justine; Bryant, Stephanie J; Vernerey, Franck J

    2013-03-01

    Damage to cartilage caused by injury or disease can lead to pain and loss of mobility, diminishing one's quality of life. Because cartilage has a limited capacity for self-repair, tissue engineering strategies, such as cells encapsulated in synthetic hydrogels, are being investigated as a means to restore the damaged cartilage. However, strategies to date are suboptimal in part because designing degradable hydrogels is complicated by structural and temporal complexities of the gel and evolving tissue along multiple length scales. To address this problem, this study proposes a multi-scale mechanical model using a triphasic formulation (solid, fluid, unbound matrix molecules) based on a single chondrocyte releasing extracellular matrix molecules within a degrading hydrogel. This model describes the key players (cells, proteoglycans, collagen) of the biological system within the hydrogel encompassing different length scales. Two mechanisms are included: temporal changes of bulk properties due to hydrogel degradation, and matrix transport. Numerical results demonstrate that the temporal change of bulk properties is a decisive factor in the diffusion of unbound matrix molecules through the hydrogel. Transport of matrix molecules in the hydrogel contributes both to the development of the pericellular matrix and the extracellular matrix and is dependent on the relative size of matrix molecules and the hydrogel mesh. The numerical results also demonstrate that osmotic pressure, which leads to changes in mesh size, is a key parameter for achieving a larger diffusivity for matrix molecules in the hydrogel. The numerical model is confirmed with experimental results of matrix synthesis by chondrocytes in biodegradable poly(ethylene glycol)-based hydrogels. This model may ultimately be used to predict key hydrogel design parameters towards achieving optimal cartilage growth. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. On the role of hydrogel structure and degradation in controlling the transport of cell-secreted matrix molecules for engineered cartilage

    PubMed Central

    Dhote, Valentin; Skaalure, Stacey; Akalp, Umut; Roberts, Justine; Bryant, Stephanie J.; Vernerey, Franck J.

    2012-01-01

    Damage to cartilage caused by injury or disease can lead to pain and loss of mobility, diminishing one’s quality of life. Because cartilage has a limited capacity for self-repair, tissue engineering strategies, such as cells encapsulated in synthetic hydrogels, are being investigated as a means to restore the damaged cartilage. However, strategies to date are suboptimal in part because designing degradable hydrogels is complicated by structural and temporal complexities of the gel and evolving tissue along multiple length scales. To address this problem, this study proposes a multi-scale mechanical model using a triphasic formulation (solid, fluid, unbound matrix molecules) based on a single chondrocyte releasing extracellular matrix molecules within a degrading hydrogel. This model describes the key players (cells, proteoglycans, collagen) of the biological system within the hydrogel encompassing different length scales. Two mechanisms are included: temporal changes of bulk properties due to hydrogel degradation, and matrix transport. Numerical results demonstrate that the temporal change of bulk properties is a decisive factor in the diffusion of unbound matrix molecules through the hydrogel. Transport of matrix molecules in the hydrogel contributes both to the development of the pericellular matrix and the extracellular matrix and is dependent on the relative size of matrix molecules and the hydrogel mesh. The numerical results also demonstrate that osmotic pressure, which leads to changes in mesh size, is a key parameter for achieving a larger diffusivity for matrix molecules in the hydrogel. The numerical model is confirmed with experimental results of matrix synthesis by chondrocytes in biodegradable poly(ethylene glycol)-based hydrogels. This model may ultimately be used to predict key hydrogel design parameters towards achieving optimal cartilage growth. PMID:23276516

  11. Nonnegative Matrix Factorization for Efficient Hyperspectral Image Projection

    NASA Technical Reports Server (NTRS)

    Iacchetta, Alexander S.; Fienup, James R.; Leisawitz, David T.; Bolcar, Matthew R.

    2015-01-01

    Hyperspectral imaging for remote sensing has prompted development of hyperspectral image projectors that can be used to characterize hyperspectral imaging cameras and techniques in the lab. One such emerging astronomical hyperspectral imaging technique is wide-field double-Fourier interferometry. NASA's current, state-of-the-art, Wide-field Imaging Interferometry Testbed (WIIT) uses a Calibrated Hyperspectral Image Projector (CHIP) to generate test scenes and provide a more complete understanding of wide-field double-Fourier interferometry. Given enough time, the CHIP is capable of projecting scenes with astronomically realistic spatial and spectral complexity. However, this would require a very lengthy data collection process. For accurate but time-efficient projection of complicated hyperspectral images with the CHIP, the field must be decomposed both spectrally and spatially in a way that provides a favorable trade-off between accurately projecting the hyperspectral image and the time required for data collection. We apply nonnegative matrix factorization (NMF) to decompose hyperspectral astronomical datacubes into eigenspectra and eigenimages that allow time-efficient projection with the CHIP. Included is a brief analysis of NMF parameters that affect accuracy, including the number of eigenspectra and eigenimages used to approximate the hyperspectral image to be projected. For the chosen field, the normalized mean squared synthesis error is under 0.01 with just 8 eigenspectra. NMF of hyperspectral astronomical fields better utilizes the CHIP's capabilities, providing time-efficient and accurate representations of astronomical scenes to be imaged with the WIIT.
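    A minimal version of this kind of decomposition can be sketched with scikit-learn's NMF on a synthetic low-rank "datacube" flattened to a pixels-by-bands matrix (all sizes and the rank are illustrative, not those of the WIIT/CHIP data):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Toy stand-in for a hyperspectral datacube: 100 pixels x 30 bands,
# built from 4 nonnegative components so an exact rank-4 factorization exists.
W_true = rng.random((100, 4))
H_true = rng.random((4, 30))
X = W_true @ H_true

model = NMF(n_components=4, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(X)   # "eigenimages": one spatial loading map per component
H = model.components_        # "eigenspectra": one spectrum per component

# Relative synthesis error of the rank-4 approximation
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

    The trade-off described in the record corresponds to varying `n_components`: fewer components means faster projection with the CHIP but a larger synthesis error.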

  12. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix [A projected preconditioned conjugate gradient algorithm for computing a large eigenspace of a Hermitian matrix]

    DOE PAGES

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-02-25

    Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.

  13. Basigin/EMMPRIN/CD147 mediates neuron-glia interactions in the optic lamina of Drosophila.

    PubMed

    Curtin, Kathryn D; Wyman, Robert J; Meinertzhagen, Ian A

    2007-11-15

    Basigin, an IgG family glycoprotein found on the surface of human metastatic tumors, stimulates fibroblasts to secrete matrix metalloproteases (MMPs) that remodel the extracellular matrix, and is thus also known as Extracellular Matrix MetalloPRotease Inducer (EMMPRIN). Using Drosophila we previously identified novel roles for basigin. Specifically, photoreceptors of flies with basigin mutant eyes show misplaced nuclei, rough ER and mitochondria, and swollen axon terminals, suggesting cytoskeletal disruptions. Here we demonstrate that basigin is required for normal neuron-glia interactions in the Drosophila visual system. Flies with basigin mutant photoreceptors have misplaced epithelial glial cells within the first optic neuropile, or lamina. In addition, epithelial glia insert finger-like projections--capitate projections (CPs)--into photoreceptor terminals; these are sites of vesicle endocytosis and possibly neurotransmitter recycling. When basigin is missing from photoreceptor terminals, CP formation between glia and photoreceptor terminals is disrupted. Visual system function is also altered in flies with basigin mutant eyes. While photoreceptors depolarize normally to light, synaptic transmission is greatly diminished, consistent with a defect in neurotransmitter release. Basigin expression in photoreceptor neurons is required for normal structure and placement of glial cells.

  14. A study of fiber volume fraction effects in notched unidirectional SCS-6/Ti-15V-3Cr-3Al-3Sn composite. Ph.D. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Covey, Steven J.

    1993-01-01

    Notched unidirectional SCS-6/Ti-15-3 composite of three different fiber volume fractions (vf = 0.15, 0.37, and 0.41) was investigated for various room-temperature microstructural and material properties, including fatigue crack initiation, fatigue crack growth, and fracture toughness. While the matrix hardness is similar for all fiber volume fractions, the fiber/matrix interfacial shear strength and matrix residual stress increase with fiber volume fraction. The composite fatigue crack initiation stress is shown to be matrix controlled and occurs when the net maximum matrix stress approaches the endurance limit stress of the matrix. A model is presented which includes residual stresses and gives the composite initiation stress as a function of fiber volume fraction. This model predicts a maximum composite initiation stress at vf approximately 0.15, which agrees with the experimental data. The applied composite stress levels were increased as necessary for continued crack growth. The applied Delta(K) values at crack arrest increase with fiber volume fraction by an amount better approximated using an energy-based formulation than by scaling linearly with modulus. After crack arrest, the crack growth rate exponents for vf37 and vf41 were much lower, and the toughness much higher, than for the unreinforced matrix, because of the bridged region which travels with the propagating fatigue crack. However, the vf15 material exhibited a higher crack growth rate exponent and lower toughness than the unreinforced matrix because once the bridged fibers nearest the crack mouth broke, the stress redistribution broke all bridged fibers, leaving an unbridged crack. Degraded, unbridged behavior is modeled using the residual stress state in the matrix ahead of the crack tip.
Plastic zone sizes were directly measured using a metallographic technique and allow prediction of an effective matrix stress intensity which agrees with the fiber pressure model if residual stresses are considered. The sophisticated macro/micro finite element models of the 0.15 and 0.37 fiber volume fractions presented show good agreement with experimental data and the fiber pressure model when an estimated effective fiber/matrix debond length is used.

  15. Data-Driven Learning of Q-Matrix

    ERIC Educational Resources Information Center

    Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang

    2012-01-01

    The recent surge of interest in cognitive assessment has led to the development of novel statistical models for diagnostic classification. Central to many such models is the well-known "Q"-matrix, which specifies the item-attribute relationships. This article proposes a data-driven approach to identification of the "Q"-matrix and estimation of…

  16. Semiclassical matrix model for quantum chaotic transport with time-reversal symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novaes, Marcel, E-mail: marcel.novaes@gmail.com

    2015-10-15

    We show that the semiclassical approach to chaotic quantum transport in the presence of time-reversal symmetry can be described by a matrix model. In other words, we construct a matrix integral whose perturbative expansion satisfies the semiclassical diagrammatic rules for the calculation of transport statistics. One of the virtues of this approach is that it leads very naturally to the semiclassical derivation of universal predictions from random matrix theory.

  17. Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2002-01-01

    NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref.1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model, GMC--which allows the incorporation of complex local inelastic constitutive models--MAC/GMC finds its most important application in metal matrix composites, like the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can and have been built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled the accurate modeling of the deformation, failure, and life of titanium matrix composites.

  18. Stress and Damage in Polymer Matrix Composite Materials Due to Material Degradation at High Temperatures

    NASA Technical Reports Server (NTRS)

    McManus, Hugh L.; Chamis, Christos C.

    1996-01-01

    This report describes analytical methods for calculating stresses and damage caused by degradation of the matrix constituent in polymer matrix composite materials. Laminate geometry, material properties, and matrix degradation states are specified as functions of position and time. Matrix shrinkage and property changes are modeled as functions of the degradation states. The model is incorporated into an existing composite mechanics computer code. Stresses, strains, and deformations at the laminate, ply, and micro levels are calculated, and from these calculations it is determined if there is failure of any kind. The rationale for the model (based on published experimental work) is presented, its integration into the laminate analysis code is outlined, and example results are given, with comparisons to existing material and structural data. The mechanisms behind the changes in properties and in surface cracking during long-term aging of polyimide matrix composites are clarified. High-temperature-material test methods are also evaluated.

  19. Computational Modeling of Single-Cell Migration: The Leading Role of Extracellular Matrix Fibers

    PubMed Central

    Schlüter, Daniela K.; Ramis-Conde, Ignacio; Chaplain, Mark A.J.

    2012-01-01

    Cell migration is vitally important in a wide variety of biological contexts ranging from embryonic development and wound healing to malignant diseases such as cancer. It is a very complex process that is controlled by intracellular signaling pathways as well as the cell’s microenvironment. Due to its importance and complexity, it has been studied for many years in the biomedical sciences, and in the last 30 years it also received an increasing amount of interest from theoretical scientists and mathematical modelers. Here we propose a force-based, individual-based modeling framework that links single-cell migration with matrix fibers and cell-matrix interactions through contact guidance and matrix remodelling. With this approach, we can highlight the effect of the cell’s environment on its migration. We investigate the influence of matrix stiffness, matrix architecture, and cell speed on migration using quantitative measures that allow us to compare the results to experiments. PMID:22995486

  20. The Effect of Fiber Architecture on Matrix Cracking in SiC/SiC CMCs

    NASA Technical Reports Server (NTRS)

    Morscher, Gregory N.

    2005-01-01

    Applications incorporating silicon carbide fiber reinforced silicon carbide matrix composites (CMCs) will require a wide range of fiber architectures in order to fabricate complex shapes. The stress-strain response of a given SiC/SiC system for different architectures and orientations will be required in order to design and effectively life-model future components. The mechanism for non-linear stress-strain behavior in CMCs is the formation and propagation of bridged matrix cracks throughout the composite. A considerable amount of understanding has been achieved for the stress-dependent matrix cracking behavior of SiC fiber reinforced SiC matrix systems containing melt-infiltrated Si. This presentation will outline the effect of 2D and 3D architectures and orientation on stress-dependent matrix cracking and how this information can be used to model material behavior and serve as the starting point for mechanistic-based life-models.

  1. Quantifying avian nest survival along an urbanization gradient using citizen- and scientist-generated data.

    PubMed

    Ryder, Thomas B; Reitsma, Robert; Evans, Brian; Marra, Peter P

    2010-03-01

    Despite the increasing pace of urbanization, little is known about the factors that limit bird populations (i.e., population-level processes) within the urban/suburban land-use matrix. Here, we report rates of nest survival within the matrix of an urban land-use gradient in the greater Washington, D.C., USA, area for five common songbirds, using data collected by scientists and citizens as part of a project called Neighborhood Nestwatch. Using program MARK, we modeled the effects of species, urbanization at multiple spatial scales (canopy cover and impervious surface), and observer (citizen vs. scientist) on nest survival of four open-cup and one cavity-nesting species. In addition, artificial nests were used to determine the relative impacts of specific predators along the land-use gradient. Our results suggest that predation on nests within the land-use matrix declines with urbanization but that there are species-specific differences. Moreover, variation in nest survival among species was best explained by urbanization metrics measured at larger "neighborhood" spatial scales (e.g., 1000 m). Trends were supported by data from artificial nests and suggest that variable predator communities (avian vs. mammalian) are one possible mechanism to explain differential nest survival. In addition, we assessed the quality of citizen science data and show that citizens had no negative effect on nest survival and provided estimates of nest survival comparable to those of Smithsonian biologists. Although birds nesting within the urban matrix experienced higher nest survival, individuals also faced a multitude of other challenges, such as contaminants and invasive species, all of which could reduce adult survival.

  2. An A_r threesome: Matrix models, 2d conformal field theories, and 4d N=2 gauge theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiappa, Ricardo; Wyllard, Niclas

    We explore the connections between three classes of theories: A_r quiver matrix models, d=2 conformal A_r Toda field theories, and d=4 N=2 supersymmetric conformal A_r quiver gauge theories. In particular, we analyze the quiver matrix models recently introduced by Dijkgraaf and Vafa (unpublished) and make detailed comparisons with the corresponding quantities in the Toda field theories and the N=2 quiver gauge theories. We also make a speculative proposal for how the matrix models should be modified in order for them to reproduce the instanton partition functions in quiver gauge theories in five dimensions.

  3. Modeling the Stress Strain Behavior of Woven Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Morscher, Gregory N.

    2006-01-01

    Woven SiC fiber reinforced SiC matrix composites represent one of the most mature composite systems to date. Future components fabricated out of these woven ceramic matrix composites are expected to vary in shape, curvature, architecture, and thickness. The design of future components using woven ceramic matrix composites necessitates a modeling approach that can account for these variations which are physically controlled by local constituent contents and architecture. Research over the years supported primarily by NASA Glenn Research Center has led to the development of simple mechanistic-based models that can describe the entire stress-strain curve for composite systems fabricated with chemical vapor infiltrated matrices and melt-infiltrated matrices for a wide range of constituent content and architecture. Several examples will be presented that demonstrate the approach to modeling which incorporates a thorough understanding of the stress-dependent matrix cracking properties of the composite system.

  4. A Method of Q-Matrix Validation for the Linear Logistic Test Model

    PubMed Central

    Baghaei, Purya; Hohensinn, Christine

    2017-01-01

    The linear logistic test model (LLTM) is a well-recognized psychometric model for examining the components of difficulty in cognitive tests and validating construct theories. The plausibility of the construct model, summarized in a matrix of weights, known as the Q-matrix or weight matrix, is tested by (1) comparing the fit of LLTM with the fit of the Rasch model (RM) using the likelihood ratio (LR) test and (2) by examining the correlation between the Rasch model item parameters and LLTM reconstructed item parameters. The problem with the LR test is that it is almost always significant and, consequently, LLTM is rejected. The drawback of examining the correlation coefficient is that there is no cut-off value or lower bound for the magnitude of the correlation coefficient. In this article we suggest a simulation method to set a minimum benchmark for the correlation between item parameters from the Rasch model and those reconstructed by the LLTM. If the cognitive model is valid then the correlation coefficient between the RM-based item parameters and the LLTM-reconstructed item parameters derived from the theoretical weight matrix should be greater than those derived from the simulated matrices. PMID:28611721
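    The proposed benchmark can be sketched schematically (a hypothetical toy, not the authors' code): reconstruct the item parameters from the theoretical weight matrix by least squares, then compare the resulting correlation with correlations obtained from randomly simulated weight matrices. All sizes and values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_ops = 20, 4

# Hypothetical "true" weight (Q) matrix and cognitive-operation difficulties
Q_true = rng.integers(0, 3, size=(n_items, n_ops)).astype(float)
eta = np.array([1.0, -0.5, 0.8, -1.2])
b_rasch = Q_true @ eta + rng.normal(scale=0.1, size=n_items)  # stand-in Rasch item difficulties

def lltm_correlation(Q, b):
    """Correlation between b and its LLTM reconstruction Q @ eta_hat (least squares)."""
    eta_hat, *_ = np.linalg.lstsq(Q, b, rcond=None)
    return np.corrcoef(Q @ eta_hat, b)[0, 1]

r_theory = lltm_correlation(Q_true, b_rasch)
r_random = [lltm_correlation(rng.integers(0, 3, size=Q_true.shape).astype(float), b_rasch)
            for _ in range(200)]
benchmark = np.quantile(r_random, 0.95)  # minimum benchmark from simulated matrices
```

    A valid construct model should give `r_theory` above the simulated benchmark, mirroring the article's decision rule.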

  5. Perturbed generalized multicritical one-matrix models

    NASA Astrophysics Data System (ADS)

    Ambjørn, J.; Chekhov, L.; Makeenko, Y.

    2018-03-01

    We study perturbations around the generalized Kazakov multicritical one-matrix model. The multicritical matrix model has a potential where the coefficients of z^n fall off only as a power 1/n^(s+1). This implies that the potential and its derivatives have a cut along the real axis, leading to technical problems when one performs perturbations away from the generalized Kazakov model. Nevertheless it is possible to relate the perturbed partition function to the tau-function of a KdV hierarchy and solve the model by a genus expansion in the double scaling limit.

  6. Habitat or matrix: which is more relevant to predict road-kill of vertebrates?

    PubMed

    Bueno, C; Sousa, C O M; Freitas, S R

    2015-11-01

    We believe that in the tropics we need a community approach to evaluate road impacts on wildlife and, thus, to suggest mitigation measures for groups of species instead of a focal-species approach. Understanding which landscape characteristics indicate road-kill events may also provide models that can be applied in other regions. We intend to evaluate whether habitat or matrix is more relevant to predict road-kill events for a group of species. Our hypothesis is: a more permeable matrix is the most relevant factor to explain road-kill events. To test this hypothesis, we chose vertebrates as the studied assemblage and a highway crossing an Atlantic Forest region in southeastern Brazil as the study site. Logistic regression models were designed using presence/absence of road-kill events as dependent variables and landscape characteristics as independent variables, which were selected by Akaike's Information Criterion. We considered a set of candidate models containing four types of simple regression models: habitat effect models; matrix type effect models; highway effect models; and reference models (intercept and buffer distance). Almost three hundred road-kills and 70 species were recorded. River proximity and herbaceous vegetation cover, both matrix effect models, were associated with most road-killed vertebrate groups. Matrix was more relevant than habitat to predict road-kill of vertebrates. The association between river proximity and road-kill indicates that rivers may be a preferential route for most species. We discuss multi-species mitigation measures and implications for movement ecology and conservation strategies.

  7. Shrinkage estimation of the realized relationship matrix

    USDA-ARS?s Scientific Manuscript database

    The additive relationship matrix plays an important role in mixed model prediction of breeding values. For genotype matrix X (loci in columns), the product XX' is widely used as a realized relationship matrix, but the scaling of this matrix is ambiguous. Our first objective was to derive a proper ...
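    The XX' construction can be sketched in a few lines. The centering and the 2*sum(p*(1-p)) denominator below follow VanRaden's widely used convention and are an illustrative assumption; the manuscript's point is precisely that the scaling of this matrix is ambiguous.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Simulated genotypes: 50 individuals x 200 loci, coded 0/1/2 copies
    # of an allele (loci in columns, matching the abstract's convention).
    freqs = rng.uniform(0.1, 0.9, size=200)
    X = rng.binomial(2, freqs, size=(50, 200)).astype(float)

    p = X.mean(axis=0) / 2.0            # estimated allele frequencies
    Z = X - 2.0 * p                     # center each locus at its mean
    c = 2.0 * np.sum(p * (1.0 - p))     # VanRaden-style scaling constant
    G = Z @ Z.T / c                     # realized relationship matrix

    # G is symmetric and, under this scaling, its mean diagonal is near 1.
    assert np.allclose(G, G.T)
    ```

    Other conventions scale each locus by its own variance before forming XX'; the ambiguity among such choices is what motivates deriving a proper scaling formally.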

  8. Recovering hidden diagonal structures via non-negative matrix factorization with multiple constraints.

    PubMed

    Yang, Xi; Han, Guoqiang; Cai, Hongmin; Song, Yan

    2017-03-31

    Revealing data with intrinsically diagonal block structures is particularly useful for analyzing groups of highly correlated variables. Earlier research based on non-negative matrix factorization (NMF) has shown it to be effective in representing such data by decomposing the observed data into two factors, where one factor is considered to be the feature and the other the expansion loading from a linear algebra perspective. If the data are sampled from multiple independent subspaces, the loading factor possesses a diagonal structure under an ideal matrix decomposition. However, the standard NMF method and its variants have not been reported to exploit this type of data via direct estimation. To address this issue, a non-negative matrix factorization model with multiple constraints is proposed in this paper. The constraints include a sparsity norm on the feature matrix and a total variation norm on each column of the loading matrix. The proposed model is shown to be capable of efficiently recovering diagonal block structures hidden in observed samples. An efficient numerical algorithm based on the alternating direction method of multipliers is proposed for optimizing the new model. Compared with several benchmark models, the proposed method performs robustly and effectively on simulated and real biological data.
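    As a hedged illustration of the idea (not the paper's ADMM algorithm, which couples a sparsity norm with a total-variation norm), the following sketch runs plain multiplicative NMF updates with a simple L1 penalty on the feature matrix and recovers a two-block diagonal structure from exactly low-rank data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Data sampled from two independent subspaces: an exact rank-2
    # non-negative matrix with a diagonal block structure.
    W0 = np.zeros((40, 2)); W0[:20, 0] = 1.0; W0[20:, 1] = 1.0
    H0 = np.zeros((2, 40))
    H0[0, :20] = rng.uniform(1.0, 2.0, 20)
    H0[1, 20:] = rng.uniform(1.0, 2.0, 20)
    V = W0 @ H0

    k, lam, eps = 2, 0.05, 1e-9
    W = rng.uniform(0.1, 1.0, (40, k))   # feature matrix (L1-penalized)
    H = rng.uniform(0.1, 1.0, (k, 40))   # loading matrix

    for _ in range(300):
        # Multiplicative updates for 0.5*||V - WH||_F^2 + lam*||W||_1;
        # non-negativity of both factors is preserved automatically.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + lam + eps)

    err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)  # small at convergence
    ```

    Replacing the L1 term with a total-variation penalty on the loading columns is what requires the ADMM splitting used in the paper; the multiplicative updates above only handle the simpler penalty.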

  9. On the constrained minimization of smooth Kurdyka—Łojasiewicz functions with the scaled gradient projection method

    NASA Astrophysics Data System (ADS)

    Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone

    2016-10-01

    The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
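    A minimal sketch of the SGP iteration on a non-negative least-squares problem (the diagonal scaling, fixed steplength, and test problem are illustrative assumptions; the paper's variable steplength rules are more sophisticated):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(30, 10))
    x_true = rng.uniform(0.5, 2.0, size=10)     # interior feasible solution
    b = A @ x_true                              # noiseless data

    # Minimize f(x) = 0.5*||Ax - b||^2 subject to x >= 0.
    L = np.linalg.norm(A.T @ A, 2)              # Lipschitz constant of grad f
    x = np.ones(10)                             # feasible starting point
    for _ in range(5000):
        g = A.T @ (A @ x - b)                   # gradient of f
        D = np.clip(x, 0.1, 1.0)                # bounded diagonal scaling
        x = np.maximum(x - (D / L) * g, 0.0)    # scaled step, then projection
    ```

    Bounding the scaling matrix's eigenvalues away from 0 and infinity, as `np.clip` does here, is exactly the kind of restriction on the scaling matrix under which the convex-case convergence results cited in the abstract apply.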

  10. MIMAC-He3: MICRO-TPC MATRIX OF CHAMBERS OF 3He

    NASA Astrophysics Data System (ADS)

    Santos, D.; Guillaudin, O.; Lamy, Th.; Mayet, F.; Moulin, E.

    2007-08-01

    The project of a micro-TPC matrix of chambers of 3He for direct detection of non-baryonic dark matter is outlined. The privileged properties of 3He are highlighted. The double detection (ionization - projection of tracks) will assure the electron-recoil discrimination. The complementarity of MIMAC-He3 for supersymmetric dark matter search with respect to other experiments is illustrated. The modular character of the detector allows to have different gases to get A-dependence. The pressure degreee of freedom gives the possibility to work at high and low pressure. The low pressure regime gives the possibility to get the directionality of the tracks. The first measurements of ionization at very few keVs for 3He in 4He gas are described.

  11. General structure of democratic mass matrix of quark sector in E6 model

    NASA Astrophysics Data System (ADS)

    Ciftci, R.; Ciftci, A. K.

    2016-03-01

    An extension of the Standard Model (SM) fermion sector, inspired by the E6 Grand Unified Theory (GUT) model, might be a good candidate to explain a number of unanswered questions in the SM. The existence of isosinglet quarks might explain the great mass difference between the bottom and top quarks. Also, democracy of the mass matrix elements is a natural approach in the SM. In this study, we give the general structure of the Democratic Mass Matrix (DMM) of the quark sector in the E6 model.

  12. An analysis of the wear behavior of SiC whisker reinforced alumina from 25 to 1200 C

    NASA Technical Reports Server (NTRS)

    Dellacorte, Christopher

    1991-01-01

    A model is described for predicting the wear behavior of whisker reinforced ceramics. The model was successfully applied to a silicon carbide whisker reinforced alumina ceramic composite subjected to sliding contact. The model compares the friction forces on the whiskers due to sliding, which act to pull or push them out of the matrix, to the clamping or compressive forces on the whiskers due to the matrix, which act to hold the whiskers in the composite. At low temperatures, the whiskers are held strongly in the matrix and are fractured into pieces during the wear process along with the matrix. At elevated temperatures differential thermal expansion between the whiskers and matrix can cause loosening of the whiskers and lead to pullout during the wear process and to higher wear. The model, which represents the combination of elastic stress analysis and a friction heating analysis, predicts a transition temperature at which the strength of the whiskers equals the clamping force holding them in the matrix. Above the transition the whiskers are pulled out of the matrix during sliding, and below the transition the whiskers are simply fractured. The existence of the transition gives rise to a dual wear mode or mechanism behavior for this material which was observed in laboratory experiments. The results from this model correlate well with experimentally observed behavior indicating that the model may be useful in obtaining a better understanding of material behavior and in making material improvements.

  13. An analysis of the wear behavior of SiC whisker-reinforced alumina from 25 to 1200 C

    NASA Technical Reports Server (NTRS)

    Dellacorte, Christopher

    1993-01-01

    A model is described for predicting the wear behavior of whisker reinforced ceramics. The model was successfully applied to a silicon carbide whisker reinforced alumina ceramic composite subjected to sliding contact. The model compares the friction forces on the whiskers due to sliding, which act to pull or push them out of the matrix, to the clamping or compressive forces on the whiskers due to the matrix, which act to hold the whiskers in the composite. At low temperatures, the whiskers are held strongly in the matrix and are fractured into pieces during the wear process along with the matrix. At elevated temperatures differential thermal expansion between the whiskers and matrix can cause loosening of the whiskers and lead to pullout during the wear process and to higher wear. The model, which represents the combination of elastic stress analysis and a friction heating analysis, predicts a transition temperature at which the strength of the whiskers equals the clamping force holding them in the matrix. Above the transition the whiskers are pulled out of the matrix during sliding, and below the transition the whiskers are simply fractured. The existence of the transition gives rise to a dual wear mode or mechanism behavior for this material which was observed in laboratory experiments. The results from this model correlate well with experimentally observed behavior indicating that the model may be useful in obtaining a better understanding of material behavior and in making material improvements.

  14. Model and Data Reduction for Control, Identification and Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Kramer, Boris

    This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods that solve real problems more efficiently, while providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations, which arise in optimal control, filtering and model reduction. We present a projection based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solve AREs. Numerical examples for n ≥ 10^6 unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time-invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition, by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples, a mass-spring-damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter dependent dynamical systems. We address this by using local parametric reduced order models, which can be used online. 
Data available from simulations of the system at various configurations (parameters, boundary conditions) is used to extract a sparse basis to represent the dynamics (via dynamic mode decomposition). Subsequently, a new, compressed sensing based classification algorithm is developed which incorporates the extracted dynamic information into the sensing basis. We show that this augmented classification basis makes the method more robust to noise, and results in superior identification of the correct parameter. Numerical examples consist of a Navier-Stokes, as well as a Boussinesq flow application.
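    The Hankel-matrix/eigensystem-realization step described above can be sketched without the tangential-interpolation projection (assumed away here for brevity) as follows; the small test system, its order, and the Hankel dimensions are illustrative choices:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # A small stable discrete-time LTI system used to generate
    # impulse-response (Markov parameter) data; the true order is 3.
    A = np.diag([0.9, 0.5, -0.3])
    B = rng.normal(size=(3, 1))
    C = rng.normal(size=(1, 3))

    m = 10
    h = [C @ np.linalg.matrix_power(A, k) @ B for k in range(2 * m)]  # h_k = C A^k B

    # Hankel matrix of Markov parameters and its one-step shift.
    H0 = np.block([[h[i + j] for j in range(m)] for i in range(m)])
    H1 = np.block([[h[i + j + 1] for j in range(m)] for i in range(m)])

    # ERA: truncated SVD of H0, then a balanced realization (Ar, Br, Cr).
    U, s, Vt = np.linalg.svd(H0)
    r = 3
    S_half = np.diag(np.sqrt(s[:r]))
    S_half_inv = np.diag(1.0 / np.sqrt(s[:r]))
    Ar = S_half_inv @ U[:, :r].T @ H1 @ Vt[:r, :].T @ S_half_inv
    Br = (S_half @ Vt[:r, :])[:, :1]    # first input column of controllability
    Cr = (U[:, :r] @ S_half)[:1, :]     # first output row of observability

    # The realization reproduces the Markov parameters of the true system.
    assert np.allclose(Cr @ Br, h[0])
    ```

    The dissertation's contribution is to project the impulse response onto carefully chosen tangential directions before forming H0, so that the SVD acts on a much smaller matrix when the input/output dimensions are large.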

  15. Effect of the fiber-matrix interphase on the transverse tensile strength of the unidirectional composite material

    NASA Technical Reports Server (NTRS)

    Tsai, H. C.; Arocho, A. M.

    1992-01-01

    A simple one-dimensional fiber-matrix interphase model has been developed, and the analytical results obtained correlated well with available experimental data. It was found that by including the interphase between the fiber and matrix in the model, much better local stress results were obtained than with the model without the interphase. A more sophisticated two-dimensional micromechanical model, which included the interphase properties, was also developed. Both the one-dimensional and two-dimensional models were used to study the effect of the interphase properties on the local stresses in the fiber, interphase and matrix. From this study, it was found that the interphase modulus and thickness have a significant influence on the transverse tensile strength and mode of failure in fiber reinforced composites.

  16. Reconstruction of the two-dimensional gravitational potential of galaxy clusters from X-ray and Sunyaev-Zel'dovich measurements

    NASA Astrophysics Data System (ADS)

    Tchernin, C.; Bartelmann, M.; Huber, K.; Dekel, A.; Hurier, G.; Majer, C. L.; Meyer, S.; Zinger, E.; Eckert, D.; Meneghetti, M.; Merten, J.

    2018-06-01

    Context. The mass of galaxy clusters is not a direct observable, nonetheless it is commonly used to probe cosmological models. Based on the combination of all main cluster observables, that is, the X-ray emission, the thermal Sunyaev-Zel'dovich (SZ) signal, the velocity dispersion of the cluster galaxies, and gravitational lensing, the gravitational potential of galaxy clusters can be jointly reconstructed. Aims: We derive the two main ingredients required for this joint reconstruction: the potentials individually reconstructed from the observables and their covariance matrices, which act as a weight in the joint reconstruction. We show here the method to derive these quantities. The result of the joint reconstruction applied to a real cluster will be discussed in a forthcoming paper. Methods: We apply the Richardson-Lucy deprojection algorithm to data on a two-dimensional (2D) grid. We first test the 2D deprojection algorithm on a β-profile. Assuming hydrostatic equilibrium, we further reconstruct the gravitational potential of a simulated galaxy cluster based on synthetic SZ and X-ray data. We then reconstruct the projected gravitational potential of the massive and dynamically active cluster Abell 2142, based on the X-ray observations collected with XMM-Newton and the SZ observations from the Planck satellite. Finally, we compute the covariance matrix of the projected reconstructed potential of the cluster Abell 2142 based on the X-ray measurements collected with XMM-Newton. Results: The gravitational potentials of the simulated cluster recovered from synthetic X-ray and SZ data are consistent, even though the potential reconstructed from X-rays shows larger deviations from the true potential. Regarding Abell 2142, the projected gravitational cluster potentials recovered from SZ and X-ray data reproduce well the projected potential inferred from gravitational-lensing observations. 
We also observe that the covariance matrix of the potential for Abell 2142 reconstructed from XMM-Newton data depends sensitively on the resolution of the deprojected grid and on the smoothing scale used in the deprojection. Conclusions: We show that the Richardson-Lucy deprojection method can be effectively applied on a grid and that the projected potential is well recovered from real and simulated data based on the X-ray and SZ signals. The comparison between the potentials reconstructed from the different observables provides additional information on the validity of the assumptions as a function of the projected radius.
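A minimal 1D sketch of the Richardson-Lucy iteration underlying the deprojection (the Gaussian kernel, grid, and noiseless data here are illustrative assumptions; the paper applies the algorithm on a 2D grid with cluster-specific projection kernels):

```python
import numpy as np

n = 40
x = np.arange(n)
# Forward ("projection") operator: a column-normalized Gaussian kernel.
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)
K /= K.sum(axis=0, keepdims=True)

u_true = np.exp(-0.5 * ((x - 20.0) / 5.0) ** 2) + 0.1   # positive profile
d = K @ u_true                                           # observed data

u = np.ones(n)                     # positive initial guess
for _ in range(500):
    # Richardson-Lucy multiplicative update; nonnegativity is automatic.
    u *= (K.T @ (d / (K @ u))) / K.sum(axis=0)

rel_res = np.linalg.norm(K @ u - d) / np.linalg.norm(d)  # shrinks with iterations
```

With noisy data the iteration is stopped early or the result smoothed, which is why the abstract reports sensitivity to the grid resolution and smoothing scale.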

  17. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION

    PubMed Central

    Allen, Genevera I.; Tibshirani, Robert

    2015-01-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility. PMID:26877823

  18. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.

    PubMed

    Allen, Genevera I; Tibshirani, Robert

    2010-06-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.

  19. Implementation of thermal residual stresses in the analysis of fiber bridged matrix crack growth in titanium matrix composites

    NASA Technical Reports Server (NTRS)

    Bakuckas, John G., Jr.; Johnson, W. Steven

    1994-01-01

    In this research, thermal residual stresses were incorporated in an analysis of fiber-bridged matrix cracks in unidirectional and cross-ply titanium matrix composites (TMC) containing center holes or center notches. Two TMC lay-ups were investigated, namely, unidirectional and cross-ply SCS-6/Timetal-21S laminates. Experimentally, matrix crack initiation and growth were monitored during tension-tension fatigue tests conducted at room temperature and at an elevated temperature of 200 C. Analytically, thermal residual stresses were included in a fiber bridging (FB) model. The local R-ratio and stress-intensity factor in the matrix due to thermal and mechanical loadings were calculated and used to evaluate the matrix crack growth behavior in the two materials studied. The frictional shear stress term, tau, assumed in this model was used as a curve-fitting parameter for the matrix crack growth data. The scatter band in the values of tau used to fit the matrix crack growth data was significantly reduced when thermal residual stresses were included in the fiber bridging analysis. For a given material system, lay-up and temperature, a single value of tau was sufficient to analyze the crack growth data. It was revealed in this study that thermal residual stresses are an important factor overlooked in the original FB models.

  20. Modelling impacts of second generation bioenergy production on Ecosystem Services in Europe

    NASA Astrophysics Data System (ADS)

    Henner, Dagmar N.; Smith, Pete; Davies, Christian; McNamara, Niall P.

    2015-04-01

    Bioenergy crops are an important source of renewable energy and a possible mechanism to mitigate global warming by replacing fossil fuel energy, which has higher greenhouse gas emissions. There is, however, uncertainty about the impacts of the growth of bioenergy crops on ecosystem services, and this uncertainty is compounded by ongoing climate change. The goal of this project is to develop a comprehensive model that covers as many ecosystem services as possible at a continental level, including biodiversity, water, GHG emissions, soil, and cultural services. The distribution and production of second-generation energy crops, such as Miscanthus, Short Rotation Coppice (SRC) and Short Rotation Forestry (SRF), are currently being modelled, and ecosystem models will be used to examine the impacts of these crops on ecosystem services. The project builds on models of energy crop production, biodiversity, soil impacts, greenhouse gas emissions and other ecosystem services, and on work undertaken in the UK on the ETI-funded ELUM project (www.elum.ac.uk). In addition, methods like water footprint tools, tourism value maps and ecosystem valuation tools and models (e.g. InVest, the TEEB database, the GREET LCA Model, the World Business Council for Sustainable Development corporate ecosystem valuation, the Millennium Ecosystem Assessment and the Ecosystem Services Framework) will be utilised. Research will focus on optimisation of land use change feedbacks on ecosystem services and biodiversity, and on weighting the importance of the individual ecosystem services. Energy crops will be modelled using low, medium and high climate change scenarios for the years between 2015 and 2050. We will present first results for GHG emissions and soil organic carbon change after different land use change scenarios (e.g. arable to Miscanthus, forest to SRF), and with different climate warming scenarios. 
All this will be complemented by the presentation of a matrix including all the factors and ecosystem services influenced by land use change to bioenergy crop production under different climate change scenarios.

  1. Creep of Heat-Resistant Composites of an Oxide-Fiber/Ni-Matrix Family

    NASA Astrophysics Data System (ADS)

    Mileiko, S. T.

    2001-09-01

    A creep model of a composite with a creeping matrix and initially continuous elastic brittle fibers is developed. The model accounts for the fiber fragmentation in the stage of unsteady creep of the composite, which ends with a steady-state creep, where a minimum possible average length of the fiber is achieved. The model makes it possible to analyze the creep rate of the composite in relation to such parameters of its structure as the statistical characteristics of the fiber strength, the creep characteristics of the matrix, and the strength of the fiber-matrix interface, the latter being of fundamental importance. A comparison between the calculation results and the experimental ones obtained on composites with a Ni-matrix and monocrystalline and eutectic oxide fibers as well as on sapphire fiber/TiAl-matrix composites shows that the model is applicable to the computer simulation of the creep behavior of heat-resistant composites and to the optimization of the structure of such composites. By combining the experimental data with calculation results, it is possible to evaluate the heat resistance of composites and the potential of oxide-fiber/Ni-matrix composites. The composite specimens obtained and tested to date reveal their high creep resistance up to a temperature of 1150°C. The maximum operating temperature of the composites can be considerably raised by strengthening the fiber-matrix interface.

  2. Coherent Microwave Scattering Model of Marsh Grass

    NASA Astrophysics Data System (ADS)

    Duan, Xueyang; Jones, Cathleen E.

    2017-12-01

    In this work, we developed an electromagnetic scattering model to analyze radar scattering from tall-grass-covered lands such as wetlands and marshes. The model adopts the generalized iterative extended boundary condition method (GIEBCM) algorithm, previously developed for buried cylindrical media such as vegetation roots, to simulate the scattering from the grass layer. The major challenge of applying GIEBCM to tall grass is the extremely time-consuming iteration among the large number of short subcylinders building up the grass. To overcome this issue, we extended the GIEBCM to multilevel GIEBCM, or M-GIEBCM, in which we first use GIEBCM to calculate a T matrix (transition matrix) database of "straws" with various lengths, thicknesses, orientations, curvatures, and dielectric properties; we then construct the grass with a group of straws from the database and apply GIEBCM again to calculate the T matrix of the overall grass scene. The grass T matrix is transferred to S matrix (scattering matrix) and combined with the ground S matrix, which is computed using the stabilized extended boundary condition method, to obtain the total scattering. In this article, we will demonstrate the capability of the model by simulating scattering from scenes with different grass densities, different grass structures, different grass water contents, and different ground moisture contents. This model will help with radar experiment design and image interpretation for marshland and wetland observations.

  3. A model to predict thermal conductivity of irradiated U-Mo dispersion fuel

    NASA Astrophysics Data System (ADS)

    Burkes, Douglas E.; Huber, Tanja K.; Casella, Andrew M.

    2016-05-01

    Numerous global programs are focused on the continued development of existing and new research and test reactor fuels to achieve maximum attainable uranium loadings to support the conversion of a number of the world's remaining high-enriched uranium fueled reactors to low-enriched uranium fuel. Some of these programs are focused on assisting with the development and qualification of a fuel design that consists of a uranium-molybdenum (U-Mo) alloy dispersed in an aluminum matrix as one option for reactor conversion. Thermal conductivity is an important consideration in determining the operational temperature of the fuel and can be influenced by interaction layer formation between the dispersed phase and matrix and by the concentration of the dispersed phase within the matrix. This paper extends the use of a simple model developed previously to study the influence of interaction layer formation as well as the size and volume fraction of fuel particles dispersed in the matrix, Si additions to the matrix, and Mo concentration in the fuel particles on the effective thermal conductivity of the U-Mo/Al composite during irradiation. The model has been compared to experimental measurements recently conducted on U-Mo/Al dispersion fuels at two different fission densities with acceptable agreement. Observations of the modeled results indicate that formation of an interaction layer and subsequent consumption of the matrix has a rather significant effect on effective thermal conductivity. The modeled interaction layer formation and subsequent consumption of the high thermal conductivity matrix was sensitive to the average dispersed fuel particle size, suggesting this parameter as one of the most effective in minimizing thermal conductivity degradation of the composite, while the influence of Si additions to the matrix in the model was highly dependent upon irradiation conditions.
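    As a hedged stand-in for the paper's dispersion model (which is not specified in the abstract), the classical Maxwell-Eucken relation illustrates the qualitative trend described: growing a low-conductivity interaction layer at the expense of the high-conductivity Al matrix drives the effective conductivity down. The phase conductivities and volume fractions below are hypothetical round numbers, not the paper's data.

    ```python
    def maxwell_eucken(k_m, k_d, v_d):
        """Effective conductivity of a dispersed phase (conductivity k_d,
        volume fraction v_d) in a continuous matrix (k_m), Maxwell-Eucken."""
        num = 2.0 * k_m + k_d + 2.0 * v_d * (k_d - k_m)
        den = 2.0 * k_m + k_d - v_d * (k_d - k_m)
        return k_m * num / den

    # Hypothetical values (W/m-K): Al matrix ~220, U-Mo particles ~15,
    # dispersed phase degraded by low-conductivity interaction product ~10.
    k_fresh = maxwell_eucken(220.0, 15.0, 0.45)        # fresh fuel meat
    k_irradiated = maxwell_eucken(220.0, 10.0, 0.60)   # layer consumed matrix
    assert k_irradiated < k_fresh                      # conductivity degrades
    ```

    Modeling interaction-layer growth as an increase in the dispersed volume fraction (and a drop in its conductivity) reproduces the abstract's observation that matrix consumption dominates the degradation.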

  4. A model to predict thermal conductivity of irradiated U–Mo dispersion fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burkes, Douglas E.; Huber, Tanja K.; Casella, Andrew M.

    The Office of Materials Management and Minimization Reactor Conversion Program continues to develop existing and new research and test reactor fuels to achieve maximum attainable uranium loadings to support the conversion of a number of the world's remaining high-enriched uranium fueled reactors to low-enriched uranium fuel. The program is focused on assisting with the development and qualification of a fuel design that consists of a uranium-molybdenum (U-Mo) alloy dispersed in an aluminum matrix as one option for reactor conversion. Thermal conductivity is an important consideration in determining the operational temperature of the fuel and can be influenced by interaction layer formation between the dispersed phase and matrix and by the concentration of the dispersed phase within the matrix. This paper extends the use of a simple model developed previously to study the influence of interaction layer formation as well as the size and volume fraction of fuel particles dispersed in the matrix, Si additions to the matrix, and Mo concentration in the fuel particles on the effective thermal conductivity of the U-Mo/Al composite during irradiation. The model has been compared to experimental measurements recently conducted on U-Mo/Al dispersion fuels at two different fission densities with acceptable agreement. Observations of the modeled results indicate that formation of an interaction layer and subsequent consumption of the matrix has a rather significant effect on effective thermal conductivity. The modeled interaction layer formation and subsequent consumption of the high thermal conductivity matrix was sensitive to the average dispersed fuel particle size, suggesting this parameter as one of the most effective in minimizing thermal conductivity degradation of the composite, while the influence of Si additions to the matrix in the model was highly dependent upon irradiation conditions.

  5. Assessment of Matrix Multiplication Learning with a Rule-Based Analytical Model--"A Bayesian Network Representation"

    ERIC Educational Resources Information Center

    Zhang, Zhidong

    2016-01-01

    This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It applied rule-based analytical and cognitive task analysis methods to break down the operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…

  6. Constructing a Covariance Matrix that Yields a Specified Minimizer and a Specified Minimum Discrepancy Function Value.

    ERIC Educational Resources Information Center

    Cudeck, Robert; Browne, Michael W.

    1992-01-01

    A method is proposed for constructing a population covariance matrix as the sum of a particular model plus a nonstochastic residual matrix, with the stipulation that the model holds with a prespecified lack of fit. The procedure is considered promising for Monte Carlo studies. (SLD)

  7. Cable Television: End of a Dream. The Network Project Notebook Number Eight.

    ERIC Educational Resources Information Center

    Columbia Univ., New York, NY. Network Project.

    The Notebook is divided into two parts. The first half reprints the transcript of a radio documentary on cable television, one in a series of five MATRIX radio programs produced by the Network Project in 1974. It includes discussions of planning for the new technology and of its present control by corporate conglomerates, and forecasts a…

  8. A new family Jacobian solver for global three-dimensional modeling of atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Zhao, Xuepeng; Turco, Richard P.; Shen, Mei

    1999-01-01

    We present a new technique to solve complex sets of photochemical rate equations that is applicable to global modeling of the troposphere and stratosphere. The approach is based on the concept of "families" of species, whose chemical rate equations are tightly coupled. Variations of species concentrations within a family can be determined by inverting a linearized Jacobian matrix representing the family group. Since this group consists of a relatively small number of species, the corresponding Jacobian has a low order (a minimatrix) compared to the Jacobian of the entire system. However, we go further and define a super-family that is the set of all families. The super-family is also solved by linearization and matrix inversion. The resulting Super-Family Matrix Inversion (SFMI) scheme is more stable and accurate than common family approaches. We discuss the numerical structure of the SFMI scheme and apply our algorithms to a comprehensive set of photochemical reactions. To evaluate performance, the SFMI scheme is compared with an optimized Gear solver. We find that the SFMI technique can be at least an order of magnitude more efficient than existing chemical solvers while maintaining relative errors in the calculations of 15% or less over a diurnal cycle. The largest SFMI errors arise at sunrise and sunset and during the evening when species concentrations may be very low. We show that sunrise/sunset errors can be minimized through a careful treatment of photodissociation during these periods; the nighttime deviations are negligible from the point of view of acceptable computational accuracy. The stability and flexibility of the SFMI algorithm should be sufficient for most modeling applications until major improvements in other modeling factors are achieved. In addition, because of its balanced computational design, SFMI can easily be adapted to parallel computing architectures. 
SFMI thus should allow practical long-term integrations of global chemistry coupled to general circulation and climate models, studies of interannual and interdecadal variability in atmospheric composition, simulations of past multidecadal trends owing to anthropogenic emissions, long-term forecasting associated with projected emissions, and sensitivity analyses for a wide range of physical and chemical parameters.
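    The family-based linearization described above can be illustrated with a minimal sketch (not the SFMI scheme itself): one backward-Euler step for a hypothetical two-species fast-cycling family, where only the low-order family "minimatrix" (I - Δt·J) is inverted rather than the Jacobian of the entire system. The rate constants are invented for illustration.

```python
import numpy as np

# Hypothetical two-species "family" with fast interconversion; a sketch of
# the basic linearized implicit step, NOT the SFMI scheme itself.
K1, K2 = 1.0e3, 1.0e3              # assumed fast interconversion rates

def rates(c):
    return np.array([-K1 * c[0] + K2 * c[1],
                      K1 * c[0] - K2 * c[1]])

def jacobian(c):
    return np.array([[-K1,  K2],
                     [ K1, -K2]])

def implicit_step(c, dt):
    # Backward-Euler step: solve (I - dt*J) dc = dt*f(c) for the family only,
    # so the matrix to invert is a low-order "minimatrix".
    J = jacobian(c)
    dc = np.linalg.solve(np.eye(len(c)) - dt * J, dt * rates(c))
    return c + dc

c = np.array([1.0, 0.0])
for _ in range(50):
    c = implicit_step(c, dt=1.0e-2)    # stable despite dt >> 1/K1
```

Because the columns of J sum to zero, the implicit step conserves total mass exactly while damping the stiff mode, which is the property that makes such family-level inversion attractive.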

  9. Metal matrix composite micromechanics: In-situ behavior influence on composite properties

    NASA Technical Reports Server (NTRS)

    Murthy, P. L. N.; Hopkins, D. A.; Chamis, C. C.

    1989-01-01

    Recent efforts in computational mechanics methods for simulating the nonlinear behavior of metal matrix composites have culminated in the implementation of the Metal Matrix Composite Analyzer (METCAN) computer code. In METCAN material nonlinearity is treated at the constituent (fiber, matrix, and interphase) level where the current material model describes a time-temperature-stress dependency of the constituent properties in a material behavior space. The composite properties are synthesized from the constituent instantaneous properties by virtue of composite micromechanics and macromechanics models. The behavior of metal matrix composites depends on fabrication process variables, in situ fiber and matrix properties, bonding between the fiber and matrix, and/or the properties of an interphase between the fiber and matrix. Specifically, the influence of in situ matrix strength and the interphase degradation on the unidirectional composite stress-strain behavior is examined. These types of studies provide insight into micromechanical behavior that may be helpful in resolving discrepancies between experimentally observed composite behavior and predicted response.

  10. The Biological Observation Matrix (BIOM) format or: how I learned to stop worrying and love the ome-ome.

    PubMed

    McDonald, Daniel; Clemente, Jose C; Kuczynski, Justin; Rideout, Jai Ram; Stombaugh, Jesse; Wendel, Doug; Wilke, Andreas; Huse, Susan; Hufnagle, John; Meyer, Folker; Knight, Rob; Caporaso, J Gregory

    2012-07-12

    We present the Biological Observation Matrix (BIOM, pronounced "biome") format: a JSON-based file format for representing arbitrary observation by sample contingency tables with associated sample and observation metadata. As the number of categories of comparative omics data types (collectively, the "ome-ome") grows rapidly, a general format to represent and archive this data will facilitate the interoperability of existing bioinformatics tools and future meta-analyses. The BIOM file format is supported by an independent open-source software project (the biom-format project), which initially contains Python objects that support the use and manipulation of BIOM data in Python programs, and is intended to be an open development effort where developers can submit implementations of these objects in other programming languages. The BIOM file format and the biom-format project are steps toward reducing the "bioinformatics bottleneck" that is currently being experienced in diverse areas of biological sciences, and will help us move toward the next phase of comparative omics where basic science is translated into clinical and environmental applications. The BIOM file format is currently recognized as an Earth Microbiome Project Standard, and as a Candidate Standard by the Genomic Standards Consortium.
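    A minimal sketch of what such a JSON table might look like. The field names follow our reading of the BIOM 1.0 spec; treat them as assumptions rather than a validated file.

```python
import json

# A minimal observation-by-sample table in the spirit of BIOM 1.0 (JSON).
# Field names are our reading of the published spec; illustrative only.
table = {
    "id": None,
    "format": "Biological Observation Matrix 1.0.0",
    "format_url": "http://biom-format.org",
    "type": "OTU table",
    "generated_by": "example script",
    "date": "2012-07-12T00:00:00",
    "matrix_type": "sparse",
    "matrix_element_type": "int",
    "shape": [2, 3],                 # 2 observations x 3 samples
    "rows": [{"id": "OTU_1", "metadata": None},
             {"id": "OTU_2", "metadata": None}],
    "columns": [{"id": "S1", "metadata": None},
                {"id": "S2", "metadata": None},
                {"id": "S3", "metadata": None}],
    "data": [[0, 0, 5], [0, 2, 3], [1, 1, 7]],   # [row, column, value]
}

text = json.dumps(table)             # serialize
roundtrip = json.loads(text)         # parse back
```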

  11. Dual signal subspace projection (DSSP): a novel algorithm for removing large interference in biomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.

    2016-06-01

    Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interference that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial and time domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is then projected onto the subspace orthogonal to this interference subspace. Main results. The DSSP algorithm is validated using computer simulations and two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapping interference in a wide variety of biomagnetic measurements.
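    The two projections and the row-span intersection can be sketched in a few lines of NumPy on synthetic data. This is a toy illustration of the idea, not the authors' implementation; the sensor count, the spatial signal-subspace basis Us, and the 0.95 intersection threshold are all assumptions of the example.

```python
import numpy as np

# Toy DSSP-style cleanup: 4 sensors, signal subspace spanned by the first
# two sensor axes (in practice this basis comes from a forward model).
t = np.arange(200)
signal_tc = np.sin(0.10 * t)                     # signal time course
interf_tc = np.sin(0.03 * t)                     # interference time course

Us = np.eye(4)[:, :2]                            # spatial signal-subspace basis
Y = (np.outer([1.0, 0.5, 0.0, 0.0], signal_tc)   # signal, inside span(Us)
     + 5.0 * np.outer([0.5, 0.5, 0.5, 0.5], interf_tc))  # broad interference

P = Us @ Us.T

def row_space(A, tol=1e-8):
    # Orthonormal basis (T x r) of the row space of A.
    _, s, vt = np.linalg.svd(A, full_matrices=False)
    return vt[s > tol * s[0]].T

V_in, V_out = row_space(P @ Y), row_space(Y - P @ Y)
u, sv, _ = np.linalg.svd(V_in.T @ V_out, full_matrices=False)
B = V_in @ u[:, sv > 0.95]           # common (interference) temporal subspace
Y_clean = Y - (Y @ B) @ B.T          # project the interference out
```

On this synthetic example the interference time course lies in both row spans, so the intersection recovers it exactly and the projection removes it from every channel.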

  12. An application of the Krylov-FSP-SSA method to parameter fitting with maximum likelihood

    NASA Astrophysics Data System (ADS)

    Dinh, Khanh N.; Sidje, Roger B.

    2017-12-01

    Monte Carlo methods such as the stochastic simulation algorithm (SSA) have traditionally been employed in gene regulation problems. However, there has been increasing interest in directly obtaining the probability distribution of the molecules involved by solving the chemical master equation (CME). This requires addressing the curse of dimensionality that is inherent in most gene regulation problems. The finite state projection (FSP) seeks to address this challenge, and there are variants that further reduce the size of the projection or accelerate the resulting matrix exponential. The Krylov-FSP-SSA variant has proved numerically efficient by combining, on one hand, the SSA to adaptively drive the FSP, and on the other hand, adaptive Krylov techniques to evaluate the matrix exponential. Here we apply Krylov-FSP-SSA to a mutually inhibitory gene network synthetically engineered in Saccharomyces cerevisiae, in which bimodality arises. We show numerically that the approach can efficiently approximate the transient probability distribution, and this has important implications for parameter fitting, where the CME has to be solved for many different parameter sets. The fitting scheme amounts to an optimization problem of finding the parameter set such that the transient probability distributions fit the observations with maximum likelihood. We compare five optimization schemes for this difficult problem, thereby providing further insight into this approach to parameter estimation, which is often applied to models in systems biology where free parameters need to be calibrated. Work supported by NSF grant DMS-1320849.
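    To make the FSP idea concrete, here is a hedged sketch for a simple birth-death process: truncate the state space at N molecules, assemble the CME generator, and obtain the transient distribution from a matrix exponential. The rates and truncation are invented; the paper's Krylov-FSP-SSA additionally adapts the projection and uses Krylov approximations of the exponential.

```python
import numpy as np
from scipy.linalg import expm

# Finite-state-projection sketch for a birth-death process: production at
# rate k, degradation at rate g per molecule, truncated at N copies.
k, g, N = 10.0, 1.0, 60
A = np.zeros((N + 1, N + 1))         # CME generator on the truncated space
for n in range(N + 1):
    if n < N:
        A[n + 1, n] += k             # birth: n -> n+1
        A[n, n] -= k
    if n > 0:
        A[n - 1, n] += g * n         # death: n -> n-1
        A[n, n] -= g * n

p0 = np.zeros(N + 1)
p0[0] = 1.0                          # start with zero molecules
p = expm(A * 5.0) @ p0               # transient distribution at t = 5

mean = np.dot(np.arange(N + 1), p)   # should approach k/g * (1 - e^{-g t})
```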

  13. COMADRE: a global data base of animal demography.

    PubMed

    Salguero-Gómez, Roberto; Jones, Owen R; Archer, C Ruth; Bein, Christoph; de Buhr, Hendrik; Farack, Claudia; Gottschalk, Fränce; Hartmann, Alexander; Henning, Anne; Hoppe, Gabriel; Römer, Gesa; Ruoff, Tara; Sommer, Veronika; Wille, Julia; Voigt, Jakob; Zeh, Stefan; Vieregg, Dirk; Buckley, Yvonne M; Che-Castaldo, Judy; Hodgson, David; Scheuerlein, Alexander; Caswell, Hal; Vaupel, James W

    2016-03-01

    The open-data scientific philosophy is being widely adopted and is promoting considerable progress in ecology and evolution. Open-data global data bases now exist on animal migration, species distribution, conservation status, etc. However, a gap exists for data on population dynamics spanning the rich diversity of the animal kingdom world-wide. This information is fundamental to our understanding of the conditions that have shaped variation in animal life histories and their relationships with the environment, as well as the determinants of invasion and extinction. Matrix population models (MPMs) are among the demographic tools most widely used by animal ecologists. MPMs project population dynamics based on the reproduction, survival and development of individuals in a population over their life cycle. The outputs from MPMs have direct biological interpretations, facilitating comparisons among animal species as different as Caenorhabditis elegans, Loxodonta africana and Homo sapiens. Thousands of animal demographic records exist in the form of MPMs, but they are dispersed throughout the literature, rendering comparative analyses difficult. Here, we introduce the COMADRE Animal Matrix Database, an open-data online repository, which in its version 1.0.0 contains data on 345 species world-wide, from 402 studies with a total of 1625 population projection matrices. COMADRE also contains ancillary information (e.g. ecoregion, taxonomy, biogeography) that facilitates interpretation of the numerous demographic metrics that can be derived from its MPMs. We provide R code for some of these examples. The open-data nature of COMADRE, together with its ancillary information, will facilitate comparative analysis, as will the growing availability of databases focusing on other aspects of the rich animal diversity, and tools to query and combine them. 
    Through future frequent updates of COMADRE, and its integration with other online resources, we encourage animal ecologists to tackle global ecological and evolutionary questions with unprecedented sample sizes. © 2016 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
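    The standard MPM outputs mentioned above (asymptotic growth rate, stable stage distribution, reproductive values) can be sketched in NumPy for a hypothetical 3-stage projection matrix; the entries are invented for illustration, not taken from COMADRE.

```python
import numpy as np

# Hypothetical 3-stage projection matrix A (entries invented): the first row
# holds fecundities, the sub-diagonal survival/maturation, the corner adult
# survival. Columns index the current stage, rows the next stage.
A = np.array([[0.0, 1.5, 3.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.4, 0.8]])

vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
lam = vals[i].real                   # asymptotic growth rate (dominant eigenvalue)
w = np.abs(vecs[:, i].real)
w /= w.sum()                         # stable stage distribution (right eigenvector)

valsL, vecsL = np.linalg.eig(A.T)
j = np.argmax(valsL.real)
v = np.abs(vecsL[:, j].real)
v /= v[0]                            # reproductive values (left eigenvector, v1 = 1)
```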

  14. Digital filtering and model updating methods for improving the robustness of near-infrared multivariate calibrations.

    PubMed

    Kramer, Kirsten E; Small, Gary W

    2009-02-01

    Fourier transform near-infrared (NIR) transmission spectra are used for quantitative analysis of glucose for 17 sets of prediction data sampled as much as six months outside the timeframe of the corresponding calibration data. Aqueous samples containing physiological levels of glucose in a matrix of bovine serum albumin and triacetin are used to simulate clinical samples such as blood plasma. Background spectra of a single analyte-free matrix sample acquired during the instrumental warm-up period on the prediction day are used for calibration updating and for determining the optimal frequency response of a preprocessing infinite impulse response time-domain digital filter. By tuning the filter and the calibration model to the specific instrumental response associated with the prediction day, the calibration model is given enhanced ability to operate over time. This methodology is demonstrated in conjunction with partial least squares calibration models built with a spectral range of 4700-4300 cm(-1). By using a subset of the background spectra to evaluate the prediction performance of the updated model, projections can be made regarding the success of subsequent glucose predictions. If a threshold standard error of prediction (SEP) of 1.5 mM is used to establish successful model performance with the glucose samples, the corresponding threshold for the SEP of the background spectra is found to be 1.3 mM. For calibration updating in conjunction with digital filtering, SEP values of all 17 prediction sets collected over 3-178 days displaced from the calibration data are below 1.5 mM. In addition, the diagnostic based on the background spectra correctly assesses the prediction performance in 16 of the 17 cases.
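    As a rough illustration of IIR preprocessing in this spirit, the sketch below bandpass-filters a synthetic trace containing an in-band component plus baseline drift. The Butterworth design and cutoff frequencies are hypothetical; in the paper the filter response is instead tuned to the prediction day's background spectra.

```python
import numpy as np
from scipy import signal

# Hedged sketch: a low-order Butterworth bandpass (hypothetical cutoffs, in
# Nyquist-normalized units) applied zero-phase to a synthetic trace.
b, a = signal.butter(2, [0.05, 0.4], btype="bandpass")
x = np.sin(2 * np.pi * 0.1 * np.arange(256))   # in-band component
drift = np.linspace(0.0, 5.0, 256)             # slow baseline drift (out of band)
y = signal.filtfilt(b, a, x + drift)           # drift suppressed, x retained
```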

  15. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists of finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods for these problems are based on projection techniques onto appropriate subspaces. The main attraction of these methods is that they require the matrix only in the form of matrix-by-vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method, are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one-processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
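    The matvec-only access pattern emphasized above is easy to demonstrate with a modern Lanczos-type solver. A sketch using SciPy's eigsh (ARPACK's implicitly restarted Lanczos) on a 1-D Laplacian, wrapped in a LinearOperator so the solver sees nothing but matrix-vector products:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, eigsh

# 1-D Laplacian as a sparse symmetric matrix; the eigensolver touches it
# only through the matvec callback, as the abstract emphasizes.
n = 200
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
op = LinearOperator((n, n), matvec=A.dot, dtype=float)

vals, vecs = eigsh(op, k=3, which="SA")   # three smallest eigenvalues
```

For this matrix the exact eigenvalues are 2 - 2 cos(πk/(n+1)), which gives a direct check on the computed values.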

  16. The Iron Project and the Rmax Project: X-ray Spectroscopy of Highly Charged Ions

    NASA Astrophysics Data System (ADS)

    Oelgoetz, Justin; Pradhan, Anil; Nahar, Sultana; Montenegro, Maximiliano; Eissner, Werner

    2006-05-01

    We will describe recent work on (1) the modeling of spectra arising from highly charged ions, and (2) the data that goes into such models. Emission from the Kα and, in some cases, Kβ lines of the Li-, He-, and H-like states of ions is of great interest in X-ray astronomy and in high-temperature laboratory sources such as fusion devices. Current results of modeling these lines, including all relevant atomic processes for the elements Fe, Ni and Ca, will be presented, along with a discussion of the computational methods employed and the possible implications of the work. An extensive set of oscillator strengths, line strengths and radiative decay rates for the allowed and forbidden transitions in Fe XVIII has been obtained in the relativistic Breit-Pauli R-Matrix approximation. The results include 1174 fine structure levels of total angular momenta J = 1/2-17/2 and n ≤ 10, and about 171,500 transitions among them. Sample results will be presented. Parts of this work were supported under grants from the NSF and the NASA Astrophysical Theory Program as well as by Los Alamos National Laboratory, which is operated under Department of Energy contract W-7405-ENG-36 by the University of California. Many of the calculations were carried out at the Ohio Supercomputer Center.

  17. Electronic properties of quasi one-dimensional quantum wire models under equal coupling strength superpositions of Rashba and Dresselhaus spin-orbit interactions in the presence of an in-plane magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papp, E.; Micu, C.; Racolta, D.

    In this paper one deals with the theoretical derivation of energy bands and related wavefunctions characterizing quasi-1D semiconductor heterostructures, such as InAs quantum wire models. These models are characterized by equal coupling strength superpositions of Rashba and Dresselhaus spin-orbit interactions of dimensionless magnitude a under the influence of in-plane magnetic fields of magnitude B. We found that the orientations of the field can be selected by virtue of symmetry requirements. For this purpose one resorts to spin conservations, but alternative conditions providing sensible simplifications of the energy-band formula can reasonably be accounted for. Besides the wavenumber k of the 1D electron, one deals with the spin-like s = ±1 factors in front of the square-root term of the energy. The spinorial wavefunction, once obtained, opens the way to the derivation of spin precession effects. For this purpose one resorts to the projections of the wavenumber operator on complementary spin states. Such projections are responsible for related displacements proceeding along the Ox-axis. This results in a 2D rotation matrix providing both the precession angle and the precession axis.

  18. Restoring Function after Volumetric Muscle Loss: Extracellular Matrix Allograft or Minced Muscle Autograft

    DTIC Science & Technology

    2017-10-01

    ...minced and placed intramuscularly at the site of the VML. Prior small and large animal studies in our laboratory have demonstrated that minced muscle autograft (MMA), by virtue of...significant delay in the project initiation. First, a large animal study at the ISR indicated some concerns with the extracellular matrix allograft that

  19. Understanding the interdiffusion behavior and determining the long term stability of tungsten fiber reinforced niobium-base matrix composite systems

    NASA Technical Reports Server (NTRS)

    Tien, John K.

    1990-01-01

    The long term interdiffusional stability of tungsten fiber reinforced niobium alloy composites is addressed. The matrix alloy that is most promising for use as a high temperature structural material for reliable long-term space power generation is Nb1Zr. As an ancillary project to this program, efforts were made to assess the nature and kinetics of interphase reaction between selected beryllide intermetallics and nickel and iron aluminides.

  20. CMC Research at NASA Glenn in 2014: Recent Progress and Plans

    NASA Technical Reports Server (NTRS)

    Grady, Joseph E.

    2014-01-01

    As part of NASA's Aeronautical Sciences project, Glenn Research Center has developed advanced fiber and matrix constituents for a 2700 °F CMC for turbine engine applications. Fiber, matrix and CMC development activities will be reviewed, and the improvements in the properties and durability of each will be summarized. Plans for 2014 will be summarized, including fabrication and durability testing of the 2700 °F CMC and status updates on research collaborations underway with AFRL and DOE.

  1. A fast fully constrained geometric unmixing of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Li, Xiao-run; Cui, Jian-tao; Zhao, Liao-ying; Zheng, Jun-peng

    2014-11-01

    A great challenge in hyperspectral image analysis is decomposing a mixed pixel into a collection of endmembers and their corresponding abundance fractions. This paper presents an improved implementation of the Barycentric Coordinate approach to unmixing hyperspectral images, integrated with the Most-Negative Remove Projection method to meet the abundance sum-to-one constraint (ASC) and the abundance non-negativity constraint (ANC). The original Barycentric Coordinate approach interprets the unmixing problem as a simplex volume ratio problem, solved by calculating the determinants of two augmented matrices: one consists of all the endmembers, and the other consists of the to-be-unmixed pixel together with all the endmembers except the one corresponding to the abundance being estimated. In this paper, we first modify the Barycentric Coordinate approach by bringing in the Matrix Determinant Lemma to simplify the unmixing process, so that the calculation involves only linear matrix and vector operations. The per-pixel matrix determinant calculation required by the original algorithm is thus avoided. At the end of this step, the estimated abundances meet the ASC. Then, the Most-Negative Remove Projection method is used to make the abundance fractions meet the full constraints. The algorithm is demonstrated on both synthetic and real images. It yields abundance maps similar to those obtained by FCLS, while outperforming it in runtime owing to its computational simplicity.
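    A hedged sketch of sum-to-one constrained unmixing (not the paper's determinant-lemma implementation): solve an augmented least-squares system that enforces the ASC, then clip and renormalize negative abundances as a crude stand-in for the Most-Negative Remove Projection. The endmember spectra are invented.

```python
import numpy as np

# Invented endmember spectra M (4 bands x 3 endmembers) and a noiseless
# mixed pixel r built from known abundances.
M = np.array([[0.1, 0.7, 0.3],
              [0.5, 0.2, 0.6],
              [0.9, 0.4, 0.1],
              [0.3, 0.8, 0.5]])
a_true = np.array([0.2, 0.5, 0.3])
r = M @ a_true

# Augment with a row of ones so the least-squares solution obeys the ASC.
A_aug = np.vstack([M, np.ones((1, 3))])
b_aug = np.append(r, 1.0)
a, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

a = np.clip(a, 0.0, None)          # crude ANC enforcement
a /= a.sum()                       # restore the sum-to-one constraint
```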

  2. Environmental impact assessment of Gonabad municipal waste landfill site using Leopold Matrix

    PubMed Central

    Sajjadi, Seyed Ali; Aliakbari, Zohreh; Matlabi, Mohammad; Biglari, Hamed; Rasouli, Seyedeh Samira

    2017-01-01

    Introduction An environmental impact assessment (EIA) before embarking on any project is a useful tool to reduce, where possible, the potential effects of each project, including landfills. The main objective of this study was to assess the environmental impact of the current municipal solid waste disposal site of Gonabad by using the Iranian Leopold matrix method. Methods This cross-sectional study was conducted to assess the environmental impacts of a landfill site in Gonabad in 2015 by an Iranian matrix (modified Leopold matrix). The study was based on field visits to the landfill and information collected from various sources; five available options were analyzed and compared: continuation of the current disposal practices, construction of a new sanitary landfill, recycling plans, composting, and incineration plants. The best option was proposed to replace the existing landfill. Results The current approach scores 2.35, construction of a new sanitary landfill 1.59, the compost plant 1.57, and the recycling and incineration plants 1.68 and 2.3, respectively. Conclusion The results showed that continuation of the current method of disposal is rejected due to severe environmental damage and health problems. A compost plant, with the lowest negative score, is the best option for the waste disposal site of Gonabad City and has priority over the other four options. PMID:28465797

  3. Environmental impact assessment of Gonabad municipal waste landfill site using Leopold Matrix.

    PubMed

    Sajjadi, Seyed Ali; Aliakbari, Zohreh; Matlabi, Mohammad; Biglari, Hamed; Rasouli, Seyedeh Samira

    2017-02-01

    An environmental impact assessment (EIA) before embarking on any project is a useful tool to reduce, where possible, the potential effects of each project, including landfills. The main objective of this study was to assess the environmental impact of the current municipal solid waste disposal site of Gonabad by using the Iranian Leopold matrix method. This cross-sectional study was conducted to assess the environmental impacts of a landfill site in Gonabad in 2015 by an Iranian matrix (modified Leopold matrix). The study was based on field visits to the landfill and information collected from various sources; five available options were analyzed and compared: continuation of the current disposal practices, construction of a new sanitary landfill, recycling plans, composting, and incineration plants. The best option was proposed to replace the existing landfill. The current approach scores 2.35, construction of a new sanitary landfill 1.59, the compost plant 1.57, and the recycling and incineration plants 1.68 and 2.3, respectively. The results showed that continuation of the current method of disposal is rejected due to severe environmental damage and health problems. A compost plant, with the lowest negative score, is the best option for the waste disposal site of Gonabad City and has priority over the other four options.

  4. Amerciamysis bahia Stochastic Matrix Population Model for Laboratory Populations

    EPA Science Inventory

    The population model described here is a stochastic, density-independent matrix model for integrating the effects of toxicants on survival and reproduction of the marine invertebrate, Americamysis bahia. The model was constructed using Microsoft® Excel 2003. The focus of the mode...

  5. Hybrid-dimensional modelling of two-phase flow through fractured porous media with enhanced matrix fracture transmission conditions

    NASA Astrophysics Data System (ADS)

    Brenner, Konstantin; Hennicker, Julian; Masson, Roland; Samier, Pierre

    2018-03-01

    In this work, we extend to two-phase flow the single-phase Darcy flow model proposed in [26], [12], in which the (d - 1)-dimensional flow in the fractures is coupled with the d-dimensional flow in the matrix. Three types of so-called hybrid-dimensional two-phase Darcy flow models are proposed. They all account for fractures acting either as drains or as barriers, since they allow pressure jumps at the matrix-fracture interfaces. The models also permit the treatment of gravity-dominated flow as well as discontinuous capillary pressure at the material interfaces. The three models differ in their transmission conditions at matrix-fracture interfaces: while the first model accounts for the nonlinear two-phase Darcy flux conservations, the second and third are based on the linear single-phase Darcy flux conservations combined with different approximations of the mobilities. We adapt the Vertex Approximate Gradient (VAG) scheme to this problem, in order to account for anisotropy and heterogeneity as well as for applicability on general meshes. Several test cases are presented to compare our hybrid-dimensional models to the generic equi-dimensional model, in which fractures have the same dimension as the matrix, providing deep insight into the quality of the proposed reduced models.

  6. Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis

    DTIC Science & Technology

    2005-07-25

    analysis. Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery...spatial resolution permits different materials present in the area covered by a single pixel. The linear mixture model says that a pixel reflectance in...in r. In the linear mixture model, r is considered as the linear mixture of m1, m2, …, mP as r = Mα + n (1), where n is included to account for
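    The nonnegative matrix factorization named in the keywords can be sketched with the classic Lee-Seung multiplicative updates; this is a generic NMF sketch, not the report's constrained algorithm.

```python
import numpy as np

# Lee-Seung multiplicative updates for R ≈ W H with all factors nonnegative.
rng = np.random.default_rng(0)
R = rng.random((20, 30))             # nonnegative data (e.g. pixels x bands)
W = rng.random((20, 4)) + 0.1        # nonnegative initializations
H = rng.random((4, 30)) + 0.1

err0 = np.linalg.norm(R - W @ H)     # initial Frobenius error
for _ in range(200):
    # Each update is elementwise nonnegative and nonincreasing in error.
    H *= (W.T @ R) / (W.T @ W @ H + 1e-12)
    W *= (R @ H.T) / (W @ H @ H.T + 1e-12)
err = np.linalg.norm(R - W @ H)      # reduced reconstruction error
```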

  7. The Development of Multicultural Counselling Competencies (MCC) Training Module Based on MCC Matrix Model by Sue et al. (1992)

    ERIC Educational Resources Information Center

    Anuar, Azad Athahiri; Rozubi, Norsayyidatina Che; Abdullah, Haslee Sharil

    2015-01-01

    The aims of this study were to develop and validate a MCC training module for trainee counselors based on the MCC matrix model by Sue et al. (1992). This module encompassed five sub-modules and 11 activities developed along the concepts and components of the MCC matrix model developed by Sue, Arredondo and McDavis (1992). The design method used in this…

  8. Supersymmetric gauged matrix models from dimensional reduction on a sphere

    NASA Astrophysics Data System (ADS)

    Closset, Cyril; Ghim, Dongwook; Seong, Rak-Kyeong

    2018-05-01

    It was recently proposed that N = 1 supersymmetric gauged matrix models have a duality of order four — that is, a quadrality — reminiscent of infrared dualities of SQCD theories in higher dimensions. In this note, we show that the zero-dimensional quadrality proposal can be inferred from the two-dimensional Gadde-Gukov-Putrov triality. We consider two-dimensional N = (0, 2) SQCD compactified on a sphere with the half-topological twist. For a convenient choice of R-charge, the zero-mode sector on the sphere gives rise to a simple N = 1 gauged matrix model. Triality on the sphere then implies a triality relation for the supersymmetric matrix model, which can be completed to the full quadrality.

  9. Modeling food matrix effects on chemical reactivity: Challenges and perspectives.

    PubMed

    Capuano, Edoardo; Oliviero, Teresa; van Boekel, Martinus A J S

    2017-06-29

    The same chemical reaction may differ in its equilibrium position (i.e., thermodynamics) and its kinetics when studied in different foods. The diversity in the chemical composition of food and in its structural organization at macro-, meso-, and microscopic levels, that is, the food matrix, is responsible for this difference. In this viewpoint paper, the multiple and interconnected ways the food matrix can affect chemical reactivity are summarized. Moreover, mechanistic and empirical approaches to explain and predict the effect of the food matrix on chemical reactivity are described. Mechanistic models aim to quantify the effect of the food matrix based on a detailed understanding of the chemical and physical phenomena occurring in food. Their applicability is at present limited to very simple food systems. Empirical modeling based on machine learning combined with data-mining techniques may represent an alternative, useful option to predict the effect of the food matrix on chemical reactivity and to identify chemical and physical properties to be tested further. In this way, the mechanistic understanding of the effect of the food matrix on chemical reactions can be improved.

  10. Applications of Perron-Frobenius theory to population dynamics.

    PubMed

    Li, Chi-Kwong; Schneider, Hans

    2002-05-01

    By the use of Perron-Frobenius theory, simple proofs are given of the Fundamental Theorem of Demography and of a theorem of Cushing and Yicang on the net reproductive rate occurring in matrix models of population dynamics. The latter result, which is closely related to the Stein-Rosenberg theorem in numerical linear algebra, is further refined with some additional nonnegative matrix theory. When the fertility matrix is scaled by the net reproductive rate, the growth rate of the model is 1. More generally, we show how to achieve a given growth rate for the model by scaling the fertility matrix. Demographic interpretations of the results are given.
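    The scaling result can be checked numerically: split a hypothetical projection matrix into transitions T and fertilities F, compute the net reproductive rate R0 as the spectral radius of F(I - T)^(-1), and verify that scaling F by 1/R0 yields a growth rate of exactly 1. The matrix entries are invented for illustration.

```python
import numpy as np

# Hypothetical 2-stage model: T = survival/transition, F = fertility.
T = np.array([[0.2, 0.0],
              [0.3, 0.5]])
F = np.array([[0.0, 2.0],
              [0.0, 0.0]])

def rho(M):
    # Spectral radius of M.
    return max(abs(np.linalg.eigvals(M)))

R0 = rho(F @ np.linalg.inv(np.eye(2) - T))   # net reproductive rate
growth = rho(T + F / R0)                     # equals 1 after scaling F by 1/R0
```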

  11. An Intelligent Architecture Based on Field Programmable Gate Arrays Designed to Detect Moving Objects by Using Principal Component Analysis

    PubMed Central

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406
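    A toy version of the PCA background model described above, in NumPy rather than on an FPGA: build a PCA subspace from background "frames" and flag motion where the reconstruction error of a new frame is large. The frame size, subspace dimension, and threshold logic are assumptions of this sketch.

```python
import numpy as np

# Toy PCA background model: learn a low-dimensional subspace from background
# "frames" (1-D here for brevity), then score new frames by their distance
# to that subspace.
rng = np.random.default_rng(1)
bg = np.linspace(0.0, 1.0, 64) + rng.normal(0.0, 0.01, (50, 64))  # 50 frames
mu = bg.mean(axis=0)
_, _, Vt = np.linalg.svd(bg - mu, full_matrices=False)
U = Vt[:5].T                                   # top 5 principal directions

def residual(frame):
    # Reconstruction error: distance from the frame to the PCA subspace.
    d = frame - mu
    return np.linalg.norm(d - U @ (U.T @ d))

clean = bg[0]
moving = clean.copy()
moving[20:30] += 1.0                           # a bright "object" enters
```

Thresholding residual(frame) against a value calibrated on background frames yields the motion mask; the paper performs the equivalent steps (correlation matrix, Jacobi diagonalization, subspace projection) in hardware.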

  12. Predicting thermo-mechanical behaviour of high minor actinide content composite oxide fuel in a dedicated transmutation facility

    NASA Astrophysics Data System (ADS)

    Lemehov, S. E.; Sobolev, V. P.; Verwerft, M.

    2011-09-01

    The European Facility for Industrial Transmutation (EFIT) of the minor actinides (MA) from LWR spent fuel is being developed in the integrated project EUROTRANS within the 6th Framework Program of EURATOM. Two composite uranium-free fuel systems, containing a large fraction of MA, are proposed as the main candidates: a CERCER with magnesia matrix hosting (Pu,MA)O2-x particles, and a CERMET with metallic molybdenum matrix. The long-term thermal and mechanical behaviour of the fuel under the expected EFIT operating conditions is one of the critical issues in the core design. To make a reliable prediction of long-term thermo-mechanical behaviour of the hottest fuel rods in the lead-cooled version of EFIT, with thermal power of 400 MW, different fuel performance codes have been used. This study describes the main results of modelling the thermo-mechanical behaviour of the hottest CERCER fuel rods with the fuel performance code MACROS, which indicate that the CERCER fuel residence time can safely reach at least 4-5 effective full power years.

  13. An intelligent architecture based on Field Programmable Gate Arrays designed to detect moving objects by using Principal Component Analysis.

    PubMed

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices.
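
    In software, the PCA background model and the thresholded-difference detection step described above might look as follows (a NumPy sketch; `n_components`, the threshold value, and the synthetic image sizes in the usage below are illustrative assumptions, not the paper's FPGA parameters):

```python
import numpy as np

def fit_background_basis(frames, n_components=8):
    """Learn a PCA subspace from background frames (rows = flattened images)."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # Correlation/covariance matrix: the stage the paper
    # diagonalizes in hardware with the Jacobi method.
    cov = centered.T @ centered / len(frames)
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = np.argsort(eigvals)[::-1][:n_components]
    return mean, eigvecs[:, top]

def detect_motion(frame, mean, basis, threshold=30.0):
    """Flag pixels the background subspace cannot reconstruct."""
    centered = frame - mean
    reconstruction = basis @ (basis.T @ centered)
    return np.abs(centered - reconstruction) > threshold
```

    A pixel belonging to the learned background projects almost entirely into the subspace, so its residual stays below the threshold; a moving object leaves a large residual and is flagged.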

  14. Grassmann matrix quantum mechanics

    DOE PAGES

    Anninos, Dionysios; Denef, Frederik; Monten, Ruben

    2016-04-21

    We explore quantum mechanical theories whose fundamental degrees of freedom are rectangular matrices with Grassmann valued matrix elements. We study particular models where the low energy sector can be described in terms of a bosonic Hermitian matrix quantum mechanics. We describe the classical curved phase space that emerges in the low energy sector. The phase space lives on a compact Kähler manifold parameterized by a complex matrix, of the type discovered some time ago by Berezin. The emergence of a semiclassical bosonic matrix quantum mechanics at low energies requires that the original Grassmann matrices be in the long rectangular limit. In conclusion, we discuss possible holographic interpretations of such matrix models which, by construction, are endowed with a finite-dimensional Hilbert space.

  15. Salient Object Detection via Structured Matrix Decomposition.

    PubMed

    Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J

    2016-05-04

    Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.
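
    The baseline low-rank-plus-sparse model that this work extends (principal component pursuit, without the paper's tree-structured and Laplacian regularizations) can be sketched with an inexact augmented Lagrangian scheme; the `lam` and `mu` defaults below follow common heuristics from the robust PCA literature and are assumptions, not the authors' settings:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(M, tau):
    """Entrywise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def rpca(X, lam=None, mu=None, n_iter=300):
    """Split X into low-rank L (background) + sparse S (salient part)."""
    m, n = X.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(X).sum()  # common heuristic
    Y = np.zeros_like(X)                       # Lagrange multipliers
    S = np.zeros_like(X)
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)      # low-rank update
        S = soft(X - L + Y / mu, lam / mu)     # sparse update
        Y = Y + mu * (X - L - S)               # dual ascent
    return L, S
```

    The two deficiencies the paper targets show up directly in this sketch: the `soft` step treats every entry of S independently, and when L and S are coherent the two proximal updates struggle to separate them.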

  16. Random matrix approach to the dynamics of stock inventory variations

    NASA Astrophysics Data System (ADS)

    Zhou, Wei-Xing; Mu, Guo-Hua; Kertész, János

    2012-09-01

    It is well accepted that investors can be classified into groups owing to distinct trading strategies, which forms the basic assumption of many agent-based models for financial markets when agents are not zero-intelligent. However, empirical tests of these assumptions are still very rare due to the lack of order flow data. Here we adopt the order flow data of Chinese stocks to tackle this problem by investigating the dynamics of inventory variations for individual and institutional investors, which contain rich information about the trading behavior of investors and have a crucial influence on price fluctuations. We find that the distributions of the cross-correlation coefficient Cij have power-law forms in the bulk that are followed by exponential tails, and there are more positive coefficients than negative ones. In addition, it is more likely that two individuals or two institutions have a stronger inventory variation correlation than one individual and one institution. We find that the largest and the second largest eigenvalues (λ1 and λ2) of the correlation matrix cannot be explained by random matrix theory and that the projections of investors' inventory variations on the first eigenvector u(λ1) are linearly correlated with stock returns, where individual investors play a dominating role. The investors are classified into three categories based on the cross-correlation coefficients CVR between inventory variations and stock returns. A strong Granger causality is unveiled from stock returns to inventory variations, which means that a large proportion of individuals hold the reversing trading strategy while a small part of individuals hold the trending strategy. Our empirical findings have scientific significance for the understanding of investors' trading behavior and for the construction of agent-based models for emerging stock markets.
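
    The random-matrix test used here can be illustrated generically: compare the eigenvalues of an empirical correlation matrix against the Marchenko-Pastur upper edge (1 + sqrt(N/T))^2 expected for pure noise (a toy sketch with synthetic data, not the authors' inventory-variation series):

```python
import numpy as np

def deviating_eigenvalues(data):
    """Eigenvalues of the empirical correlation matrix lying above the
    Marchenko-Pastur upper edge, i.e. not explainable as pure noise."""
    T, N = data.shape                          # T observations, N series
    z = (data - data.mean(0)) / data.std(0)    # standardize each series
    C = z.T @ z / T                            # empirical correlation matrix
    lam_max = (1 + np.sqrt(N / T)) ** 2        # MP upper edge for unit variance
    eigvals = np.linalg.eigvalsh(C)
    return eigvals[eigvals > lam_max]
```

    For i.i.d. noise the returned set is (essentially) empty; adding a common factor to all series produces one large deviating eigenvalue, analogous to λ1 in the paper.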

  17. Can Condensing Organic Aerosols Lead to Less Cloud Particles?

    NASA Astrophysics Data System (ADS)

    Gao, C. Y.; Tsigaridis, K.; Bauer, S.

    2017-12-01

    We examined the impact of condensing organic aerosols on activated cloud number concentration in a new aerosol microphysics box model, MATRIX-VBS. The model includes the volatility-basis set (VBS) framework in the aerosol microphysical scheme MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), which resolves aerosol mass and number concentrations and aerosol mixing state. Preliminary results show that by including the condensation of organic aerosols, the new model (MATRIX-VBS) produces fewer activated particles than the original model (MATRIX), which treats organic aerosols as non-volatile. Parameters such as aerosol chemical composition, mass and number concentrations, and particle sizes, which affect activated cloud number concentration, are thoroughly evaluated via a suite of Monte-Carlo simulations. The Monte-Carlo simulations also provide information on which climate-relevant parameters play a critical role in aerosol evolution in the atmosphere. This study also helps simplify the newly developed box model, which will soon be implemented in the global model GISS ModelE as a module.

  18. Graphics Flutter Analysis Methods, an interactive computing system at Lockheed-California Company

    NASA Technical Reports Server (NTRS)

    Radovcich, N. A.

    1975-01-01

    An interactive computer graphics system, Graphics Flutter Analysis Methods (GFAM), was developed to complement FAMAS, a matrix-oriented batch computing system, and other computer programs in performing complex numerical calculations using a fully integrated data management system. GFAM has many of the matrix operation capabilities found in FAMAS, but on a smaller scale, and is utilized when the analysis requires a high degree of interaction between the engineer and computer, and schedule constraints exclude the use of batch entry programs. Applications of GFAM to a variety of preliminary design, development design, and project modification programs suggest that interactive flutter analysis using matrix representations is a feasible and cost effective computing tool.

  19. Towards Extending Forward Kinematic Models on Hyper-Redundant Manipulator to Cooperative Bionic Arms

    NASA Astrophysics Data System (ADS)

    Singh, Inderjeet; Lakhal, Othman; Merzouki, Rochdi

    2017-01-01

    Forward kinematics is a stepping stone towards finding an inverse solution and, subsequently, a dynamic model of a robot. Hence, a study and comparison of various Forward Kinematic Models (FKMs) is necessary for robot design. This paper compares three FKMs on the same hyper-redundant Compact Bionic Handling Assistant (CBHA) manipulator under the same conditions, with the aim of informing the modeling of cooperative bionic manipulators. Two of the methods are quantitative, the Arc Geometry HTM (Homogeneous Transformation Matrix) Method and the Dual Quaternion Method, while the third is a Hybrid Method combining quantitative and qualitative approaches. The methods are compared theoretically, and experimental results are discussed to add further insight. HTM, the most widely used and accepted technique, is taken as the reference, and the trajectory deviations of the other techniques are measured with respect to it, indicating which method most accurately captures the kinematic behavior of the CBHA under real-time control.
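
    The HTM baseline in this comparison is the standard construction: chain one 4x4 homogeneous transform per link. A minimal sketch for an illustrative planar two-revolute-joint arm (not the CBHA's arc-geometry parameterization):

```python
import numpy as np

def htm(theta, length):
    """Homogeneous transform of one planar revolute link:
    rotate by theta about z, then translate along the rotated link."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, length * c],
                     [s,  c, 0, length * s],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def forward_kinematics(joint_angles, link_lengths):
    """End-effector pose as the product of per-link HTMs."""
    T = np.eye(4)
    for theta, L in zip(joint_angles, link_lengths):
        T = T @ htm(theta, L)
    return T
```

    Because each link contributes one matrix factor, the same chaining extends naturally to hyper-redundant arms: a continuum section is approximated by many short links, which is the spirit of the arc-geometry HTM model compared in the paper.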

  20. Fermionic topological quantum states as tensor networks

    NASA Astrophysics Data System (ADS)

    Wille, C.; Buerschaper, O.; Eisert, J.

    2017-06-01

    Tensor network states, and in particular projected entangled pair states, play an important role in the description of strongly correlated quantum lattice systems. They not only serve as variational states in numerical simulation methods but also provide a framework for classifying phases of quantum matter and capture notions of topological order in a stringent and rigorous language. The rapid development in this field for spin models and bosonic systems has not yet been mirrored by an analogous development for fermionic models. In this work, we introduce a tensor network formalism capable of capturing notions of topological order for quantum systems with fermionic components. At the heart of the formalism are axioms of fermionic matrix-product operator injectivity, stable under concatenation. Building upon that, we formulate a Grassmann number tensor network ansatz for the ground state of fermionic twisted quantum double models. A specific focus is put on the paradigmatic example of the fermionic toric code. This work shows that the program of describing topologically ordered systems using tensor networks carries over to fermionic models.
