NASA Astrophysics Data System (ADS)
Jamróz, Dariusz; Niedoba, Tomasz; Surowiak, Agnieszka; Tumidajski, Tadeusz; Szostek, Roman; Gajer, Mirosław
2017-09-01
The application of methods based on multi-parameter data visualization, which transform a multidimensional space into a two-dimensional one, makes it possible to display multi-parameter data on a computer screen. Thanks to that, a qualitative analysis of this data can be conducted in the way most natural for a human being, i.e. by the sense of sight. An example of such a multi-parameter visualization method is multidimensional scaling. This method was used in this paper to present and analyze a set of seven-dimensional data obtained from the Janina Mining Plant and the Wieczorek Coal Mine. It was decided to examine whether this method of multi-parameter data visualization allows the sample space to be divided into areas of varying suitability for the fluidized gasification process. The "Technological applicability card for coals" was used for this purpose [Sobolewski et al., 2012; 2017], in which the key, important, and additional parameters affecting the gasification process are described.
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept, the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in a space of equivalent rather than original dimensions, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters have a linear scale on the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The generally unknown cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free, non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
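A minimal sketch of the transformation idea described above, under the assumption of a synthetic toy catalogue: each parameter is mapped through a kernel-smoothed estimate of its own cumulative distribution, after which inter-event distances are plain Euclidean distances in the unit hypercube. The parameter choices and bandwidth rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# toy catalogue: magnitude, inter-event time, epicentral x, y (different metrics)
catalogue = np.column_stack([
    rng.exponential(0.5, 500) + 2.0,      # magnitudes
    rng.lognormal(2.0, 1.0, 500),         # inter-event times [h]
    rng.uniform(0.0, 50.0, 500),          # x [km]
    rng.uniform(0.0, 30.0, 500),          # y [km]
])

def kernel_cdf(sample, h=None):
    """Return a smoothed CDF estimator F(x) = mean(Phi((x - x_i)/h))."""
    sample = np.asarray(sample, float)
    if h is None:                          # Silverman-type bandwidth (assumed)
        h = 1.06 * sample.std() * sample.size ** (-1 / 5)
    return lambda x: norm.cdf((np.atleast_1d(x)[:, None] - sample) / h).mean(axis=1)

# transform every parameter to its equivalent dimension in [0, 1]
cdfs = [kernel_cdf(col) for col in catalogue.T]
ed = np.column_stack([f(col) for f, col in zip(cdfs, catalogue.T)])

# Euclidean distance between two events in ED space
d = np.linalg.norm(ed[0] - ed[1])
print("ED coordinates of event 0:", np.round(ed[0], 3), " distance to event 1:", round(d, 3))
```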
Duque, Ricardo E
2012-04-01
Flow cytometric analysis of cell suspensions involves the sequential 'registration' of intrinsic and extrinsic parameters of thousands of cells in list mode files. Thus, it is almost irresistible to describe phenomena in numerical terms or by 'ratios' that have the appearance of 'accuracy' due to the presence of numbers obtained from thousands of cells. The concepts involved in the detection and characterization of B cell lymphoproliferative processes are revisited in this paper by identifying parameters that, when analyzed appropriately, are both necessary and sufficient. The neoplastic process (cluster) can be visualized easily because the parameters that distinguish it form a cluster in multidimensional space that is unique and distinguishable from neighboring clusters that are not of diagnostic interest but serve to provide a background. For B cell neoplasia it is operationally necessary to identify the multidimensional space occupied by a cluster whose kappa:lambda ratio is 100:0 or 0:100. Thus, the concept of kappa:lambda ratio is without meaning and would not detect B cell neoplasia in an unacceptably high number of cases.
The Extraction of One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, Robert A.; Gaffney, Richard L., Jr.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
The Art of Extracting One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Gaffney, R. L.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
NASA Astrophysics Data System (ADS)
Jamróz, Dariusz; Niedoba, Tomasz; Surowiak, Agnieszka; Tumidajski, Tadeusz
2016-09-01
Methods serving to visualise multidimensional data through the transformation of multidimensional space into two-dimensional space make it possible to present multidimensional data on the computer screen. Thanks to this, qualitative analysis of this data can be performed in the way most natural for humans, through the sense of sight. An example of such a method of multidimensional data visualisation is PCA (principal component analysis). This method was used in this work to present and analyse a set of seven-dimensional data (seven selected properties) describing coal samples obtained from the Janina and Wieczorek coal mines. Coal from these mines was previously subjected to separation by means of a laboratory ring jig consisting of ten rings. In this way, 5 layers of both types of coal (with 2 rings each) were obtained. It was decided to check whether this method of multidimensional data visualisation makes it possible to divide the space of the samples separated in this way into areas with different suitability for the fluidised gasification process. To that end, the card of technological suitability of coal was used (Sobolewski et al., 2012; 2013), in which the key, relevant and additional parameters affecting the gasification process were described. As a result of the analyses, it was concluded that effective determination of coal samples' suitability for the on-surface gasification process in a fluidised reactor is possible. The PCA method enables the visualisation of the optimal subspace containing the set of requirements concerning the properties of coals intended for this process.
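An illustrative sketch of the projection step described above, assuming synthetic stand-ins for the seven coal properties and the suitability labels: the data are standardised and projected onto the first two principal components, which can then be scattered and coloured by class.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 7))              # 50 samples x 7 coal properties (synthetic)
labels = rng.integers(0, 2, 50)           # hypothetical "suitable"/"unsuitable" flags

X_std = StandardScaler().fit_transform(X) # standardise: properties have different units
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
# 'scores' (n_samples x 2) can now be plotted, coloured by the suitability label,
# to check whether the two classes separate in the projected plane.
```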
Experimental Study on the Perception Characteristics of Haptic Texture by Multidimensional Scaling.
Wu, Juan; Li, Na; Liu, Wei; Song, Guangming; Zhang, Jun
2015-01-01
Recent works regarding real texture perception demonstrate that physical factors such as stiffness and spatial period play a fundamental role in texture perception. This research used a multidimensional scaling (MDS) analysis to further characterize and quantify the effects of the simulation parameters on haptic texture rendering and perception. In a pilot experiment, 12 haptic texture samples were generated by using a 3-degrees-of-freedom (3-DOF) force-feedback device with varying spatial period, height, and stiffness coefficient parameter values. The subjects' perceptions of the virtual textures indicate that roughness, denseness, flatness and hardness are distinguishing characteristics of texture. In the main experiment, 19 participants rated the dissimilarities of the textures and estimated the magnitudes of their characteristics. The MDS method was used to recover the underlying perceptual space and reveal its significance from the recorded data. The physical parameters and their combinations have significant effects on the perceptual characteristics. A regression model was used to quantitatively analyze the parameters and their effects on the perceptual characteristics. This paper illustrates that haptic texture perception based on force feedback can be modeled in two- or three-dimensional space and provides suggestions for improving perception-based haptic texture rendering.
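A minimal sketch of the MDS step, assuming a precomputed matrix of averaged pairwise dissimilarity ratings (random numbers here instead of the study's data): the matrix is embedded into a 2-D perceptual configuration.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
n = 12                                     # e.g. 12 haptic texture samples
D = rng.uniform(0.1, 1.0, size=(n, n))
D = (D + D.T) / 2.0                        # symmetrise the dissimilarities
np.fill_diagonal(D, 0.0)                   # zero self-dissimilarity

# metric MDS on the precomputed dissimilarities (a simplification of the study)
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)              # 2-D perceptual space
print("stress:", round(mds.stress_, 4))
print(coords[:3])
```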
Planning Robot-Control Parameters With Qualitative Reasoning
NASA Technical Reports Server (NTRS)
Peters, Stephen F.
1993-01-01
Qualitative-reasoning planning algorithm helps to determine quantitative parameters controlling motion of robot. Algorithm regarded as performing search in multidimensional space of control parameters from starting point to goal region in which desired result of robotic manipulation achieved. Makes use of directed graph representing qualitative physical equations describing task, and interacts, at each sampling period, with history of quantitative control parameters and sensory data, to narrow search for reliable values of quantitative control parameters.
Naden, Levi N; Shirts, Michael R
2016-04-12
We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost to estimate thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. The existence of regions of poor configuration space overlap is detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water. We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
de la Vega de León, Antonio; Bajorath, Jürgen
2016-09-01
The concept of chemical space is of fundamental relevance for medicinal chemistry and chemical informatics. Multidimensional chemical space representations are coordinate-based. Chemical space networks (CSNs) have been introduced as a coordinate-free representation. A computational approach is presented for the transformation of multidimensional chemical space into CSNs. The design of transformation CSNs (TRANS-CSNs) is based upon a similarity function that directly reflects distance relationships in original multidimensional space. TRANS-CSNs provide an immediate visualization of coordinate-based chemical space and do not require the use of dimensionality reduction techniques. At low network density, TRANS-CSNs are readily interpretable and make it possible to evaluate structure-activity relationship information originating from multidimensional chemical space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten
2016-06-08
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
NASA Astrophysics Data System (ADS)
Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang
2016-06-01
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
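A self-contained sketch of the elementary-effects screening idea mentioned in both records above, applied to a cheap stand-in function rather than a real metamodel. The parameter names and the test function are illustrative assumptions; the trajectory construction follows the usual Morris scheme in simplified form.

```python
import numpy as np

def model(x):
    # stand-in for the metamodel: x = (pulse_energy, focus_position, feed_rate)
    return x[0] ** 2 + 0.5 * x[0] * x[1] + 0.1 * x[2]

def elementary_effects(f, k=3, r=20, levels=4, seed=0):
    """Return mean absolute EE (mu*) and EE standard deviation per parameter."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))          # standard Morris step size
    grid = np.linspace(0.0, 1.0 - delta, levels // 2)
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.choice(grid, size=k)               # random base point on the grid
        for i in rng.permutation(k):               # perturb one factor at a time
            x_new = x.copy()
            x_new[i] += delta
            ee[t, i] = (f(x_new) - f(x)) / delta
            x = x_new
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

mu_star, sigma = elementary_effects(model)
for name, m, s in zip(["pulse_energy", "focus_position", "feed_rate"], mu_star, sigma):
    print(f"{name:15s}  mu* = {m:6.3f}   sigma = {s:6.3f}")
```

Large mu* flags an influential parameter; large sigma relative to mu* hints at nonlinearity or interactions, which is the screening role the abstracts assign to the elementary effect before the Sobol analysis.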
Numeric invariants from multidimensional persistence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skryzalin, Jacek; Carlsson, Gunnar
2017-05-19
In this paper, we analyze the space of multidimensional persistence modules from the perspective of algebraic geometry. We first build a moduli space of a certain subclass of easily analyzed multidimensional persistence modules, which we construct specifically to capture much of the information which can be gained by using multidimensional persistence over one-dimensional persistence. We argue that the global sections of this space provide interesting numeric invariants when evaluated against our subclass of multidimensional persistence modules. Lastly, we extend these global sections to the space of all multidimensional persistence modules and discuss how the resulting numeric invariants might be used to study data.
Asymptotical AdS space from nonlinear gravitational models with stabilized extra dimensions
NASA Astrophysics Data System (ADS)
Günther, U.; Moniz, P.; Zhuk, A.
2002-08-01
We consider nonlinear gravitational models with a multidimensional warped product geometry. Particular attention is paid to models with quadratic scalar curvature terms. It is shown that for certain parameter ranges, the extra dimensions are stabilized if the internal spaces have a negative constant curvature. In this case, the four-dimensional effective cosmological constant as well as the bulk cosmological constant become negative. As a consequence, the homogeneous and isotropic external space is asymptotically AdS4. The connection between the D-dimensional and the four-dimensional fundamental mass scales sets a restriction on the parameters of the considered nonlinear models.
Meyers, Charles E.; Davidson, George S.; Johnson, David K.; Hendrickson, Bruce A.; Wylie, Brian N.
1999-01-01
A method of data mining represents related items in a multidimensional space. Distance between items in the multidimensional space corresponds to the extent of relationship between the items. The user can select portions of the space to perceive. The user also can interact with and control the communication of the space, focusing attention on aspects of the space of most interest. The multidimensional spatial representation allows more ready comprehension of the structure of the relationships among the items.
Dynamic State Estimation of Terrestrial and Solar Plasmas
NASA Astrophysics Data System (ADS)
Kamalabadi, Farzad
A pervasive problem in virtually all branches of space science is the estimation of multi-dimensional state parameters of a dynamical system from a collection of indirect, often incomplete, and imprecise measurements. Subsequent scientific inference is predicated on rigorous analysis, interpretation, and understanding of physical observations and on the reliability of the associated quantitative statistical bounds and performance characteristics of the algorithms used. In this work, we focus on these dynamic state estimation problems and illustrate their importance in the context of two timely activities in space remote sensing. First, we discuss the estimation of multi-dimensional ionospheric state parameters from UV spectral imaging measurements anticipated to be acquired by the recently selected NASA Heliophysics mission, Ionospheric Connection Explorer (ICON). Next, we illustrate that similar state-space formulations provide the means for the estimation of 3D, time-dependent densities and temperatures in the solar corona from a series of white-light and EUV measurements. We demonstrate that, while a general framework for the stochastic formulation of the state estimation problem is suited for systematic inference of the parameters of a hidden Markov process, several challenges must be addressed in the assimilation of an increasing volume and diversity of space observations. These challenges are: (1) the computational tractability when faced with voluminous and multimodal data, (2) the inherent limitations of the underlying models which assume, often incorrectly, linear dynamics and Gaussian noise, and (3) the unavailability or inaccuracy of transition probabilities and noise statistics. We argue that pursuing answers to these questions necessitates cross-disciplinary research that enables progress toward systematically reconciling observational and theoretical understanding of the space environment.
NASA Astrophysics Data System (ADS)
Saleem, M.; Resmi, L.; Misra, Kuntal; Pai, Archana; Arun, K. G.
2018-03-01
Short duration Gamma Ray Bursts (SGRBs) and their afterglows are among the most promising electromagnetic (EM) counterparts of Neutron Star (NS) mergers. The afterglow emission is broad-band, visible across the entire electromagnetic window from γ-ray to radio frequencies. The flux evolution in these frequencies is sensitive to the multidimensional space of afterglow physical parameters. Observations of gravitational waves (GW) from BNS mergers in spatial and temporal coincidence with SGRBs and associated afterglows can provide valuable constraints on afterglow physics. We run simulations of GW-detected BNS events and, assuming that all of them are associated with a GRB jet which also produces an afterglow, investigate how detections or non-detections in X-ray, optical and radio frequencies are influenced by the parameter space. We narrow down the regions of afterglow parameter space for a uniform top-hat jet model which would result in different detection scenarios. We list inferences that can be drawn on the physics of GRB afterglows from multimessenger astronomy with coincident GW-EM observations.
Multidimensional flamelet-generated manifolds for partially premixed combustion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Phuc-Danh; Vervisch, Luc; Subramanian, Vallinayagam
2010-01-15
Flamelet-generated manifolds have been restricted so far to premixed or diffusion flame archetypes, even though the resulting tables have been applied to nonpremixed and partially premixed flame simulations. By using a projection of the full set of mass conservation species balance equations into a restricted subset of the composition space, unsteady multidimensional flamelet governing equations are derived from first principles, under given hypotheses. During the projection, as in usual one-dimensional flamelets, the tangential strain rate of scalar isosurfaces is expressed in the form of the scalar dissipation rates of the control parameters of the multidimensional flamelet-generated manifold (MFM), which is tested in its five-dimensional form for partially premixed combustion, with two composition space directions and three scalar dissipation rates. It is shown that strain-rate-induced effects can hardly be fully neglected in chemistry tabulation of partially premixed combustion, because of fluxes across iso-equivalence-ratio and iso-progress-of-reaction surfaces. This is illustrated by comparing the 5D flamelet-generated manifold with one-dimensional premixed flame and unsteady strained diffusion flame composition space trajectories. The formal links between the asymptotic behavior of MFM and stratified flame, weakly varying partially premixed front, triple-flame, premixed and nonpremixed edge flames are also evidenced.
An Integrated Framework for Parameter-based Optimization of Scientific Workflows.
Kumar, Vijay S; Sadayappan, P; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel
2009-01-01
Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters such as grouping of workflow components and their mapping to machines do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework.
Nonaxial hexadecapole deformation effects on the fission barrier
NASA Astrophysics Data System (ADS)
Kardan, A.; Nejati, S.
2016-06-01
The fission barrier of the heavy nucleus 250Cf is analyzed in a multi-dimensional deformation space. This space includes two quadrupole (ɛ2, γ) and three hexadecapole (ɛ40, ɛ42, ɛ44) deformation parameters. The analysis is performed within an unpaired macroscopic-microscopic approach. Special attention is given to the effects of axial and non-axial hexadecapole deformation shapes. It is found that the inclusion of the nonaxial hexadecapole shapes does not change the fission barrier heights, so it should be sufficient to minimize the energy in only one degree of freedom in the hexadecapole space, ɛ4. The role of the hexadecapole deformation parameters in the Lublin-Strasbourg drop (LSD) macroscopic energy and the Strutinsky shell energy is also discussed.
Method of multi-dimensional moment analysis for the characterization of signal peaks
Pfeifer, Kent B; Yelton, William G; Kerr, Dayle R; Bouchier, Francis A
2012-10-23
A method of multi-dimensional moment analysis for the characterization of signal peaks can be used to optimize the operation of an analytical system. With a two-dimensional Peclet analysis, the quality and signal fidelity of peaks in a two-dimensional experimental space can be analyzed and scored. This method is particularly useful in determining optimum operational parameters for an analytical system which requires the automated analysis of large numbers of analyte data peaks. For example, the method can be used to optimize analytical systems including an ion mobility spectrometer that uses a temperature stepped desorption technique for the detection of explosive mixtures.
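An illustrative sketch of moment-based peak scoring in a two-dimensional measurement space (e.g. drift time versus a desorption-temperature step). The Gaussian test peak and the Peclet-like figure of merit (squared centroid over central variance) are assumptions used for illustration, not the patented procedure itself.

```python
import numpy as np

# synthetic 2-D peak on a grid
t = np.linspace(0.0, 10.0, 200)              # axis 1 (e.g. drift time)
T = np.linspace(0.0, 5.0, 100)               # axis 2 (e.g. temperature step)
tt, TT = np.meshgrid(t, T, indexing="ij")
signal = np.exp(-0.5 * ((tt - 6.0) / 0.4) ** 2 - 0.5 * ((TT - 2.5) / 0.3) ** 2)

def peak_moments(z, x):
    """Zeroth, first and central second moment of a profile z(x)."""
    m0 = np.trapz(z, x)
    m1 = np.trapz(z * x, x) / m0
    m2 = np.trapz(z * (x - m1) ** 2, x) / m0
    return m0, m1, m2

profile_t = signal.sum(axis=1)               # collapse the peak onto each axis
profile_T = signal.sum(axis=0)
for name, prof, ax in [("axis-1", profile_t, t), ("axis-2", profile_T, T)]:
    m0, m1, m2 = peak_moments(prof, ax)
    print(f"{name}: centroid = {m1:5.2f}, variance = {m2:5.3f}, "
          f"Peclet-like score = {m1**2 / m2:7.1f}")
```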
CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila
2015-03-10
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
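A minimal particle swarm optimiser, sketched to make the calibration strategy above concrete; the objective here is a toy quadratic misfit standing in for the comparison between the semi-analytic model and the observational constraints.

```python
import numpy as np

def pso(neg_log_like, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest, pbest_f = x.copy(), np.array([neg_log_like(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([neg_log_like(p) for p in x])
        improved = f < pbest_f                             # update personal bests
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()             # update global best
    return gbest, pbest_f.min()

# toy "model vs observations" misfit with two free parameters
target = np.array([0.3, -1.2])
best, best_f = pso(lambda p: np.sum((p - target) ** 2), bounds=[(-2, 2), (-3, 3)])
print("best-fitting parameters:", np.round(best, 3), " misfit:", round(best_f, 5))
```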
Cope, Davis; Blakeslee, Barbara; McCourt, Mark E
2013-05-01
The difference-of-Gaussians (DOG) filter is a widely used model for the receptive field of neurons in the retina and lateral geniculate nucleus (LGN) and is a potential model in general for responses modulated by an excitatory center with an inhibitory surrounding region. A DOG filter is defined by three standard parameters: the center and surround sigmas (which define the variance of the radially symmetric Gaussians) and the balance (which defines the linear combination of the two Gaussians). These parameters are not directly observable and are typically determined by nonlinear parameter estimation methods applied to the frequency response function. DOG filters show both low-pass (optimal response at zero frequency) and bandpass (optimal response at a nonzero frequency) behavior. This paper reformulates the DOG filter in terms of a directly observable parameter, the zero-crossing radius, and two new (but not directly observable) parameters. In the two-dimensional parameter space, the exact region corresponding to bandpass behavior is determined. A detailed description of the frequency response characteristics of the DOG filter is obtained. It is also found that the directly observable optimal frequency and optimal gain (the ratio of the response at optimal frequency to the response at zero frequency) provide an alternate coordinate system for the bandpass region. Altogether, the DOG filter and its three standard implicit parameters can be determined by three directly observable values. The two-dimensional bandpass region is a potential tool for the analysis of populations of DOG filters (for example, populations of neurons in the retina or LGN), because the clustering of points in this parameter space may indicate an underlying organizational principle. This paper concentrates on circular Gaussians, but the results generalize to multidimensional radially symmetric Gaussians and are given as an appendix.
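A numerical sketch of the quantities discussed above, under one assumed parameterisation of the DOG filter (unit-volume Gaussians with center amplitude 1 and surround amplitude equal to the balance): the zero-crossing radius of the spatial kernel, the optimal frequency, and the optimal gain relative to the DC response.

```python
import numpy as np

sigma_c, sigma_s, balance = 0.5, 1.5, 0.8      # assumed center/surround sigmas and balance

def dog_freq_response(f):
    """Radial frequency response of a DOG built from unit-volume 2-D Gaussians."""
    return (np.exp(-2 * (np.pi * sigma_c * f) ** 2)
            - balance * np.exp(-2 * (np.pi * sigma_s * f) ** 2))

f = np.linspace(0.0, 3.0, 3001)
R = dog_freq_response(f)
f_opt = f[np.argmax(R)]                        # optimal (peak-response) frequency
opt_gain = R.max() / R[0]                      # ratio of peak response to DC response

# zero-crossing radius of the spatial kernel: the directly observable parameter
r0 = np.sqrt(2 * np.log(sigma_s**2 / (balance * sigma_c**2))
             / (1 / sigma_c**2 - 1 / sigma_s**2))

print(f"zero-crossing radius = {r0:.3f}, optimal frequency = {f_opt:.3f}, "
      f"optimal gain = {opt_gain:.2f} -> {'bandpass' if f_opt > 0 else 'low-pass'}")
```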
Pohlheim, Hartmut
2006-01-01
Multidimensional scaling as a technique for the presentation of high-dimensional data with standard visualization techniques is presented. The technique used is often known as Sammon mapping. We explain the mathematical foundations of multidimensional scaling and its robust calculation. We also demonstrate the use of this technique in the area of evolutionary algorithms. First, we present the visualization of the path through the search space of the best individuals during an optimization run. We then apply multidimensional scaling to the comparison of multiple runs regarding the variables of individuals and multi-criteria objective values (path through the solution space).
NASA Astrophysics Data System (ADS)
Günther, U.; Moniz, P.; Zhuk, A.
2003-08-01
We consider multidimensional gravitational models with a nonlinear scalar curvature term and form fields in the action functional. In our scenario it is assumed that the higher dimensional spacetime undergoes a spontaneous compactification to a warped product manifold. Particular attention is paid to models with quadratic scalar curvature terms and a Freund-Rubin-like ansatz for solitonic form fields. It is shown that for certain parameter ranges the extra dimensions are stabilized. In particular, stabilization is possible for any sign of the internal space curvature, the bulk cosmological constant, and of the effective four-dimensional cosmological constant. Moreover, the effective cosmological constant can satisfy the observable limit on the dark energy density. Finally, we discuss the restrictions on the parameters of the considered nonlinear models and how they follow from the connection between the D-dimensional and the four-dimensional fundamental mass scales.
Patent data mining method and apparatus
Boyack, Kevin W.; Grafe, V. Gerald; Johnson, David K.; Wylie, Brian N.
2002-01-01
A method of data mining represents related patents in a multidimensional space. Distance between patents in the multidimensional space corresponds to the extent of relationship between the patents. The relationship between pairings of patents can be expressed based on weighted combinations of several predicates. The user can select portions of the space to perceive. The user also can interact with and control the communication of the space, focusing attention on aspects of the space of most interest. The multidimensional spatial representation allows more ready comprehension of the structure of the relationships among the patents.
Multidimensional data analysis in immunophenotyping.
Loken, M R
2001-05-01
The complexity of cell populations requires careful selection of reagents to detect cells of interest and distinguish them from other types. Additional reagents are frequently used to provide independent criteria for cell identification. Two or three monoclonal antibodies in combination with forward and right-angle light scatter generate a data set that is difficult to visualize because the data must be represented in four- or five-dimensional space. The separation between cell populations provided by the multiple characteristics is best visualized by multidimensional analysis using all parameters simultaneously to identify populations within the resulting hyperspace. Groups of cells are distinguished based on a combination of characteristics not apparent in any usual two-dimensional representation of the data.
Chan, Emory M; Xu, Chenxu; Mao, Alvin W; Han, Gang; Owen, Jonathan S; Cohen, Bruce E; Milliron, Delia J
2010-05-12
While colloidal nanocrystals hold tremendous potential for both enhancing fundamental understanding of materials scaling and enabling advanced technologies, progress in both realms can be inhibited by the limited reproducibility of traditional synthetic methods and by the difficulty of optimizing syntheses over a large number of synthetic parameters. Here, we describe an automated platform for the reproducible synthesis of colloidal nanocrystals and for the high-throughput optimization of physical properties relevant to emerging applications of nanomaterials. This robotic platform enables precise control over reaction conditions while performing workflows analogous to those of traditional flask syntheses. We demonstrate control over the size, size distribution, kinetics, and concentration of reactions by synthesizing CdSe nanocrystals with 0.2% coefficient of variation in the mean diameters across an array of batch reactors and over multiple runs. Leveraging this precise control along with high-throughput optical and diffraction characterization, we effectively map multidimensional parameter space to tune the size and polydispersity of CdSe nanocrystals, to maximize the photoluminescence efficiency of CdTe nanocrystals, and to control the crystal phase and maximize the upconverted luminescence of lanthanide-doped NaYF(4) nanocrystals. On the basis of these demonstrative examples, we conclude that this automated synthesis approach will be of great utility for the development of diverse colloidal nanomaterials for electronic assemblies, luminescent biological labels, electroluminescent devices, and other emerging applications.
The space transformation in the simulation of multidimensional random fields
Christakos, G.
1987-01-01
Space transformations are proposed as a mathematically meaningful and practically comprehensive approach to simulate multidimensional random fields. Within this context the turning bands method of simulation is reconsidered and improved in both the space and frequency domains.
The Definition of Difficulty and Discrimination for Multidimensional Item Response Theory Models.
ERIC Educational Resources Information Center
Reckase, Mark D.; McKinley, Robert L.
A study was undertaken to develop guidelines for the interpretation of the parameters of three multidimensional item response theory models and to determine the relationship between the parameters and traditional concepts of item difficulty and discrimination. The three models considered were multidimensional extensions of the one-, two-, and…
Item Vector Plots for the Multidimensional Three-Parameter Logistic Model
ERIC Educational Resources Information Center
Bryant, Damon; Davis, Larry
2011-01-01
This brief technical note describes how to construct item vector plots for dichotomously scored items fitting the multidimensional three-parameter logistic model (M3PLM). As multidimensional item response theory (MIRT) shows promise of being a very useful framework in the test development life cycle, graphical tools that facilitate understanding…
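A small sketch of the quantities that underlie such item vector plots for the compensatory multidimensional 3PL model: multidimensional discrimination (MDISC), multidimensional difficulty (MDIFF), and the direction cosines of the item vector. The item parameters below are made up for illustration.

```python
import numpy as np

def m3plm_prob(theta, a, d, c):
    """P(correct | theta) = c + (1 - c) / (1 + exp(-(a . theta + d)))."""
    return c + (1.0 - c) / (1.0 + np.exp(-(np.dot(a, theta) + d)))

def item_vector(a, d):
    mdisc = np.linalg.norm(a)          # multidimensional discrimination
    mdiff = -d / mdisc                 # signed distance from origin to steepest point
    direction = a / mdisc              # direction cosines of the item vector
    return mdisc, mdiff, direction

a, d, c = np.array([1.2, 0.4]), -0.5, 0.15   # hypothetical 2-D item parameters
mdisc, mdiff, direction = item_vector(a, d)
print(f"MDISC = {mdisc:.3f}, MDIFF = {mdiff:.3f}, direction = {np.round(direction, 3)}")
print("P(correct) at theta = (0, 0):", round(m3plm_prob(np.zeros(2), a, d, c), 3))
# In a vector plot, the item is drawn as an arrow of length MDISC, starting at
# MDIFF * direction and pointing along 'direction' in the latent ability plane.
```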
Extra Solar Planet Science With a Non Redundant Mask
NASA Astrophysics Data System (ADS)
Minto, Stefenie Nicolet; Sivaramakrishnan, Anand; Greenbaum, Alexandra; St. Laurent, Kathryn; Thatte, Deeparshi
2017-01-01
To detect faint planetary companions near a much brighter star at the resolution limit of the James Webb Space Telescope (JWST), the Near-Infrared Imager and Slitless Spectrograph (NIRISS) will use a non-redundant aperture mask (NRM) for high contrast imaging. I simulated NIRISS data of stars with and without planets and ran these through the code that measures interferometric image properties, to determine how sensitive planetary detection is to our knowledge of instrumental parameters, starting with the pixel scale. I measured the position angle, distance, and contrast ratio of the planet (with respect to the star) to characterize the binary pair. To organize these data I am creating programs that will automatically and systematically explore multi-dimensional instrument parameter spaces and binary characteristics. In the future my code will also be applied to explore any other parameters we can simulate.
Multidimensional Optimization of Signal Space Distance Parameters in WLAN Positioning
Brković, Milenko; Simić, Mirjana
2014-01-01
Accurate indoor localization of mobile users is one of the challenging problems of the last decade. Besides delivering high speed Internet, Wireless Local Area Network (WLAN) can be used as an effective indoor positioning system, being competitive both in terms of accuracy and cost. Among the localization algorithms, nearest neighbor fingerprinting algorithms based on Received Signal Strength (RSS) parameter have been extensively studied as an inexpensive solution for delivering indoor Location Based Services (LBS). In this paper, we propose the optimization of the signal space distance parameters in order to improve precision of WLAN indoor positioning, based on nearest neighbor fingerprinting algorithms. Experiments in a real WLAN environment indicate that proposed optimization leads to substantial improvements of the localization accuracy. Our approach is conceptually simple, is easy to implement, and does not require any additional hardware. PMID:24757443
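An illustrative sketch of RSS fingerprinting with a tunable signal-space distance (here the Minkowski exponent p), in the spirit of the distance-parameter optimisation discussed above. The radio map, reference positions, and on-line reading are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
radio_map = rng.uniform(-90, -40, size=(100, 6))   # 100 reference points x 6 APs [dBm]
positions = rng.uniform(0, 20, size=(100, 2))      # their known (x, y) coordinates [m]
online_rss = radio_map[17] + rng.normal(0, 2, 6)   # noisy reading near reference point 17

def knn_position(rss, rmap, pos, k=3, p=2.0):
    """Weighted k-nearest-neighbour estimate with a Minkowski signal-space distance."""
    dist = (np.abs(rmap - rss) ** p).sum(axis=1) ** (1.0 / p)
    nearest = np.argsort(dist)[:k]
    w = 1.0 / (dist[nearest] + 1e-9)                # closer fingerprints weigh more
    return (pos[nearest] * w[:, None]).sum(axis=0) / w.sum()

for p in (1.0, 2.0, 3.0):                           # the exponent is one tunable parameter
    est = knn_position(online_rss, radio_map, positions, k=3, p=p)
    err = np.linalg.norm(est - positions[17])
    print(f"p = {p:.0f}: estimated position {np.round(est, 2)}, error = {err:.2f} m")
```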
Bimler, David; Kirkland, John; Pichler, Shaun
2004-02-01
The structure of color perception can be examined by collecting judgments about color dissimilarities. In the procedure used here, stimuli are presented three at a time on a computer monitor and the spontaneous grouping of most-similar stimuli into gestalts provides the dissimilarity comparisons. Analysis with multidimensional scaling allows such judgments to be pooled from a number of observers without obscuring the variations among them. The anomalous perceptions of color-deficient observers produce comparisons that are represented well by a geometric model of compressed individual color spaces, with different forms of deficiency distinguished by different directions of compression. The geometrical model is also capable of accommodating the normal spectrum of variation, so that there is greater variation in compression parameters between tests on normal subjects than in those between repeated tests on individual subjects. The method is sufficiently sensitive and the variations sufficiently large that they are not obscured by the use of a range of monitors, even under somewhat loosely controlled conditions.
Parameter learning for performance adaptation
NASA Technical Reports Server (NTRS)
Peek, Mark D.; Antsaklis, Panos J.
1990-01-01
A parameter learning method is introduced and used to broaden the region of operability of the adaptive control system of a flexible space antenna. The learning system guides the selection of control parameters in a process leading to optimal system performance. A grid search procedure is used to estimate an initial set of parameter values. The optimization search procedure uses a variation of the Hooke and Jeeves multidimensional search algorithm. The method is applicable to any system where performance depends on a number of adjustable parameters. A mathematical model is not necessary, as the learning system can be used whenever the performance can be measured via simulation or experiment. The results of two experiments, the transient regulation and the command following experiment, are presented.
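A compact Hooke and Jeeves pattern search, sketched because the abstract above relies on a variation of this algorithm; the objective is a stand-in for a measured performance index, not the antenna controller itself, and the acceptance rule is a simplified variant.

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=500):
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        # exploratory moves along each coordinate direction
        x_new, f_new = x.copy(), fx
        for i in range(x.size):
            for direction in (+1.0, -1.0):
                trial = x_new.copy()
                trial[i] += direction * step
                ft = f(trial)
                if ft < f_new:
                    x_new, f_new = trial, ft
                    break
        if f_new < fx:                       # pattern move through the improved point
            pattern = x_new + (x_new - x)
            fp = f(pattern)
            x, fx = (pattern, fp) if fp < f_new else (x_new, f_new)
        else:
            step *= shrink                   # no improvement: refine the mesh
            if step < tol:
                break
    return x, fx

# toy performance measure with two adjustable control parameters (e.g. gains)
perf = lambda p: (p[0] - 1.2) ** 2 + 3 * (p[1] + 0.4) ** 2
best, val = hooke_jeeves(perf, x0=[0.0, 0.0])
print("best control parameters:", np.round(best, 4), " performance:", round(val, 6))
```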
Numeric invariants from multidimensional persistence
Skryzalin, Jacek; Carlsson, Gunnar
2017-05-19
Topological data analysis is the study of data using techniques from algebraic topology. Often, one begins with a finite set of points representing data and a “filter” function which assigns a real number to each datum. Using both the data and the filter function, one can construct a filtered complex for further analysis. For example, applying the homology functor to the filtered complex produces an algebraic object known as a “one-dimensional persistence module”, which can often be interpreted as a finite set of intervals representing various geometric features in the data. If one runs the above process incorporating multiple filter functions simultaneously, one instead obtains a multidimensional persistence module. Unfortunately, these are much more difficult to interpret. In this article, we analyze the space of multidimensional persistence modules from the perspective of algebraic geometry. First we build a moduli space of a certain subclass of easily analyzed multidimensional persistence modules, which we construct specifically to capture much of the information which can be gained by using multidimensional persistence instead of one-dimensional persistence. Furthermore, we argue that the global sections of this space provide interesting numeric invariants when evaluated against our subclass of multidimensional persistence modules. Finally, we extend these global sections to the space of all multidimensional persistence modules and discuss how the resulting numeric invariants might be used to study data. This paper extends the results of Adcock et al. (Homol Homotopy Appl 18(1), 381–402, 2016) by constructing numeric invariants from the computation of a multidimensional persistence module as given by Carlsson et al. (J Comput Geom 1(1), 72–100, 2010).
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
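A minimal sketch of the regression view described above: distances with positivity-restricted feature weights are fitted by non-negative least squares, and naive (unconstrained-theory) standard errors are attached purely for illustration; the design matrix and proximities are synthetic, and the paper's constrained standard errors would differ.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
n_pairs, n_features = 60, 4
X = rng.integers(0, 2, size=(n_pairs, n_features)).astype(float)  # feature-discrepancy design
true_w = np.array([0.8, 0.0, 1.5, 0.3])                           # non-negative feature weights
y = X @ true_w + rng.normal(0, 0.1, n_pairs)                      # observed proximities

w_hat, _ = nnls(X, y)                                             # positivity-restricted fit
resid = y - X @ w_hat
s2 = resid @ resid / (n_pairs - n_features)
se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))                # naive OLS-style standard errors

for i, (w, s) in enumerate(zip(w_hat, se)):
    print(f"feature {i}: weight = {w:5.3f}  (naive SE = {s:5.3f})")
```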
NASA Astrophysics Data System (ADS)
Günther, Uwe; Zhuk, Alexander; Bezerra, Valdir B.; Romero, Carlos
2005-08-01
We study multi-dimensional gravitational models with scalar curvature nonlinearities of types R^{-1} and R^4. It is assumed that the corresponding higher dimensional spacetime manifolds undergo a spontaneous compactification to manifolds with a warped product structure. Special attention has been paid to the stability of the extra-dimensional factor spaces. It is shown that for certain parameter regions the systems allow for a freezing stabilization of these spaces. In particular, we find for the R^{-1} model that configurations with stabilized extra dimensions do not provide a late-time acceleration (they are AdS), whereas the solution branch which allows for accelerated expansion (the dS branch) is incompatible with stabilized factor spaces. In the case of the R^4 model, we obtain that the stability region in parameter space depends on the total dimension D = dim(M) of the higher dimensional spacetime M. For D > 8 the stability region consists of a single (absolutely stable) sector which is shielded from a conformal singularity (and an antigravity sector beyond it) by a potential barrier of infinite height and width. This sector is smoothly connected with the stability region of a curvature-linear model. For D < 8 an additional (metastable) sector exists which is separated from the conformal singularity by a potential barrier of finite height and width so that systems in this sector are prone to collapse into the conformal singularity. This second sector is not smoothly connected with the first (absolutely stable) one. Several limiting cases and the possibility of inflation are discussed for the R^4 model.
NASA Astrophysics Data System (ADS)
Schulz, Wolfgang; Hermanns, Torsten; Al Khawli, Toufik
2017-07-01
Decision making for competitive production in high-wage countries is a daily challenge in which rational and irrational methods are used. The design of decision-making processes is an intriguing, discipline-spanning science. However, there are gaps in understanding the impact of the known mathematical and procedural methods on the usage of rational choice theory. Following Benjamin Franklin's rule for decision making, formulated in London in 1772 and called by him "Prudential Algebra" in the sense of weighing prudential reasons, one of the major ingredients of Meta-Modelling can be identified: finally arriving at one algebraic value labelling the results (criteria settings) of alternative decisions (parameter settings). This work describes advances in Meta-Modelling techniques applied to multi-dimensional and multi-criterial optimization by identifying the persistence level of the corresponding Morse-Smale complex. Implementations for laser cutting and laser drilling are presented, including the generation of fast and frugal Meta-Models with controlled error based on mathematical model reduction; Reduced Models are derived to avoid any unnecessary complexity. Both model reduction and analysis of the multi-dimensional parameter space are used to enable interactive communication between Discovery Finders and Invention Makers. Emulators and visualizations of a metamodel are introduced as components of Virtual Production Intelligence, making the methods of Scientific Design Thinking applicable and making the developer as well as the operator more skilled.
On the chaotic diffusion in multidimensional Hamiltonian systems
NASA Astrophysics Data System (ADS)
Cincotta, P. M.; Giordano, C. M.; Martí, J. G.; Beaugé, C.
2018-01-01
We present numerical evidence that diffusion in the herein studied multidimensional near-integrable Hamiltonian systems departs from a normal process, at least for realistic timescales. Therefore, the derivation of a diffusion coefficient from a linear fit on the variance evolution of the unperturbed integrals fails. We review some topics on diffusion in the Arnold Hamiltonian and yield numerical and theoretical arguments to show that in the examples we considered, a standard coefficient would not provide a good estimation of the speed of diffusion. However, numerical experiments concerning diffusion do provide reliable information about the stability of the motion within chaotic regions of the phase space. In this direction, we present an extension of previous results concerning the dynamical structure of the Laplace resonance in the Gliese-876 planetary system, considering variations of the orbital parameters according to the error introduced by the radial velocity determination. We found that a slight variation of the eccentricity of planet c would destabilize the inner region of the resonance, which, though chaotic, appears stable when adopting the best-fit values for the parameters.
Lee, Eugene K; Tran, David D; Keung, Wendy; Chan, Patrick; Wong, Gabriel; Chan, Camie W; Costa, Kevin D; Li, Ronald A; Khine, Michelle
2017-11-14
Accurately predicting cardioactive effects of new molecular entities for therapeutics remains a daunting challenge. Immense research effort has been focused toward creating new screening platforms that utilize human pluripotent stem cell (hPSC)-derived cardiomyocytes and three-dimensional engineered cardiac tissue constructs to better recapitulate human heart function and drug responses. As these new platforms become increasingly sophisticated and high throughput, the drug screens result in larger multidimensional datasets. Improved automated analysis methods must therefore be developed in parallel to fully comprehend the cellular response across a multidimensional parameter space. Here, we describe the use of machine learning to comprehensively analyze 17 functional parameters derived from force readouts of hPSC-derived ventricular cardiac tissue strips (hvCTS) electrically paced at a range of frequencies and exposed to a library of compounds. A generated metric is effective for then determining the cardioactivity of a given drug. Furthermore, we demonstrate a classification model that can automatically predict the mechanistic action of an unknown cardioactive drug.
Methods of Optimizing X-Ray Optical Prescriptions for Wide-Field Applications
NASA Technical Reports Server (NTRS)
Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.; Weisskopf, M. C.
2010-01-01
We are working on the development of a method for optimizing wide-field x-ray telescope mirror prescriptions, including polynomial coefficients, mirror shell relative displacements, and (assuming 4 focal plane detectors) detector placement and tilt, that does not require a search through the multi-dimensional parameter space. Under the assumption that the parameters are small enough that second order expansions are valid, we show that the performance at the detector surface can be expressed as a quadratic function of the parameters, with numerical coefficients derived from a ray trace through the underlying Wolter I optic. The best values for the parameters are found by solving the linear system of equations created by setting the derivatives of this function with respect to each parameter to zero. We describe the present status of this development effort.
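A sketch of the closing step described above: once performance is expressed as a quadratic in the design parameters, the optimum follows from a single linear solve. The gradient and Hessian below are arbitrary stand-ins for the coefficients that would come from ray tracing.

```python
import numpy as np

# f(p) ~ f0 + g.p + 0.5 * p^T H p   (second-order expansion in the small parameters)
g = np.array([0.4, -0.2, 0.1])                 # linear coefficients (stand-ins)
H = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.1],
              [0.0, 0.1, 1.0]])                # symmetric positive-definite quadratic part

p_opt = np.linalg.solve(H, -g)                 # set df/dp = g + H p = 0
print("optimal parameter offsets:", np.round(p_opt, 4))
print("predicted performance change:", round(g @ p_opt + 0.5 * p_opt @ H @ p_opt, 5))
```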
Precision constraints on the top-quark effective field theory at future lepton colliders
NASA Astrophysics Data System (ADS)
Durieux, G.
We examine the constraints that future lepton colliders would impose on the effective field theory describing modifications of top-quark interactions beyond the standard model, through measurements of the $e^+e^-\\to bW^+\\:\\bar bW^-$ process. Statistically optimal observables are exploited to constrain simultaneously and efficiently all relevant operators. Their constraining power is sufficient for quadratic effective-field-theory contributions to have negligible impact on limits which are therefore basis independent. This is contrasted with the measurements of cross sections and forward-backward asymmetries. An overall measure of constraints strength, the global determinant parameter, is used to determine which run parameters impose the strongest restriction on the multidimensional effective-field-theory parameter space.
Supervised and Unsupervised Learning of Multidimensional Acoustic Categories
ERIC Educational Resources Information Center
Goudbeek, Martijn; Swingley, Daniel; Smits, Roel
2009-01-01
Learning to recognize the contrasts of a language-specific phonemic repertoire can be viewed as forming categories in a multidimensional psychophysical space. Research on the learning of distributionally defined visual categories has shown that categories defined over 1 dimension are easy to learn and that learning multidimensional categories is…
Bayesian model comparison and parameter inference in systems biology using nested sampling.
Pullen, Nick; Morris, Richard J
2014-01-01
Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling's nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multi-dimensional integral to a 1D integration over likelihood space. This approach focuses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system's behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design.
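A toy nested-sampling loop, sketched to make the evidence computation above concrete. New live points are drawn by simple rejection from the prior, which is only workable for this low-dimensional example; the model is a 2-parameter Gaussian likelihood with a uniform prior on [-5, 5]^2, so the exact evidence is known for comparison.

```python
import numpy as np

rng = np.random.default_rng(5)
ndim, lo, hi = 2, -5.0, 5.0
def log_like(theta):                      # "data" favour theta near (1, -2)
    return -0.5 * np.sum((theta - np.array([1.0, -2.0])) ** 2 / 0.5 ** 2)

n_live, n_iter = 100, 700
live = rng.uniform(lo, hi, (n_live, ndim))
live_logL = np.array([log_like(p) for p in live])

logZ, X_prev = -np.inf, 1.0
for i in range(1, n_iter + 1):
    worst = live_logL.argmin()
    X_i = np.exp(-i / n_live)                             # expected prior-volume shrinkage
    logZ = np.logaddexp(logZ, live_logL[worst] + np.log(X_prev - X_i))
    X_prev = X_i
    while True:                                           # rejection sampling above L*
        trial = rng.uniform(lo, hi, ndim)
        if log_like(trial) > live_logL[worst]:
            break
    live[worst], live_logL[worst] = trial, log_like(trial)

# add the contribution of the remaining live points
logZ = np.logaddexp(logZ, np.log(np.mean(np.exp(live_logL))) + np.log(X_prev))
analytic = np.log(2 * np.pi * 0.5 ** 2) - 2 * np.log(hi - lo)   # exact evidence for this toy
print(f"nested-sampling logZ = {logZ:.3f}  (analytic: {analytic:.3f})")
```

Running the same loop for two competing models and differencing their logZ values gives the log Bayes factor used for the model comparison described in the abstract.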
NASA Astrophysics Data System (ADS)
Indik, Nathaniel; Fehrmann, Henning; Harke, Franz; Krishnan, Badri; Nielsen, Alex B.
2018-06-01
Efficient multidimensional template placement is crucial in computationally intensive matched-filtering searches for gravitational waves (GWs). Here, we implement the neighboring cell algorithm (NCA) to improve the detection volume of an existing compact binary coalescence (CBC) template bank. This algorithm has already been successfully applied for a binary millisecond pulsar search in data from the Fermi satellite. It repositions templates from overdense regions to underdense regions and reduces the number of templates that would have been required by a stochastic method to achieve the same detection volume. Our method is readily generalizable to other CBC parameter spaces. Here we apply this method to the aligned-single-spin neutron star-black hole binary coalescence inspiral-merger-ringdown gravitational wave parameter space. We show that the template nudging algorithm can attain the equivalent effectualness of the stochastic method with 12% fewer templates.
6D Visualization of Multidimensional Data by Means of Cognitive Technology
NASA Astrophysics Data System (ADS)
Vitkovskiy, V.; Gorohov, V.; Komarinskiy, S.
2010-12-01
On the basis of the cognitive graphics concept, we developed a software system for visualization and analysis. It allows the researcher's intuition to be trained and sharpened, raises interest and motivation for creative scientific cognition, and at the same time realizes a process of dialogue with the problem itself. The Space Hedgehog system is the next step in cognitive means of analyzing multidimensional data. The technique and technology of cognitive 6D visualization of multidimensional data were developed on the basis of research and development in cognitive visualization. The Space Hedgehog system allows direct dynamic visualization of 6D objects. It was developed using the experience gained from creating the Space Walker program and its applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark
Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
Automated Design Space Exploration with Aspen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spafford, Kyle L.; Vetter, Jeffrey S.
2015-01-01
Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.
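As a rough illustration of casting a design-space question as a nonlinear program, the sketch below minimizes an invented runtime model over bounded parameter ranges with a generic solver; it is not an Aspen-generated model.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative stand-in for a performance-model cost: runtime of a kernel as a
# function of tile size x[0] and node count x[1] (both treated as continuous here).
def runtime(x):
    tile, nodes = x
    compute = 1.0e4 / (tile * nodes)        # work shrinks with tiling and nodes
    comm = 0.05 * nodes + 2.0 / tile        # communication/overhead grows with nodes
    return compute + comm

bounds = [(4.0, 256.0), (1.0, 1024.0)]      # parameter ranges (the design space)
res = minimize(runtime, x0=[16.0, 32.0], bounds=bounds, method="L-BFGS-B")
print("best design point:", res.x, "predicted runtime:", res.fun)
```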
NASA Astrophysics Data System (ADS)
Khachaturov, R. V.
2016-09-01
It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example of a set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best project of the firm's activities. The solution of a particular problem of this type is presented.
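For comparison with the equivalence-set idea, the standard Pareto filter that the paper argues against can be sketched as follows; the project scores are invented.

```python
import numpy as np

def pareto_front(points):
    """Return the indices of non-dominated rows, assuming all objectives are maximized."""
    n = points.shape[0]
    keep = []
    for i in range(n):
        dominated = False
        for j in range(n):
            if j != i and np.all(points[j] >= points[i]) and np.any(points[j] > points[i]):
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return keep

# Invented example: candidate firm projects scored on (profit, market share, stability)
projects = np.array([
    [3.0, 0.20, 0.7],
    [2.5, 0.35, 0.6],
    [3.0, 0.10, 0.9],
    [1.0, 0.05, 0.4],   # dominated by the first row
])
print("Pareto-optimal projects:", pareto_front(projects))
```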
Davidson, George S.; Anderson, Thomas G.
2001-01-01
A display controller allows a user to control a base viewing location, a base viewing orientation, and a relative viewing orientation. The base viewing orientation and relative viewing orientation are combined to determine a desired viewing orientation. An aspect of a multidimensional space visible from the base viewing location along the desired viewing orientation is displayed to the user. The user can change the base viewing location, base viewing orientation, and relative viewing orientation by changing the location or other properties of input objects.
Huang, Yu; Guo, Feng; Li, Yongling; Liu, Yufeng
2015-01-01
Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm. PMID:25603158
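A plain (non-quantum) particle swarm sketch for parameter estimation by misfit minimization, standing in for the QPPSO variant; the exponential target model and synthetic data are assumptions made only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "measurements" from a toy model y = a*exp(-b*t); the task is to
# recover (a, b) by minimizing the squared misfit.
t = np.linspace(0.0, 5.0, 50)
true_params = np.array([2.0, 0.7])
data = true_params[0] * np.exp(-true_params[1] * t)

def misfit(p):
    return np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)

n_particles, n_iter, dim = 30, 200, 2
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 5.0])
pos = rng.uniform(lo, hi, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([misfit(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("estimated parameters:", gbest)      # should approach [2.0, 0.7]
```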
Assessing Construct Validity Using Multidimensional Item Response Theory.
ERIC Educational Resources Information Center
Ackerman, Terry A.
The concept of a user-specified validity sector is discussed. The idea of the validity sector combines the work of M. D. Reckase (1986) and R. Shealy and W. Stout (1991). Reckase developed a methodology to represent an item in a multidimensional latent space as a vector. Item vectors are computed using multidimensional item response theory item…
ERIC Educational Resources Information Center
Zhang, Jinming
2004-01-01
It is common to assume during statistical analysis of a multiscale assessment that the assessment has simple structure or that it is composed of several unidimensional subtests. Under this assumption, both the unidimensional and multidimensional approaches can be used to estimate item parameters. This paper theoretically demonstrates that these…
Low-discrepancy sampling of parametric surface using adaptive space-filling curves (SFC)
NASA Astrophysics Data System (ADS)
Hsu, Charles; Szu, Harold
2014-05-01
Space-Filling Curves (SFCs) are encountered in different fields of engineering and computer science, especially where it is important to linearize multidimensional data for effective and robust interpretation of the information. Examples of multidimensional data are matrices, images, tables, computational grids, and Electroencephalography (EEG) sensor data resulting from the discretization of partial differential equations (PDEs). Data operations like matrix multiplications, load/store operations, and updating and partitioning of data sets can be simplified when we choose an efficient way of going through the data. In many applications SFCs present just this optimal manner of mapping multidimensional data onto a one-dimensional sequence. In this report, we begin with an example of a space-filling curve and demonstrate how it can be used, together with the Fast Fourier transform (FFT), to find the greatest similarity through a set of points. Next we give a general introduction to space-filling curves and discuss their properties. Finally, we consider a discrete version of space-filling curves and present experimental results on discrete space-filling curves optimized for special tasks.
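One concrete way to linearize multidimensional indices is the Morton (Z-order) curve; the sketch below is an illustration of the general SFC idea, not the specific curve construction used in the report.

```python
def morton_encode_2d(x, y, bits=16):
    """Interleave the bits of two indices into a single Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)       # even bit positions take x
        key |= ((y >> i) & 1) << (2 * i + 1)   # odd bit positions take y
    return key

# Linearize a small 4x4 grid: neighbouring cells tend to stay close in key order,
# which is what makes SFC orderings useful for locality-preserving data layouts.
cells = [(x, y) for y in range(4) for x in range(4)]
for x, y in sorted(cells, key=lambda c: morton_encode_2d(*c)):
    print(f"({x},{y}) -> key {morton_encode_2d(x, y)}")
```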
Detonation initiation in a model of explosive: Comparative atomistic and hydrodynamics simulations
NASA Astrophysics Data System (ADS)
Murzov, S. A.; Sergeev, O. V.; Dyachkov, S. A.; Egorova, M. S.; Parshikov, A. N.; Zhakhovsky, V. V.
2016-11-01
Here we extend consistent simulations to reactive materials using the example of the AB model explosive. The kinetic model of chemical reactions observed in a molecular dynamics (MD) simulation of a self-sustained detonation wave can be used in hydrodynamic simulations of detonation initiation. Kinetic coefficients are obtained by minimizing the difference between species profiles calculated from the kinetic model and those observed in MD simulations of isochoric thermal decomposition, with the help of the downhill simplex method combined with a random walk in the multidimensional space of the fitted kinetic-model parameters.
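The fitting step can be sketched with a downhill simplex (Nelder-Mead) minimization of the mismatch between a kinetic-model profile and reference species data; the one-step first-order reaction and synthetic reference profile below are placeholders for the AB-model chemistry and the MD data.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Reference species profile standing in for the MD decomposition data:
# here it is generated from a first-order decay with a known rate.
t_ref = np.linspace(0.0, 10.0, 50)
c_ref = np.exp(-0.8 * t_ref)

def kinetic_profile(k):
    # Isochoric thermal decomposition modelled as dc/dt = -k * c
    sol = solve_ivp(lambda t, c: -k * c, (0.0, 10.0), [1.0], t_eval=t_ref)
    return sol.y[0]

def objective(params):
    k = params[0]
    if k <= 0.0:
        return 1.0e6                      # keep the simplex in the physical region
    return np.sum((kinetic_profile(k) - c_ref) ** 2)

res = minimize(objective, x0=[0.1], method="Nelder-Mead")
print("fitted rate coefficient:", res.x[0])   # should approach 0.8
```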
Unidimensional Vertical Scaling in Multidimensional Space. Research Report. ETS RR-17-29
ERIC Educational Resources Information Center
Carlson, James E.
2017-01-01
In this paper, I consider a set of test items that are located in a multidimensional space, S[subscript M], but are located along a curved line in S[subscript M] and can be scaled unidimensionally. Furthermore, I am demonstrating a case in which the test items are administered across 6 levels, such as occurs in K-12 assessment across 6 grade…
NASA Technical Reports Server (NTRS)
Weisskopf, M. C.; Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.
2010-01-01
We present a progress report on the various endeavors we are undertaking at MSFC in support of the Wide Field X-Ray Telescope development. In particular we discuss assembly and alignment techniques, in-situ polishing corrections, and the results of our efforts to optimize mirror prescriptions including polynomial coefficients, relative shell displacements, detector placements and tilts. This optimization does not require a blind search through the multi-dimensional parameter space. Under the assumption that the parameters are small enough so that second-order expansions are valid, we show that the performance at the detector can be expressed as a quadratic function with numerical coefficients derived from a ray trace through the underlying Wolter I optic. The optimal values for the parameters are found by solving the linear system of equations created by setting the derivatives of this function with respect to each parameter to zero.
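The closing step, minimizing a second-order performance model by solving a linear system, can be sketched generically; the Hessian and gradient below are invented stand-ins for coefficients that would come from the ray trace.

```python
import numpy as np

# Quadratic model of a performance metric (e.g. image blur) about the nominal design:
#   f(p) = f0 + g.p + 0.5 * p^T H p
# H and g are placeholders for coefficients derived from ray tracing the optic.
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])   # symmetric positive-definite Hessian
g = np.array([0.4, -0.2, 0.1])   # gradient at the nominal configuration

# Setting df/dp = g + H p = 0 gives the optimal parameter offsets directly,
# with no search through the multi-dimensional parameter space.
p_opt = np.linalg.solve(H, -g)
print("optimal parameter offsets:", p_opt)
```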
ERIC Educational Resources Information Center
Yao, Lihua; Schwarz, Richard D.
2006-01-01
Multidimensional item response theory (IRT) models have been proposed for better understanding the dimensional structure of data or to define diagnostic profiles of student learning. A compensatory multidimensional two-parameter partial credit model (M-2PPC) for constructed-response items is presented that is a generalization of those proposed to…
Lost in space: design of experiments and scientific exploration in a Hogarth Universe.
Lendrem, Dennis W; Lendrem, B Clare; Woods, David; Rowland-Jones, Ruth; Burke, Matthew; Chatfield, Marion; Isaacs, John D; Owen, Martin R
2015-11-01
A Hogarth, or 'wicked', universe is an irregular environment generating data to support erroneous beliefs. Here, we argue that development scientists often work in such a universe. We demonstrate that exploring these multidimensional spaces using small experiments guided by scientific intuition alone gives rise to an illusion of validity and a misplaced confidence in that scientific intuition. By contrast, design of experiments (DOE) permits the efficient mapping of such complex, multidimensional spaces. We describe simulation tools that enable research scientists to explore these spaces in relative safety.
The Gamma-Ray Burst ToolSHED is Open for Business
NASA Astrophysics Data System (ADS)
Giblin, Timothy W.; Hakkila, Jon; Haglin, David J.; Roiger, Richard J.
2004-09-01
The GRB ToolSHED, a Gamma-Ray Burst SHell for Expeditions in Data-Mining, is now online and available via a web browser to all in the scientific community. The ToolSHED is an online web utility that contains pre-processed burst attributes of the BATSE catalog and a suite of induction-based machine learning and statistical tools for classification and cluster analysis. Users create their own login account and study burst properties within user-defined multi-dimensional parameter spaces. Although new GRB attributes are periodically added to the database for user selection, the ToolSHED has a feature that allows users to upload their own burst attributes (e.g. spectral parameters, etc.) so that additional parameter spaces can be explored. A data visualization feature using GNUplot and web-based IDL has also been implemented to provide interactive plotting of user-selected session output. In an era in which GRB observations and attributes are becoming increasingly more complex, a utility such as the GRB ToolSHED may play an important role in deciphering GRB classes and understanding intrinsic burst properties.
NASA Astrophysics Data System (ADS)
Hong, Zixuan; Bian, Fuling
2008-10-01
Geographic space, time space and cognition space are three fundamental and interrelated spaces in geographic information systems for transportation. However, the cognition space and its relationships to the time space and geographic space are often neglected. This paper studies the relationships of these three spaces in urban transportation system from a new perspective and proposes a novel MDS-SOM transformation method which takes the advantages of the techniques of multidimensional scaling (MDS) and self-organizing map (SOM). The MDS-SOM transformation framework includes three kinds of mapping: the geographic-time transformation, the cognition-time transformation and the time-cognition transformation. The transformations in our research provide a better understanding of the interactions of these three spaces and beneficial knowledge is discovered to help the transportation analysis and decision supports.
Scaling Laws for the Multidimensional Burgers Equation with Quadratic External Potential
NASA Astrophysics Data System (ADS)
Leonenko, N. N.; Ruiz-Medina, M. D.
2006-07-01
The reordering of the multidimensional exponential quadratic operator in coordinate-momentum space (see X. Wang, C.H. Oh and L.C. Kwek (1998). J. Phys. A.: Math. Gen. 31:4329-4336) is applied to derive an explicit formulation of the solution to the multidimensional heat equation with quadratic external potential and random initial conditions. The solution to the multidimensional Burgers equation with quadratic external potential under Gaussian strongly dependent scenarios is also obtained via the Hopf-Cole transformation. The limiting distributions of scaling solutions to the multidimensional heat and Burgers equations with quadratic external potential are then obtained under such scenarios.
NASA Astrophysics Data System (ADS)
Shahzad, Munir; Sengupta, Pinaki
2017-08-01
We study the Shastry-Sutherland Kondo lattice model with additional Dzyaloshinskii-Moriya (DM) interactions, exploring the possible magnetic phases in its multi-dimensional parameter space. Treating the local moments as classical spins and using a variational ansatz, we identify the parameter ranges over which various common magnetic orderings are potentially stabilized. Our results reveal that the competing interactions result in a heightened susceptibility towards a wide range of spin configurations including longitudinal ferromagnetic and antiferromagnetic order, coplanar flux configurations and most interestingly, multiple non-coplanar configurations including a novel canted-flux state as the different Hamiltonian parameters like electron density, interaction strengths and degree of frustration are varied. The non-coplanar and non-collinear magnetic ordering of localized spins behave like emergent electromagnetic fields and drive unusual transport and electronic phenomena.
Best Design for Multidimensional Computerized Adaptive Testing With the Bifactor Model
Seo, Dong Gi; Weiss, David J.
2015-01-01
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm (MCAT) with a bifactor model using simulated data. Four item selection methods in MCAT were examined for three bifactor pattern designs using two multidimensional item response theory models. To compare MCAT item selection and estimation methods, a fixed test length was used. The Ds-optimality item selection improved θ estimates with respect to a general factor, and either D- or A-optimality improved estimates of the group factors in three bifactor pattern designs under two multidimensional item response theory models. The MCAT model without a guessing parameter functioned better than the MCAT model with a guessing parameter. The MAP (maximum a posteriori) estimation method provided more accurate θ estimates than the EAP (expected a posteriori) method under most conditions, and MAP showed lower observed standard errors than EAP under most conditions, except for a general factor condition using Ds-optimality item selection. PMID:29795848
A Multidimensional Study of Vocal Function Following Radiation Therapy for Laryngeal Cancers.
Angadi, Vrushali; Dressler, Emily; Stemple, Joseph
2017-06-01
Radiation therapy (XRT) has proven to be an effective curative modality in the treatment of laryngeal cancers. However, XRT also has deleterious effects on vocal function. The aim was to demonstrate the multidimensional nature of deficits in vocal function as a result of radiation therapy for laryngeal cancer. The design was a cohort study. Vocal function parameters were chosen from the 5 domains of voice assessment to complete a multidimensional assessment battery. Adults irradiated (XRT group) for laryngeal cancers were compared to a control group of individuals with no history of head and neck cancers or radiation therapy. The control group was matched in age, sex, and pack years of smoking. Eighteen participants were recruited for the study. The XRT group demonstrated significantly worse clinical values as compared to the control group across select parameters in each of the 5 domains of voice assessment. Radiation therapy for laryngeal cancers results in multidimensional deficits in vocal function. Notably, these deficits persist long term. In the present study sample, multidimensional deficits persisted 2 to 7 years following completion of XRT. The observed persistent multidimensional vocal difficulties highlight the importance of vocal rehabilitation in the irradiated larynx cancer population.
Multidimensional Skyrme-density-functional study of the spontaneous fission of 238U
Sadhukhan, J.; Mazurek, K.; Dobaczewski, J.; ...
2015-01-01
We determined the spontaneous fission lifetime of 238U by a minimization of the action integral in a three-dimensional space of collective variables. Apart from the mass-distribution multipole moments Q20 (elongation) and Q30 (left–right asymmetry), we also considered the pairing-fluctuation parameter λ2 as a collective coordinate. The collective potential was obtained self-consistently using the Skyrme energy density functional SkM*. The inertia tensor was obtained within the nonperturbative cranking approximation to the adiabatic time-dependent Hartree–Fock–Bogoliubov approach. As a result, the pairing-fluctuation parameter λ2 allowed us to control the pairing gap along the fission path, which significantly changed the spontaneous fission lifetime.
Hidden multidimensional social structure modeling applied to biased social perception
NASA Astrophysics Data System (ADS)
Maletić, Slobodan; Zhao, Yi
2018-02-01
Intricacies of the structure of social relations are realized by representing a collection of overlapping opinions as a simplicial complex, thus building latent multidimensional structures, through which agents are, virtually, moving as they exchange opinions. The influence of opinion space structure on the distribution of opinions is demonstrated by modeling consensus phenomena when the opinion exchange between individuals may be affected by the false consensus effect. The results indicate that in the cases with and without bias, the road toward consensus is influenced by the structure of multidimensional space of opinions, and in the biased case, complete consensus is achieved. The applications of proposed modeling framework can easily be generalized, as they transcend opinion formation modeling.
A Scalar Product Model for the Multidimensional Scaling of Choice
ERIC Educational Resources Information Center
Bechtel, Gordon G.; And Others
1971-01-01
Contains a solution for the multidimensional scaling of pairwise choice when individuals are represented as dimensional weights. The analysis supplies an exact least squares solution and estimates of group unscalability parameters. (DG)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barszez, Anne-Marie; Camelbeeck, Thierry; Plumier, Andre
Northwest Europe is a region in which damaging earthquakes occur. Assessing the risk of damage is useful, but it is not easy work based on exact science. In this paper, we propose a general tool for a first-level assessment of seismic risks (rapid diagnosis). General methodological aspects are presented. For a given building, the risk is represented by a volume in a multi-dimensional space. This space is defined by axes representing the main parameters that have an influence on the risk. We notably stress the importance of including a parameter to account for the specific value of cultural heritage. We then apply the proposed tool to analyze and compare methods of seismic risk assessment used in Belgium, which differ by the spatial scale of the studied area. Case studies for the whole Belgian territory and for parts of the cities of Liege and Mons (BE) also aim to give some sense of the overall risk in Belgium.
NASA Astrophysics Data System (ADS)
Chen, D. M.; Clapp, R. G.; Biondi, B.
2006-12-01
Ricksep is a freely-available interactive viewer for multi-dimensional data sets. The viewer is very useful for simultaneous display of multiple data sets from different viewing angles, animation of movement along a path through the data space, and selection of local regions for data processing and information extraction. Several new viewing features are added to enhance the program's functionality in the following three aspects. First, two new data synthesis algorithms are created to adaptively combine information from a data set with mostly high-frequency content, such as seismic data, and another data set with mainly low-frequency content, such as velocity data. Using the algorithms, these two data sets can be synthesized into a single data set which resembles the high-frequency data set on a local scale and at the same time resembles the low-frequency data set on a larger scale. As a result, the originally separated high- and low-frequency details can now be more accurately and conveniently studied together. Second, a projection algorithm is developed to display paths through the data space. Paths are geophysically important because they represent wells into the ground. Two difficulties often associated with tracking paths are that they normally cannot be seen clearly inside multi-dimensional spaces and depth information is lost along the direction of projection when ordinary projection techniques are used. The new algorithm projects samples along the path in three orthogonal directions and effectively restores important depth information by using variable projection parameters which are functions of the distance away from the path. Multiple paths in the data space can be generated using different character symbols as positional markers, and users can easily create, modify, and view paths in real time. Third, a viewing history list is implemented which enables Ricksep's users to create, edit and save a recipe for the sequence of viewing states. Then, the recipe can be loaded into an active Ricksep session, after which the user can navigate to any state in the sequence and modify the sequence from that state. Typical uses of this feature are undoing and redoing viewing commands and animating a sequence of viewing states. A theoretical discussion is carried out and several examples using real seismic data are provided to show how these new Ricksep features provide more convenient, accurate ways to manipulate multi-dimensional data sets.
Medical image registration based on normalized multidimensional mutual information
NASA Astrophysics Data System (ADS)
Li, Qi; Ji, Hongbing; Tong, Ming
2009-10-01
Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register multimodality images. Finally the immune algorithm is used to search registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
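A minimal histogram-based normalized mutual information between two images, using intensities only; the criterion in the paper additionally folds in ordinal (spatial) features, which this sketch omits.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

# Toy check: an image compared with a shuffled copy has lower NMI than with itself.
rng = np.random.default_rng(3)
img = rng.random((64, 64))
print("self:    ", normalized_mutual_information(img, img))
print("shuffled:", normalized_mutual_information(img, rng.permutation(img.ravel()).reshape(64, 64)))
```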
Precision Attitude Determination for an Infrared Space Telescope
NASA Technical Reports Server (NTRS)
Benford, Dominic J.
2008-01-01
We have developed performance simulations for a precision attitude determination system using a focal plane star tracker on an infrared space telescope. The telescope is being designed for the Destiny mission to measure cosmologically distant supernovae as one of the candidate implementations for the Joint Dark Energy Mission. Repeat observations of the supernovae require attitude control at the level of 0.010 arcseconds (0.05 microradians) during integrations and at repeat intervals up to and over a year. While absolute accuracy is not required, the repoint precision is challenging. We have simulated the performance of a focal plane star tracker in a multidimensional parameter space, including pixel size, read noise, and readout rate. Systematic errors such as proper motion, velocity aberration, and parallax can be measured and compensated out. Our prediction is that a relative attitude determination accuracy of 0.001 to 0.002 arcseconds (0.005 to 0.010 microradians) will be achievable.
Potter, Timothy; Corneille, Olivier; Ruys, Kirsten I; Rhodes, Ginwan
2007-04-01
Findings on both attractiveness and memory for faces suggest that people should perceive more similarity among attractive than among unattractive faces. A multidimensional scaling approach was used to test this hypothesis in two studies. In Study 1, we derived a psychological face space from similarity ratings of attractive and unattractive Caucasian female faces. In Study 2, we derived a face space for attractive and unattractive male faces of Caucasians and non-Caucasians. Both studies confirm that attractive faces are indeed more tightly clustered than unattractive faces in people's psychological face spaces. These studies provide direct and original support for theoretical assumptions previously made in the face space and face memory literatures.
On simplified application of multidimensional Savitzky-Golay filters and differentiators
NASA Astrophysics Data System (ADS)
Shekhar, Chandra
2016-02-01
I propose a simplified approach for multidimensional Savitzky-Golay filtering, to enable its fast and easy implementation in scientific and engineering applications. The proposed method, which is derived from a generalized framework laid out by Thornley (D. J. Thornley, "Novel anisotropic multidimensional convolution filters for derivative estimation and reconstruction" in Proceedings of International Conference on Signal Processing and Communications, November 2007), first transforms any given multidimensional problem into a unique one, by transforming coordinates of the sampled data nodes to unity-spaced, uniform data nodes, and then performs filtering and calculates partial derivatives on the unity-spaced nodes. It is followed by transporting the calculated derivatives back onto the original data nodes by using the chain rule of differentiation. The burden to performing the most cumbersome task, which is to carry out the filtering and to obtain derivatives on the unity-spaced nodes, is almost eliminated by providing convolution coefficients for a number of convolution kernel sizes and polynomial orders, up to four spatial dimensions. With the availability of the convolution coefficients, the task of filtering at a data node reduces merely to multiplication of two known matrices. Simplified strategies to adequately address near-boundary data nodes and to calculate partial derivatives there are also proposed. Finally, the proposed methodologies are applied to a three-dimensional experimentally obtained data set, which shows that multidimensional Savitzky-Golay filters and differentiators perform well in both the internal and the near-boundary regions of the domain.
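The strategy of filtering and differentiating on unity-spaced nodes and then rescaling derivatives by the chain rule can be illustrated in one dimension with SciPy's savgol_filter; the multidimensional kernels of the paper are not reproduced here.

```python
import numpy as np
from scipy.signal import savgol_filter

# Noisy samples of f(x) = sin(x) on a uniform grid with spacing h (not unity)
h = 0.05
x = np.arange(0.0, 2.0 * np.pi, h)
rng = np.random.default_rng(4)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)

# Filter and differentiate on unity-spaced (index) nodes, then apply the chain
# rule: d/dx = (d/d index) * (d index / dx) = (d/d index) / h.
smoothed = savgol_filter(y, window_length=11, polyorder=3)
dy_didx = savgol_filter(y, window_length=11, polyorder=3, deriv=1)   # per index step
dy_dx = dy_didx / h

print("max error of smoothed values: ", np.max(np.abs(smoothed - np.sin(x))))
print("max error of first derivative:", np.max(np.abs(dy_dx - np.cos(x))))
```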
NASA Astrophysics Data System (ADS)
Zhuk, Alexander; Chopovsky, Alexey; Fakhr, Seyed Hossein; Shulga, Valerii; Wei, Han
2017-11-01
In a multidimensional Kaluza-Klein model with Ricci-flat internal space, we study the gravitational field in the weak-field limit. This field is created by two coupled sources. The first is a point-like massive body which has a dust-like equation of state in the external space and an arbitrary equation-of-state parameter Ω in the internal space. The second source is a static spherically symmetric massive scalar field centered at the origin where the point-like massive body is. The perturbed metric coefficients found are used to calculate the parameterized post-Newtonian (PPN) parameter γ. We define under which conditions γ can be very close to unity in accordance with the relativistic gravitational tests in the solar system. This can take place for both massive and massless scalar fields. For example, to have γ ≈ 1 in the solar system, the mass of the scalar field should be μ ≳ 5.05 × 10^{-49} g ∼ 2.83 × 10^{-16} eV. In all cases, we arrive at the same conclusion: to be in agreement with the relativistic gravitational tests, the gravitating mass should have tension, Ω = -1/2.
A New Heterogeneous Multidimensional Unfolding Procedure
ERIC Educational Resources Information Center
Park, Joonwook; Rajagopal, Priyali; DeSarbo, Wayne S.
2012-01-01
A variety of joint space multidimensional scaling (MDS) methods have been utilized for the spatial analysis of two- or three-way dominance data involving subjects' preferences, choices, considerations, intentions, etc. so as to provide a parsimonious spatial depiction of the underlying relevant dimensions, attributes, stimuli, and/or subjects'…
Higgs-portal assisted Higgs inflation with a sizeable tensor-to-scalar ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jinsu; Ko, Pyungwon; Park, Wan-Il, E-mail: kimjinsu@kias.re.kr, E-mail: pko@kias.re.kr, E-mail: Wanil.Park@uv.es
We show that the Higgs portal interactions involving an extra dark Higgs field can generically save the original Higgs inflation of the standard model (SM) from the problem of a deep non-SM vacuum in the SM Higgs potential. Specifically, we show that such interactions disconnect the top quark pole mass from inflationary observables and open up a multi-dimensional parameter space that saves Higgs inflation, thanks to the additional parameters (the dark Higgs boson mass m_φ, the mixing angle α between the SM Higgs H and the dark Higgs Φ, and the mixed quartic coupling) affecting the RG running of the Higgs quartic coupling. The effect of Higgs portal interactions may lead to a larger tensor-to-scalar ratio, 0.08 ≲ r ≲ 0.1, by adjusting the relevant parameters over wide ranges of α and m_φ, some region of which can be probed at future colliders. Performing a numerical analysis, we find an allowed region of parameters matching the latest Planck data.
Application of differential evolution algorithm on self-potential data.
Li, Xiangtao; Yin, Minghao
2012-01-01
Differential evolution (DE) is a population based evolutionary algorithm widely used for solving multidimensional global optimization problems over continuous spaces, and has been successfully used to solve several kinds of problems. In this paper, differential evolution is used for quantitative interpretation of self-potential data in geophysics. Six parameters are estimated, including the electrical dipole moment, the depth of the source, the distance from the origin, the polarization angle, and the regional coefficients. This study considers three kinds of data from Turkey: noise-free data, contaminated synthetic data, and a field example. The differential evolution and the corresponding model parameters are constructed with respect to the number of generations. Then, we show the variation of the parameters in the vicinity of the low-misfit area. Moreover, we show how the frequency distribution of each parameter is related to the number of DE iterations. Experimental results show that DE can be used for solving the quantitative interpretation of self-potential data efficiently compared with previous methods.
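A rough sketch of such an inversion with SciPy's differential_evolution; the simplified sphere-type anomaly expression and the synthetic noisy profile are assumptions made to keep the example self-contained, not the exact model of the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Simplified sphere-type self-potential anomaly, used only to make the example
# self-contained (the paper's full model also includes regional coefficients).
def sp_anomaly(x, K, depth, x0, alpha):
    r2 = (x - x0) ** 2 + depth ** 2
    return K * ((x - x0) * np.cos(alpha) + depth * np.sin(alpha)) / r2 ** 1.5

x_obs = np.linspace(-100.0, 100.0, 101)
true = dict(K=5.0e4, depth=20.0, x0=10.0, alpha=0.6)
rng = np.random.default_rng(5)
v_obs = sp_anomaly(x_obs, **true) + 0.5 * rng.standard_normal(x_obs.size)  # noisy data

def misfit(p):
    return np.sum((sp_anomaly(x_obs, *p) - v_obs) ** 2)

bounds = [(1.0e3, 1.0e6), (1.0, 100.0), (-50.0, 50.0), (0.0, np.pi / 2)]
result = differential_evolution(misfit, bounds, seed=1, tol=1e-8)
print("recovered (K, depth, x0, alpha):", result.x)
```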
High-Level Performance Modeling of SAR Systems
NASA Technical Reports Server (NTRS)
Chen, Curtis
2006-01-01
SAUSAGE (Still Another Utility for SAR Analysis that's General and Extensible) is a computer program for modeling (see figure) the performance of synthetic-aperture radar (SAR) or interferometric synthetic-aperture radar (InSAR or IFSAR) systems. The user is assumed to be familiar with the basic principles of SAR imaging and interferometry. Given design parameters (e.g., altitude, power, and bandwidth) that characterize a radar system, the software predicts various performance metrics (e.g., signal-to-noise ratio and resolution). SAUSAGE is intended to be a general software tool for quick, high-level evaluation of radar designs; it is not meant to capture all the subtleties, nuances, and particulars of specific systems. SAUSAGE was written to facilitate the exploration of engineering tradeoffs within the multidimensional space of design parameters. Typically, this space is examined through an iterative process of adjusting the values of the design parameters and examining the effects of the adjustments on the overall performance of the system at each iteration. The software is designed to be modular and extensible to enable consideration of a variety of operating modes and antenna beam patterns, including, for example, strip-map and spotlight SAR acquisitions, polarimetry, burst modes, and squinted geometries.
Insights into quasar UV spectra using unsupervised clustering analysis
NASA Astrophysics Data System (ADS)
Tammour, A.; Gallagher, S. C.; Daley, M.; Richards, G. T.
2016-06-01
Machine learning techniques can provide powerful tools to detect patterns in multidimensional parameter space. We use K-means - a simple yet powerful unsupervised clustering algorithm which picks out structure in unlabelled data - to study a sample of quasar UV spectra from the Quasar Catalog of the 10th Data Release of the Sloan Digital Sky Survey (SDSS-DR10) of Paris et al. Detecting patterns in large data sets helps us gain insights into the physical conditions and processes giving rise to the observed properties of quasars. We use K-means to find clusters in the parameter space of the equivalent width (EW), the blue- and red-half-width at half-maximum (HWHM) of the Mg II 2800 Å line, the C IV 1549 Å line, and the C III] 1908 Å blend in samples of broad absorption line (BAL) and non-BAL quasars at redshift 1.6-2.1. Using this method, we successfully recover correlations well-known in the UV regime such as the anti-correlation between the EW and blueshift of the C IV emission line and the shape of the ionizing spectral energy distribution (SED) probed by the strength of He II and the Si III]/C III] ratio. We find this to be particularly evident when the properties of C III] are used to find the clusters, while those of Mg II proved to be less strongly correlated with the properties of the other lines in the spectra such as the width of C IV or the Si III]/C III] ratio. We conclude that unsupervised clustering methods (such as K-means) are powerful methods for finding `natural' binning boundaries in multidimensional data sets and discuss caveats and future work.
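A minimal K-means run of the kind described, using scikit-learn; the feature matrix below is random and its columns merely mirror the line quantities named in the abstract, standing in for real SDSS-DR10 measurements.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Placeholder feature matrix: rows are quasars, columns are line measurements
# (e.g. EW, blue HWHM, red HWHM of CIII]); real values would come from SDSS-DR10 spectra.
rng = np.random.default_rng(6)
features = rng.random((500, 3))

X = StandardScaler().fit_transform(features)       # K-means is scale-sensitive
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

print("cluster sizes:", np.bincount(km.labels_))
print("cluster centres (standardized units):\n", km.cluster_centers_)
```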
Some applications of the multi-dimensional fractional order for the Riemann-Liouville derivative
NASA Astrophysics Data System (ADS)
Ahmood, Wasan Ajeel; Kiliçman, Adem
2017-01-01
In this paper, the aim of this work is to study theorem for the one-dimensional space-time fractional deriative, generalize some function for the one-dimensional fractional by table represents the fractional Laplace transforms of some elementary functions to be valid for the multi-dimensional fractional Laplace transform and give the definition of the multi-dimensional fractional Laplace transform. This study includes that, dedicate the one-dimensional fractional Laplace transform for functions of only one independent variable and develop of the one-dimensional fractional Laplace transform to multi-dimensional fractional Laplace transform based on the modified Riemann-Liouville derivative.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch; Zaidi, Habib; Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva
2014-06-15
Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherical symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function superposition and keeping the image representation error to a minimum, is feasible, with the parameter combination range depending upon the scanner's intrinsic resolution characteristics. Conclusions: Using the printed point source array as an MR-compatible methodology for experimentally measuring the scanner's PSF, the system's spatially variant resolution properties were successfully evaluated in image space. Overall the PET subsystem exhibits excellent resolution characteristics mainly due to the fact that the raw data are not under-sampled/rebinned, enabling the spatial resolution to be dictated by the scanner's intrinsic resolution and the image reconstruction parameters. Due to the impact of these parameters on the resolution properties of the reconstructed images, the image space PSF varies both under spatial transformations and due to basis function parameter selection. Nonetheless, for a range of basis function parameters, the image space PSF remains unaffected, with the range depending on the scanner's intrinsic resolution properties.
A model of the human supervisor
NASA Technical Reports Server (NTRS)
Kok, J. J.; Vanwijk, R. A.
1977-01-01
A general model of the human supervisor's behavior is given. Submechanisms of the model include the observer/reconstructor, decision-making, and the controller. A set of hypotheses is postulated for the relations between the task variables and the parameters of the different submechanisms of the model. Verification of the model hypotheses is considered using variations in the task variables. An approach is suggested for the identification of the model parameters which makes use of a multidimensional error criterion. Each of the elements of this multidimensional criterion corresponds to a certain aspect of the supervisor's behavior, and is directly related to a particular part of the model and its parameters. This approach offers good possibilities for an efficient parameter adjustment procedure.
Multidimensional Programming Methods for Energy Facility Siting: Alternative Approaches
NASA Technical Reports Server (NTRS)
Solomon, B. D.; Haynes, K. E.
1982-01-01
The use of multidimensional optimization methods in solving power plant siting problems, which are characterized by several conflicting, noncommensurable objectives, is addressed. After a discussion of data requirements and exclusionary site screening methods for bounding the decision space, classes of multiobjective and goal programming models are discussed in the context of finite site selection. Advantages and limitations of these approaches are highlighted and the linkage of multidimensional methods with the subjective, behavioral components of the power plant siting process is emphasized.
Multidimensional stochastic approximation using locally contractive functions
NASA Technical Reports Server (NTRS)
Lawton, W. M.
1975-01-01
A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
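A one-dimensional sketch of a Robbins-Monro iteration converging to the fixed point of a contractive map observed through noise; the multivariate mixture-estimation application of the abstract is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

def g(x):
    return 0.5 * x + 1.0        # contractive map with fixed point x* = 2

x = 0.0
for n in range(1, 5001):
    noisy = g(x) + 0.3 * rng.standard_normal()   # only noisy evaluations are available
    a_n = 1.0 / n                                # step sizes: sum a_n = inf, sum a_n^2 < inf
    x = x + a_n * (noisy - x)                    # move toward the observed fixed-point estimate

print("estimate of the fixed point:", x)          # should approach 2.0
```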
Gemmell, Philip; Burrage, Kevin; Rodriguez, Blanca; Quinn, T. Alexander
2014-01-01
Variability is observed at all levels of cardiac electrophysiology. Yet, the underlying causes and importance of this variability are generally unknown, and difficult to investigate with current experimental techniques. The aim of the present study was to generate populations of computational ventricular action potential models that reproduce experimentally observed intercellular variability of repolarisation (represented by action potential duration) and to identify its potential causes. A systematic exploration of the effects of simultaneously varying the magnitude of six transmembrane current conductances (transient outward, rapid and slow delayed rectifier K+, inward rectifying K+, L-type Ca2+, and Na+/K+ pump currents) in two rabbit-specific ventricular action potential models (Shannon et al. and Mahajan et al.) at multiple cycle lengths (400, 600, 1,000 ms) was performed. This was accomplished with distributed computing software specialised for multi-dimensional parameter sweeps and grid execution. An initial population of 15,625 parameter sets was generated for both models at each cycle length. Action potential durations of these populations were compared to experimentally derived ranges for rabbit ventricular myocytes. 1,352 parameter sets for the Shannon model and 779 parameter sets for the Mahajan model yielded action potential duration within the experimental range, demonstrating that a wide array of ionic conductance values can be used to simulate a physiological rabbit ventricular action potential. Furthermore, by using clutter-based dimension reordering, a technique that allows visualisation of multi-dimensional spaces in two dimensions, the interaction of current conductances and their relative importance to the ventricular action potential at different cycle lengths were revealed. Overall, this work represents an important step towards a better understanding of the role that variability in current conductances may play in experimentally observed intercellular variability of rabbit ventricular action potential repolarisation. PMID:24587229
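The systematic conductance sweep can be sketched generically; the surrogate action-potential-duration function and the accepted range below are invented, standing in for the rabbit AP models and experimental data of the study.

```python
import itertools
import numpy as np

# Scaling factors applied to each of six conductances (5 levels each -> 5**6 = 15625 sets)
levels = np.linspace(0.5, 1.5, 5)
conductances = ["g_to", "g_Kr", "g_Ks", "g_K1", "g_CaL", "I_NaK"]

def surrogate_apd(scales):
    # Invented surrogate: APD lengthens with the Ca current, shortens with K currents.
    g_to, g_kr, g_ks, g_k1, g_cal, g_nak = scales
    return 200.0 * g_cal / (0.25 * (g_to + g_kr + g_ks + g_k1)) * (1.0 / g_nak) ** 0.2

apd_range = (150.0, 250.0)                 # stand-in for the experimental APD range (ms)
accepted = [
    s for s in itertools.product(levels, repeat=len(conductances))
    if apd_range[0] <= surrogate_apd(s) <= apd_range[1]
]
print(f"{len(accepted)} of {5**len(conductances)} parameter sets fall in the accepted range")
```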
Unidimensional Interpretations for Multidimensional Test Items
ERIC Educational Resources Information Center
Kahraman, Nilufer
2013-01-01
This article considers potential problems that can arise in estimating a unidimensional item response theory (IRT) model when some test items are multidimensional (i.e., show a complex factorial structure). More specifically, this study examines (1) the consequences of model misfit on IRT item parameter estimates due to unintended minor item-level…
A Brane Model, Its Ads-DS States and Their Agitated Extra Dimensions
NASA Astrophysics Data System (ADS)
Günther, Uwe; Vargas Moniz, Paulo; Zhuk, Alexander
2006-02-01
We consider multidimensional gravitational models with a nonlinear scalar curvature term and form fields. It is assumed that the higher dimensional spacetime undergoes a spontaneous compactification to a warped product manifold. Particular attention is paid to models with quadratic scalar curvature terms and a Freund-Rubin-like ansatz for solitonic form fields. It is shown that for certain parameter ranges the extra dimensions are stabilized for any sign of the internal space curvature, the bulk cosmological constant and of the effective four-dimensional cosmological constant. Moreover, the effective cosmological constant can satisfy the observable limit on the dark energy density.
A General Multidimensional Model for the Measurement of Cultural Differences.
ERIC Educational Resources Information Center
Olmedo, Esteban L.; Martinez, Sergio R.
A multidimensional model for measuring cultural differences (MCD) based on factor analytic theory and techniques is proposed. The model assumes that a cultural space may be defined by means of a relatively small number of orthogonal dimensions which are linear combinations of a much larger number of cultural variables. Once a suitable,…
A nonlocal electron conduction model for multidimensional radiation hydrodynamics codes
NASA Astrophysics Data System (ADS)
Schurtz, G. P.; Nicolaï, Ph. D.; Busquet, M.
2000-10-01
Numerical simulation of laser-driven Inertial Confinement Fusion (ICF) related experiments requires the use of large multidimensional hydro codes. Though these codes include detailed physics for numerous phenomena, they deal poorly with electron conduction, which is the leading energy transport mechanism of these systems. Electron heat flow is known, since the work of Luciani, Mora, and Virmont (LMV) [Phys. Rev. Lett. 51, 1664 (1983)], to be a nonlocal process, which the local Spitzer-Harm theory, even flux limited, is unable to account for. The present work aims at extending the original formula of LMV to two or three dimensions of space. This multidimensional extension leads to an equivalent transport equation suitable for easy implementation in a two-dimensional radiation-hydrodynamic code. Simulations are presented and compared to Fokker-Planck simulations in one and two dimensions of space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Minghai; Duan, Mojie; Fan, Jue
The thermodynamics and kinetics of protein folding and protein conformational changes are governed by the underlying free energy landscape. However, the multidimensional nature of the free energy landscape makes it difficult to describe. We propose to use a weighted-graph approach to depict the free energy landscape, with the nodes on the graph representing the conformational states and the edge weights reflecting the free energy barriers between the states. Our graph is constructed from a molecular dynamics trajectory and does not involve projecting the multi-dimensional free energy landscape onto a low-dimensional space defined by a few order parameters. The calculation of free energy barriers was based on transition-path theory using the MSMBuilder2 package. We compare our graph with the widely used transition disconnectivity graph (TRDG), which is constructed from the same trajectory, and show that our approach gives a more accurate description of the free energy landscape than the TRDG approach even though the latter can be organized into a simple tree representation.
Stochastic Evolutionary Algorithms for Planning Robot Paths
NASA Technical Reports Server (NTRS)
Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard
2006-01-01
A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
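A generic simulated-annealing sketch showing how occasional uphill moves help escape local minima of an energy-like error measure; the toy two-dimensional landscape stands in for the joint-angle path-planning cost, and none of the settings come from the software described.

```python
import math
import random

random.seed(8)

def energy(x, y):
    # Toy multi-minimum landscape standing in for a path-planning error measure
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2   # Himmelblau's function

x, y = -4.0, -4.0
temperature, cooling = 10.0, 0.995
best = (x, y, energy(x, y))

for _ in range(20000):
    # Propose a small random move in configuration space
    nx, ny = x + random.uniform(-0.2, 0.2), y + random.uniform(-0.2, 0.2)
    d_e = energy(nx, ny) - energy(x, y)
    # Accept downhill moves always, uphill moves with Boltzmann probability
    if d_e < 0 or random.random() < math.exp(-d_e / temperature):
        x, y = nx, ny
        if energy(x, y) < best[2]:
            best = (x, y, energy(x, y))
    temperature *= cooling

print("best configuration found:", best)
```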
Using partially labeled data for normal mixture identification with application to class definition
NASA Technical Reports Server (NTRS)
Shahshahani, Behzad M.; Landgrebe, David A.
1992-01-01
The problem of estimating the parameters of a normal mixture density when, in addition to the unlabeled samples, sets of partially labeled samples are available is addressed. The density of the multidimensional feature space is modeled with a normal mixture. It is assumed that the set of components of the mixture can be partitioned into several classes and that training samples are available from each class. Since for any training sample the class of origin is known but the exact component of origin within the corresponding class is unknown, the training samples are considered to be partially labeled. The EM iterative equations are derived for estimating the parameters of the normal mixture in the presence of partially labeled samples. These equations can be used to combine the supervised and nonsupervised learning processes.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Brown, G.; Rikvold, P. A.
2017-05-01
A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
Liu, Fei; Zhang, Xi; Jia, Yan
2015-01-01
In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.
Devaney chaos, Li-Yorke chaos, and multi-dimensional Li-Yorke chaos for topological dynamics
NASA Astrophysics Data System (ADS)
Dai, Xiongping; Tang, Xinjia
2017-11-01
Let π : T × X → X, written T↷π X, be a topological semiflow/flow on a uniform space X with T a multiplicative topological semigroup/group not necessarily discrete. We then prove: If T↷π X is non-minimal topologically transitive with dense almost periodic points, then it is sensitive to initial conditions. As a result of this, Devaney chaos ⇒ Sensitivity to initial conditions, for this very general setting. Let R+↷π X be a C0-semiflow on a Polish space; then we show: If R+↷π X is topologically transitive with at least one periodic point p and there is a dense orbit with no nonempty interior, then it is multi-dimensional Li-Yorke chaotic; that is, there is a uncountable set Θ ⊆ X such that for any k ≥ 2 and any distinct points x1 , … ,xk ∈ Θ, one can find two time sequences sn → ∞ ,tn → ∞ with Moreover, let X be a non-singleton Polish space; then we prove: Any weakly-mixing C0-semiflow R+↷π X is densely multi-dimensional Li-Yorke chaotic. Any minimal weakly-mixing topological flow T↷π X with T abelian is densely multi-dimensional Li-Yorke chaotic. Any weakly-mixing topological flow T↷π X is densely Li-Yorke chaotic. We in addition construct a completely Li-Yorke chaotic minimal SL (2 , R)-acting flow on the compact metric space R ∪ { ∞ }. Our various chaotic dynamics are sensitive to the choices of the topology of the phase semigroup/group T.
A new clustering algorithm applicable to multispectral and polarimetric SAR images
NASA Technical Reports Server (NTRS)
Wong, Yiu-Fai; Posner, Edward C.
1993-01-01
We describe an application of a scale-space clustering algorithm to the classification of a multispectral and polarimetric SAR image of an agricultural site. After the initial polarimetric and radiometric calibration and noise cancellation, we extracted a 12-dimensional feature vector for each pixel from the scattering matrix. The clustering algorithm was able to partition a set of unlabeled feature vectors from 13 selected sites, each site corresponding to a distinct crop, into 13 clusters without any supervision. The cluster parameters were then used to classify the whole image. The classification map is much less noisy and more accurate than those obtained by hierarchical rules. Starting with every point as a cluster, the algorithm works by melting the system to produce a tree of clusters in the scale space. It can cluster data in any multidimensional space and is insensitive to variability in cluster densities, sizes and ellipsoidal shapes. This algorithm, more powerful than existing ones, may be useful for remote sensing for land use.
NASA Astrophysics Data System (ADS)
Lefebvre, Eric; Helleur, Christopher; Kashyap, Nathan
2008-03-01
Maritime surveillance of coastal regions requires operational staff to integrate a large amount of information from a variety of military and civilian sources. The diverse nature of the information sources makes complete automation difficult. The volume of vessels tracked and the number of sources make it difficult for the limited operation centre staff to fuse all the information manually within a reasonable timeframe. In this paper, a conceptual decision space is proposed to provide a framework for automating the process by which operators integrate the sources needed to maintain Maritime Domain Awareness. The decision space contains all potential pairs of ship tracks that are candidates for fusion. The location of the candidate pairs in this defined space depends on the values of the parameters used to make a decision. In the application presented, three independent parameters are used: the source detection efficiency, the geo-feasibility, and the track quality. One of three decisions is applied to each candidate track pair based on these three parameters: 1. to accept the fusion, in which case the tracks are fused into one track; 2. to reject the fusion, in which case the candidate track pair is removed from the list of potential fusions; and 3. to defer the fusion, in which case no fusion occurs but the candidate track pair remains in the list of potential fusions until sufficient information is provided. This paper demonstrates in an operational setting how the proposed conceptual space is used to optimize the different thresholds for automatic fusion decisions while minimizing the list of unresolved cases left to the operator.
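A small illustrative sketch of the three-way decision rule, assuming each candidate track pair carries the three parameters named above and that simple scalar thresholds separate the accept, defer, and reject regions. The class, field, and threshold names are hypothetical; the paper treats the parameters as independent axes of the decision space rather than collapsing them into one score as done here.

```python
from dataclasses import dataclass

@dataclass
class CandidatePair:
    source_detection_efficiency: float   # all three assumed to lie in [0, 1]
    geo_feasibility: float
    track_quality: float

def fusion_decision(pair, accept_thr=0.8, reject_thr=0.3):
    """Place a candidate pair in the decision space and return one of the
    three decisions described above. A single combined score is used here
    purely for illustration."""
    score = min(pair.source_detection_efficiency,
                pair.geo_feasibility,
                pair.track_quality)
    if score >= accept_thr:
        return "accept"       # fuse the two tracks into one
    if score <= reject_thr:
        return "reject"       # drop the pair from the candidate list
    return "defer"            # keep the pair until more information arrives

print(fusion_decision(CandidatePair(0.9, 0.85, 0.95)))
print(fusion_decision(CandidatePair(0.6, 0.4, 0.7)))
```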
ERIC Educational Resources Information Center
Ding, Cody S.; Davison, Mark L.
2010-01-01
Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…
ERIC Educational Resources Information Center
Hagger, Martin S.; Biddle, Stuart J. H.; Chow, Edward W.; Stambulova, Natalia; Kavussanu, Maria
2003-01-01
Examined the generalizability of the form, structural parameters, and latent means of a hierarchical multidimensional model of physical self-perceptions in adolescents from three cultures. A children's version of the physical self-perception profile was administered to British, Hong Kong, and Russian students. Tests of cross-cultural…
ERIC Educational Resources Information Center
de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.
2010-01-01
We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…
Defining process design space for monoclonal antibody cell culture.
Abu-Absi, Susan Fugett; Yang, LiYing; Thompson, Patrick; Jiang, Canping; Kandula, Sunitha; Schilling, Bernhard; Shukla, Abhinav A
2010-08-15
The concept of design space has been taking root as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. During mapping of the process design space, the multidimensional combination of operational variables is studied to quantify the impact on process performance in terms of productivity and product quality. An efficient methodology to map the design space for a monoclonal antibody cell culture process is described. A failure modes and effects analysis (FMEA) was used as the basis for the process characterization exercise. This was followed by an integrated study of the inoculum stage of the process which includes progressive shake flask and seed bioreactor steps. The operating conditions for the seed bioreactor were studied in an integrated fashion with the production bioreactor using a two stage design of experiments (DOE) methodology to enable optimization of operating conditions. A two level Resolution IV design was followed by a central composite design (CCD). These experiments enabled identification of the edge of failure and classification of the operational parameters as non-key, key or critical. In addition, the models generated from the data provide further insight into balancing productivity of the cell culture process with product quality considerations. Finally, process and product-related impurity clearance was evaluated by studies linking the upstream process with downstream purification. Production bioreactor parameters that directly influence antibody charge variants and glycosylation in CHO systems were identified.
ERIC Educational Resources Information Center
Bergmark, Ulrika; Westman, Susanne
2016-01-01
This paper discusses a case study in teacher education in Sweden, focusing on creating spaces for student engagement through co-creating curriculum. It highlights democratic values and a multidimensional learning view as underpinning such endeavors. The main findings are that co-creating curriculum is an ambiguous process entailing unpredictable,…
ERIC Educational Resources Information Center
Nosofsky, Robert M.; Stanton, Roger D.
2006-01-01
Observers made speeded old-new recognition judgments of color stimuli embedded in a multidimensional similarity space. The paradigm used multiple lists but with the underlying similarity structures repeated across lists, to allow for quantitative modeling of the data at the individual-participant and individual-item levels. Correct rejection…
Population Coding of Visual Space: Modeling
Lehky, Sidney R.; Sereno, Anne B.
2011-01-01
We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation. PMID:21344012
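A minimal sketch of the modeling idea, assuming a population of two-dimensional Gaussian receptive fields whose centers are dispersed around the fovea, with classical MDS (scikit-learn's implementation) applied to the unlabeled population response vectors. RF counts, diameters, dispersion values, and the stimulus grid are illustrative, not the paper's parameters.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Encoding population: Gaussian RFs whose centers are dispersed around the fovea.
n_cells, dispersion, rf_sigma = 200, 3.0, 2.0      # illustrative values
centers = rng.normal(0.0, dispersion, size=(n_cells, 2))

# Stimuli on a grid of physical positions.
grid = np.linspace(-2, 2, 9)
stimuli = np.array([(x, y) for x in grid for y in grid])

# Population firing rates (unlabeled: RF centers are never used after this point).
d2 = ((stimuli[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
rates = np.exp(-d2 / (2 * rf_sigma ** 2))

# Intrinsic representation: metric MDS on distances between response vectors.
embedding = MDS(n_components=2, dissimilarity="euclidean", random_state=0)
recovered = embedding.fit_transform(rates)
print(recovered.shape)        # (81, 2): recovered relative positions of the stimuli
```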
NASA Astrophysics Data System (ADS)
Yang, Xiaojun; Lu, Dun; Ma, Chengfang; Zhang, Jun; Zhao, Wanhua
2017-01-01
The motor thrust force has lots of harmonic components due to the nonlinearity of drive circuit and motor itself in the linear motor feed drive system. What is more, in the motion process, these thrust force harmonics may vary with the position, velocity, acceleration and load, which affects the displacement fluctuation of the feed drive system. Therefore, in this paper, on the basis of the thrust force spectrum obtained by the Maxwell equation and the electromagnetic energy method, the multi-dimensional variation of each thrust harmonic is analyzed under different motion parameters. Then the model of the servo system is established oriented to the dynamic precision. The influence of the variation of the thrust force spectrum on the displacement fluctuation is discussed. At last the experiments are carried out to verify the theoretical analysis above. It can be found that the thrust harmonics show multi-dimensional spectrum characteristics under different motion parameters and loads, which should be considered to choose the motion parameters and optimize the servo control parameters in the high-speed and high-precision machine tools equipped with the linear motor feed drive system.
Nishimura, Mayu; Maurer, Daphne; Gao, Xiaoqing
2009-07-01
We explored differences in the mental representation of facial identity between 8-year-olds and adults. The 8-year-olds and adults made similarity judgments of a homogeneous set of faces (individual hair cues removed) using an "odd-man-out" paradigm. Multidimensional scaling (MDS) analyses were performed to represent perceived similarity of faces in a multidimensional space. Five dimensions accounted optimally for the judgments of both children and adults, with similar local clustering of faces. However, the fit of the MDS solutions was better for adults, in part because children's responses were more variable. More children relied predominantly on a single dimension, namely eye color, whereas adults appeared to use multiple dimensions for each judgment. The pattern of findings suggests that children's mental representation of faces has a structure similar to that of adults but that children's judgments are influenced less consistently by that overall structure.
Asymptotic Analysis Of The Total Least Squares ESPRIT Algorithm
NASA Astrophysics Data System (ADS)
Ottersten, B. E.; Viberg, M.; Kailath, T.
1989-11-01
This paper considers the problem of estimating the parameters of multiple narrowband signals arriving at an array of sensors. Modern approaches to this problem often involve costly procedures for calculating the estimates. The ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm was recently proposed as a means for obtaining accurate estimates without requiring a costly search of the parameter space. This method utilizes an array invariance to arrive at a computationally efficient multidimensional estimation procedure. Herein, the asymptotic distribution of the estimation error is derived for the Total Least Squares (TLS) version of ESPRIT. The Cramer-Rao Bound (CRB) for the ESPRIT problem formulation is also derived and found to coincide with the variance of the asymptotic distribution through numerical examples. The method is also compared to least squares ESPRIT and MUSIC as well as to the CRB for a calibrated array. Simulations indicate that the theoretic expressions can be used to accurately predict the performance of the algorithm.
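A compact numpy sketch of standard (least squares) ESPRIT for a uniform linear array, illustrating how the rotational invariance between two overlapping subarrays yields DOA estimates without a parameter-space search. The paper's analysis concerns the TLS variant; the element spacing, SNR, and source angles below are made-up test values.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 8, 500, 0.5                       # sensors, snapshots, spacing in wavelengths
angles = np.deg2rad([-10.0, 25.0])          # true DOAs (illustrative)

# Narrowband array snapshots: steering matrix times random source signals plus noise.
A = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(angles)))
S = rng.normal(size=(len(angles), N)) + 1j * rng.normal(size=(len(angles), N))
X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

# Signal subspace from the sample covariance.
R = X @ X.conj().T / N
eigval, eigvec = np.linalg.eigh(R)
Es = eigvec[:, -len(angles):]               # eigenvectors of the largest eigenvalues

# Rotational invariance between the two overlapping subarrays (LS solution).
Es1, Es2 = Es[:-1], Es[1:]
Psi, *_ = np.linalg.lstsq(Es1, Es2, rcond=None)
phases = np.angle(np.linalg.eigvals(Psi))
est = np.rad2deg(np.arcsin(phases / (2 * np.pi * d)))
print(np.sort(est))                         # should be close to [-10, 25]
```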
NASA Astrophysics Data System (ADS)
Del Pino, S.; Labourasse, E.; Morel, G.
2018-06-01
We present a multidimensional asymptotic preserving scheme for the approximation of a mixture of compressible flows. Fluids are modelled by two Euler systems of equations coupled with a friction term. The asymptotic preserving property is mandatory for this kind of model, to derive a scheme that behaves well in all regimes (i.e. whatever the friction parameter value is). The method we propose is defined in ALE coordinates, using a Lagrange plus remap approach. This imposes a multidimensional definition and analysis of the scheme.
High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.
ERIC Educational Resources Information Center
Hoijtink, Herbert; Molenaar, Ivo W.
1997-01-01
This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)
ERIC Educational Resources Information Center
Hagger, Martin S.; Biddle, Stuart J. H.; John Wang, C. K.
2005-01-01
This study tests the generalizability of the factor pattern, structural parameters, and latent mean structure of a multidimensional, hierarchical model of physical self-concept in adolescents across gender and grade. A children's version of the Physical Self-Perception Profile (C-PSPP) was administered to seventh-, eighth- and ninth-grade high…
Observed Score and True Score Equating Procedures for Multidimensional Item Response Theory
ERIC Educational Resources Information Center
Brossman, Bradley Grant
2010-01-01
The purpose of this research was to develop observed score and true score equating procedures to be used in conjunction with the Multidimensional Item Response Theory (MIRT) framework. Currently, MIRT scale linking procedures exist to place item parameter estimates and ability estimates on the same scale after separate calibrations are conducted.…
Similarity of the Multidimensional Space Defined by Parallel Forms of a Mathematics Test.
ERIC Educational Resources Information Center
Reckase, Mark D.; And Others
The purpose of the paper is to determine whether test forms of the Mathematics Usage Test (AAP Math) of the American College Testing Program are parallel in a multidimensional sense. The AAP Math is an achievement test of mathematics concepts acquired by high school students by the end of their third year. To determine the dimensionality of the…
ERIC Educational Resources Information Center
Nishimura, Mayu; Maurer, Daphne; Gao, Xiaoqing
2009-01-01
We explored differences in the mental representation of facial identity between 8-year-olds and adults. The 8-year-olds and adults made similarity judgments of a homogeneous set of faces (individual hair cues removed) using an "odd-man-out" paradigm. Multidimensional scaling (MDS) analyses were performed to represent perceived similarity of faces…
Influence of fusion dynamics on fission observables: A multidimensional analysis
NASA Astrophysics Data System (ADS)
Schmitt, C.; Mazurek, K.; Nadtochy, P. N.
2018-01-01
An attempt to unfold the respective influence of the fusion and fission stages on typical fission observables, namely the neutron prescission multiplicity, is proposed. A four-dimensional dynamical stochastic Langevin model is used to calculate the decay by fission of excited compound nuclei produced in a wide set of heavy-ion collisions. The comparison of the results from such a calculation and experimental data is discussed, guided by predictions of the dynamical deterministic HICOL code for the compound-nucleus formation time. While the dependence of the latter on the entrance-channel properties can straightforwardly explain some observations, a complex interplay between the various parameters of the reaction is found to occur in other cases. A multidimensional analysis of the respective role of these parameters, including entrance-channel asymmetry, bombarding energy, compound-nucleus fissility, angular momentum, and excitation energy, is proposed. It is shown that, depending on the size of the system, apparent inconsistencies may be deduced when projecting onto specific ordering parameters. The work suggests the possibility of delicate compensation effects in governing the measured fission observables, thereby highlighting the necessity of a multidimensional discussion.
Mass media influence spreading in social networks with community structure
NASA Astrophysics Data System (ADS)
Candia, Julián; Mazzitello, Karina I.
2008-07-01
We study an extension of Axelrod's model for social influence, in which cultural drift is represented as random perturbations, while mass media are introduced by means of an external field. In this scenario, we investigate how the modular structure of social networks affects the propagation of mass media messages across a society. The community structure of social networks is represented by coupled random networks, in which two random graphs are connected by intercommunity links. Considering inhomogeneous mass media fields, we study the conditions for successful message spreading and find a novel phase diagram in the multidimensional parameter space. These findings show that social modularity effects are of paramount importance for designing successful, cost-effective advertising campaigns.
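A minimal sketch of an Axelrod-type update step with an external mass-media field and cultural drift, run on a single random graph rather than the coupled-community networks of the paper. The number of features F, traits q, media strength B, drift rate, and neighbourhood size are illustrative assumptions.

```python
import random

random.seed(0)
F, Q = 5, 10                  # cultural features and traits per feature (illustrative)
N, B, DRIFT = 100, 0.1, 1e-4  # agents, mass-media strength, cultural-drift rate

media = [0] * F               # the externally imposed mass-media message
agents = [[random.randrange(Q) for _ in range(F)] for _ in range(N)]
# Random graph: each agent gets a few random neighbours.
neigh = {i: random.sample([j for j in range(N) if j != i], 4) for i in range(N)}

def overlap(a, b):
    return sum(x == y for x, y in zip(a, b)) / F

for _ in range(200000):
    i = random.randrange(N)
    # With probability B the agent interacts with the media field, otherwise with a neighbour.
    partner = media if random.random() < B else agents[random.choice(neigh[i])]
    if random.random() < overlap(agents[i], partner):      # homophily-weighted interaction
        diff = [f for f in range(F) if agents[i][f] != partner[f]]
        if diff:
            f = random.choice(diff)
            agents[i][f] = partner[f]
    if random.random() < DRIFT:                             # cultural drift as a random perturbation
        agents[i][random.randrange(F)] = random.randrange(Q)

print(sum(overlap(a, media) for a in agents) / N)           # mean adoption of the media message
```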
NASA Astrophysics Data System (ADS)
Dhingra, Shonali; Sandler, Roman; Rios, Rodrigo; Vuong, Cliff; Mehta, Mayank
All animals naturally perceive the abstract concept of space-time. A brain region called the Hippocampus is known to be important in creating these perceptions, but the underlying mechanisms are unknown. In our lab we employ several experimental and computational techniques from Physics to tackle this fundamental puzzle. Experimentally, we use ideas from Nanoscience and Materials Science to develop techniques to measure the activity of hippocampal neurons in freely-behaving animals. Computationally, we develop models to study neuronal activity patterns, which are point processes that are highly stochastic and multidimensional. We then apply these techniques to collect and analyze neuronal signals from rodents while they explore space in the real world or in virtual reality with various stimuli. Our findings show that under these conditions neuronal activity depends on various parameters, such as sensory cues, including visual and auditory ones, and behavioral cues, including linear and angular position and velocity. Further, neuronal networks create internally-generated rhythms, which influence perception of space and time. Taken together, these results further our understanding of how the brain develops a cognitive map of our surrounding space and keeps track of time.
The Cognitive Visualization System with the Dynamic Projection of Multidimensional Data
NASA Astrophysics Data System (ADS)
Gorohov, V.; Vitkovskiy, V.
2008-08-01
The phenomenon of cognitive computer graphics consists in generating special graphic representations on the screen that evoke mental images in the mind of the human operator. These images appear aesthetically attractive and thus stimulate the operator's visual imagination, which is closely related to the intuitive mechanisms of thinking. The essence of the cognitive effect lies in the fact that the viewer perceives the moving projection as a pseudo-three-dimensional object characterizing multidimensional data in multidimensional space. After a thorough qualitative study of the visual aspects of the multidimensional data with the aid of the enumerated algorithms, it becomes possible, using standard computer-graphics algorithms, to color individual objects or groups of objects of interest to the user. One can then return to the dynamic rotation of the data in order to check the user's intuitive ideas about clusters and connections in the multidimensional data. The methods of cognitive computer graphics can be developed further in combination with other information technologies, above all with packages for digital image processing and multidimensional statistical analysis.
The Behavioral Space of Zebrafish Locomotion and Its Neural Network Analog.
Girdhar, Kiran; Gruebele, Martin; Chemla, Yann R
2015-01-01
How simple is the underlying control mechanism for the complex locomotion of vertebrates? We explore this question for the swimming behavior of zebrafish larvae. A parameter-independent method, similar to that used in studies of worms and flies, is applied to analyze swimming movies of fish. The motion itself yields a natural set of fish "eigenshapes" as coordinates, rather than the experimenter imposing a choice of coordinates. Three eigenshape coordinates are sufficient to construct a quantitative "postural space" that captures >96% of the observed zebrafish locomotion. Viewed in postural space, swim bouts are manifested as trajectories consisting of cycles of shapes repeated in succession. To classify behavioral patterns quantitatively and to understand behavioral variations among an ensemble of fish, we construct a "behavioral space" using multi-dimensional scaling (MDS). This method turns each cycle of a trajectory into a single point in behavioral space, and clusters points based on behavioral similarity. Clustering analysis reveals three known behavioral patterns-scoots, turns, rests-but shows that these do not represent discrete states, but rather extremes of a continuum. The behavioral space not only classifies fish by their behavior but also distinguishes fish by age. With the insight into fish behavior from postural space and behavioral space, we construct a two-channel neural network model for fish locomotion, which produces strikingly similar postural space and behavioral space dynamics compared to real zebrafish.
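A schematic sketch of the two-step analysis, assuming the fish midline in each movie frame has already been reduced to a vector of segment angles: the "eigenshapes" come from a PCA of those angle vectors and the behavioral space from MDS over per-cycle summaries. All array shapes, the fixed-length cycle segmentation, and the feature choices here are assumptions.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Placeholder posture data: frames x midline segment angles (would come from tracked movies).
frames = rng.normal(size=(5000, 20))

# Step 1: "eigenshapes" = principal components of the posture angles.
X = frames - frames.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
eigenshapes = Vt[:3]                          # three shape modes capture most of the motion
postural = X @ eigenshapes.T                  # trajectory in 3D postural space

# Step 2: cut the trajectory into cycles (fixed-length chunks here, for illustration),
# summarize each cycle, and embed the cycles in a low-dimensional behavioral space.
cycles = postural[: (len(postural) // 100) * 100].reshape(-1, 100, 3)
features = np.concatenate([cycles.mean(axis=1),
                           cycles.std(axis=1),
                           np.abs(cycles).max(axis=1)], axis=1)
behavioral = MDS(n_components=2, random_state=0).fit_transform(features)
print(behavioral.shape)                       # one point per swim cycle
```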
An introduction to multidimensional measurement using Rasch models.
Briggs, Derek C; Wilson, Mark
2003-01-01
The act of constructing a measure requires a number of important assumptions. Principal among these assumptions is that the construct is unidimensional. In practice there are many instances when the assumption of unidimensionality does not hold, and where the application of a multidimensional measurement model is both technically appropriate and substantively advantageous. In this paper we illustrate the usefulness of a multidimensional approach to measurement with the Multidimensional Random Coefficient Multinomial Logit (MRCML) model, an extension of the unidimensional Rasch model. An empirical example is taken from a collection of embedded assessments administered to 541 students enrolled in middle school science classes with a hands-on science curriculum. Student achievement on these assessments is multidimensional in nature, but can also be treated as consecutive unidimensional estimates, or as is most common, as a composite unidimensional estimate. Structural parameters are estimated for each model using ConQuest, and model fit is compared. Student achievement in science is also compared across models. The multidimensional approach has the best fit to the data, and provides more reliable estimates of student achievement than under the consecutive unidimensional approach. Finally, at an interpretational level, the multidimensional approach may well provide richer information to the classroom teacher about the nature of student achievement.
Framework for analyzing ecological trait-based models in multidimensional niche spaces
NASA Astrophysics Data System (ADS)
Biancalani, Tommaso; DeVille, Lee; Goldenfeld, Nigel
2015-05-01
We develop a theoretical framework for analyzing ecological models with a multidimensional niche space. Our approach relies on the fact that ecological niches are described by sequences of symbols, which allows us to include multiple phenotypic traits. Ecological drivers, such as competitive exclusion, are modeled by introducing the Hamming distance between two sequences. We show that a suitable transform diagonalizes the community interaction matrix of these models, making it possible to predict the conditions for niche differentiation and, close to the instability onset, the asymptotically long time population distributions of niches. We exemplify our method using the Lotka-Volterra equations with an exponential competition kernel.
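A small numerical sketch of the setup, assuming niches are binary trait sequences and the competition kernel decays exponentially with Hamming distance. The diagonalization is done here numerically with numpy rather than via the analytic transform used in the paper, and the sequence length and decay constant are illustrative.

```python
import itertools
import numpy as np

L, alpha = 4, 0.8                      # trait-sequence length and kernel decay (illustrative)
niches = list(itertools.product([0, 1], repeat=L))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Community interaction matrix: competition strength decays with niche distance.
K = np.array([[np.exp(-alpha * hamming(a, b)) for b in niches] for a in niches])

# Numerical diagonalization; the eigenvalue spectrum controls which niche
# patterns become unstable, i.e. where niche differentiation sets in.
eigvals, eigvecs = np.linalg.eigh(K)
print(eigvals[:4], eigvals[-4:])
```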
Investigation of multidimensional control systems in the state space and wavelet medium
NASA Astrophysics Data System (ADS)
Fedosenkov, D. B.; Simikova, A. A.; Fedosenkov, B. A.
2018-05-01
The notions of "one-dimensional-point" and "multidimensional-point" automatic control systems are introduced. To demonstrate the joint use of approaches based on the concepts of state space and wavelet transforms, a method for optimal control in a state-space medium represented in the form of time-frequency representations (maps) is considered. The computer-aided control system is formed on the basis of the similarity transformation method, which makes it possible to dispense with reduced state-variable observers. One-dimensional material-flow signals produced by the primary transducers are converted by means of wavelet transforms into multidimensional, concentrated-at-a-point variables in the form of Cohen's-class time-frequency distributions. An algorithm for synthesizing a stationary controller for feeding processes is given. It is concluded that forming an optimal control law with time-frequency distributions available improves the quality of transient processes in the feeding subsystems and the mixing unit. The efficiency of the presented method is illustrated by an example of on-line registration of material flows in the multi-feed unit.
NASA Astrophysics Data System (ADS)
Shrestha, S. R.; Collow, T. W.; Rose, B.
2016-12-01
Scientific datasets are generated from various sources and platforms but they are typically produced either by earth observation systems or by modelling systems. These are widely used for monitoring, simulating, or analyzing measurements that are associated with physical, chemical, and biological phenomena over the ocean, atmosphere, or land. A significant subset of scientific datasets stores values directly as rasters or in a form that can be rasterized. This is where a value exists at every cell in a regular grid spanning the spatial extent of the dataset. Government agencies like NOAA, NASA, EPA, and USGS produce large volumes of near real-time, forecast, and historical data that drives climatological and meteorological studies, and underpins operations ranging from weather prediction to sea ice loss. Modern science is computationally intensive because of the availability of an enormous amount of scientific data, the adoption of data-driven analysis, and the need to share these datasets and research results with the public. ArcGIS as a platform is sophisticated and capable of handling such a complex domain. We'll discuss constructs and capabilities applicable to multidimensional gridded data that can be conceptualized as a multivariate space-time cube. Building on the concept of a two-dimensional raster, a typical multidimensional raster dataset could contain several "slices" within the same spatial extent. We will share a case from the NOAA Climate Forecast System Reanalysis (CFSR) multidimensional data as an example of how large collections of rasters can be efficiently organized and managed through a data model within a geodatabase called "Mosaic dataset" and dynamically transformed and analyzed using raster functions. A raster function is a lightweight, raster-valued transformation defined over a mixed set of raster and scalar input. That means, just like any tool, you can provide a raster function with input parameters. It enables dynamic processing of only the data that's being displayed on the screen or requested by an application. We will present the dynamic processing and analysis of CFSR data using chains of raster functions and share it as a dynamic multidimensional image service. This workflow and these capabilities can be easily applied to any scientific data formats that are supported in mosaic datasets.
Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.
Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal
2016-01-01
Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
Self-Organizing-Map Program for Analyzing Multivariate Data
NASA Technical Reports Server (NTRS)
Li, P. Peggy; Jacob, Joseph C.; Block, Gary L.; Braverman, Amy J.
2005-01-01
SOM_VIS is a computer program for analysis and display of multidimensional sets of Earth-image data typified by the data acquired by the Multi-angle Imaging Spectro-Radiometer [MISR (a spaceborne instrument)]. In SOM_VIS, an enhanced self-organizing-map (SOM) algorithm is first used to project a multidimensional set of data into a nonuniform three-dimensional lattice structure. The lattice structure is mapped to a color space to obtain a color map for an image. The Voronoi cell-refinement algorithm is used to map the SOM lattice structure to various levels of color resolution. The final result is a false-color image in which similar colors represent similar characteristics across all its data dimensions. SOM_VIS provides a control panel for selection of a subset of suitably preprocessed MISR radiance data, and a control panel for choosing parameters to run SOM training. SOM_VIS also includes a component for displaying the false-color SOM image, a color map for the trained SOM lattice, a plot showing an original input vector in 36 dimensions of a selected pixel from the SOM image, the SOM vector that represents the input vector, and the Euclidean distance between the two vectors.
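A minimal self-organizing-map training loop in the spirit of the first stage described above, mapping multidimensional vectors onto a small lattice. The lattice size, learning-rate and neighbourhood schedules, and toy data are assumptions; SOM_VIS's enhanced SOM algorithm, Voronoi cell refinement, and color mapping are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((2000, 36))                 # stand-in for 36-band radiance vectors

rows, cols, n_iter = 8, 8, 10000              # small SOM lattice (illustrative)
weights = rng.random((rows, cols, data.shape[1]))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

for t in range(n_iter):
    frac = t / n_iter
    lr = 0.5 * (1 - frac)                       # decaying learning rate
    radius = max(1.0, (rows / 2) * (1 - frac))  # shrinking neighbourhood
    x = data[rng.integers(len(data))]
    # Best-matching unit: lattice node whose weight vector is closest to the sample.
    d = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # Pull the BMU and its lattice neighbours toward the sample.
    lattice_dist2 = ((grid - np.array(bmu)) ** 2).sum(-1)
    h = np.exp(-lattice_dist2 / (2 * radius ** 2))[..., None]
    weights += lr * h * (x - weights)

# Each data vector can now be assigned to its BMU, giving the lattice projection.
print(weights.shape)
```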
ERIC Educational Resources Information Center
Haberman, Shelby J.; von Davier, Matthias; Lee, Yi-Hsuan
2008-01-01
Multidimensional item response models can be based on multivariate normal ability distributions or on multivariate polytomous ability distributions. For the case of simple structure in which each item corresponds to a unique dimension of the ability vector, some applications of the two-parameter logistic model to empirical data are employed to…
ERIC Educational Resources Information Center
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
Intelligent Space Tube Optimization for speeding ground water remedial design.
Kalwij, Ineke M; Peralta, Richard C
2008-01-01
An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space tube data, and optimizes using an advanced genetic algorithm (AGA) with NSS. Stage 1 speeds evaluating assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include space tube radius and number of strategies used to train NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the NSS-AGA stage 1 optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable for many heuristic optimization settings in which the numerical simulator is computationally intensive, and one would like to reduce that burden.
Self-consistent pseudopotential calculation of the bulk properties of Mo and W
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zunger, A.; Cohen, M.L.
The bulk properties of Mo and W are calculated using the recently developed momentum-space approach for calculating total energy via a nonlocal pseudopotential. This approach avoids any shape approximation to the variational charge density (e.g., muffin tins), is fully self-consistent, and replaces the multidimensional and multicenter integrals akin to real-space representations by simple and readily convergent reciprocal-space lattice sums. We use first-principles atomic pseudopotentials which have been previously demonstrated to yield band structures and charge densities for both semiconductors and transition metals in good agreement with experiment and all-electron calculations. Using a mixed-basis representation for the crystalline wave function, we are able to accurately reproduce both the localized and itinerant features of the electronic states in these systems. These first-principles pseudopotentials, together with the self-consistent density-functional representation for both the exchange and the correlation screening, yields agreement with experiment of 0.2% in the lattice parameters, 2% and 11% for the binding energies of Mo and W, respectively, and 12% and 7% for the bulk moduli of Mo and W, respectively.
NASA Astrophysics Data System (ADS)
Du, Peijun; Tan, Kun; Xing, Xiaoshi
2010-12-01
Combining Support Vector Machine (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM is faced with the bottleneck of kernel parameter selection, which results in time-consuming computation and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are also proposed in this paper. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. In contrast with some traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), and an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in Reproducing Kernel Hilbert Space markedly improves classification accuracy.
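A minimal scikit-learn sketch of an SVM with a custom wavelet kernel passed as a callable. The specific kernel below is the commonly used Morlet-type wavelet kernel with dilation parameter a, standing in for the Coiflet kernel reported to perform best in the paper, and the toy two-class spectral data are purely illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def wavelet_kernel(a=1.0):
    """Morlet-type wavelet kernel:
    K(x, y) = prod_i cos(1.75 (x_i - y_i) / a) * exp(-(x_i - y_i)^2 / (2 a^2))."""
    def k(X, Y):
        diff = X[:, None, :] - Y[None, :, :]
        return np.prod(np.cos(1.75 * diff / a) * np.exp(-diff ** 2 / (2 * a ** 2)), axis=-1)
    return k

rng = np.random.default_rng(0)
# Toy "pixels": two spectral classes in a handful of bands.
X = np.vstack([rng.normal(0.2, 0.05, (100, 6)), rng.normal(0.6, 0.05, (100, 6))])
y = np.repeat([0, 1], 100)

clf = SVC(kernel=wavelet_kernel(a=0.5), C=10.0).fit(X, y)
print(clf.score(X, y))
```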
Shen, Minxue; Cui, Yuanwu; Hu, Ming; Xu, Linyong
2017-01-13
The study aimed to validate a scale to assess the severity of the "Yin deficiency, intestine heat" pattern of functional constipation based on modern test theory. Pooled longitudinal data of 237 patients with the "Yin deficiency, intestine heat" pattern of constipation from a prospective cohort study were used to validate the scale. Exploratory factor analysis was used to examine the common factors of items. A multidimensional item response model was used to assess the scale in the presence of multidimensionality. The Cronbach's alpha ranged from 0.79 to 0.89, and the split-half reliability ranged from 0.67 to 0.79 at different measurements. Exploratory factor analysis identified two common factors, and all items had cross factor loadings. The bidimensional model had better goodness of fit than the unidimensional model. The multidimensional item response model showed that all items had moderate to high discrimination parameters. Parameters indicated that the first latent trait signified intestine heat, while the second trait characterized Yin deficiency. The information function showed that items demonstrated the highest discrimination power among patients with moderate to high levels of disease severity. Multidimensional item response theory provides a useful and rational approach for validating scales that assess the severity of patterns in traditional Chinese medicine.
A New Time-Space Accurate Scheme for Hyperbolic Problems. 1; Quasi-Explicit Case
NASA Technical Reports Server (NTRS)
Sidilkover, David
1998-01-01
This paper presents a new discretization scheme for hyperbolic systems of conservations laws. It satisfies the TVD property and relies on the new high-resolution mechanism which is compatible with the genuinely multidimensional approach proposed recently. This work can be regarded as a first step towards extending the genuinely multidimensional approach to unsteady problems. Discontinuity capturing capabilities and accuracy of the scheme are verified by a set of numerical tests.
Inferring pathological states in cortical neuron microcircuits.
Rydzewski, Jakub; Nowak, Wieslaw; Nicosia, Giuseppe
2015-12-07
The brain activity is to a large extent determined by states of neural cortex microcircuits. Unfortunately, the accuracy of results from mathematical models of neural circuits is often biased by the presence of uncertainties in the underlying experimental data. Moreover, due to problems with uncertainty identification in a multidimensional parameter space, it is almost impossible to classify states of the neural cortex which correspond to a particular set of the parameters. Here, we develop a complete methodology for determining uncertainties and a novel protocol for classifying all states in any neuroinformatic model. Further, we test this protocol on the mathematical, nonlinear model of such a microcircuit developed by Giugliano et al. (2008) and applied in the experimental data analysis of Huntington's disease. Up to now, the link between parameter domains in the mathematical model of Huntington's disease and the pathological states in cortical microcircuits has remained unclear. In this paper we precisely identify all the uncertainties, the most crucial input parameters and the domains that drive the system into an unhealthy state. The scheme proposed here is general and can be easily applied to other mathematical models of biological phenomena. Copyright © 2015 Elsevier Ltd. All rights reserved.
Influence of Multidimensionality on Convergence of Sampling in Protein Simulation
NASA Astrophysics Data System (ADS)
Metsugi, Shoichi
2005-06-01
We study the problem of convergence of sampling in protein simulation originating in the multidimensionality of protein’s conformational space. Since several important physical quantities are given by second moments of dynamical variables, we attempt to obtain the time of simulation necessary for their sufficient convergence. We perform a molecular dynamics simulation of a protein and the subsequent principal component (PC) analysis as a function of simulation time T. As T increases, PC vectors with smaller amplitude of variations are identified and their amplitudes are equilibrated before identifying and equilibrating vectors with larger amplitude of variations. This sequential identification and equilibration mechanism makes protein simulation a useful method although it has an intrinsic multidimensional nature.
Lichtenstein, James L. L.; Wright, Colin M; McEwen, Brendan; Pinter-Wollman, Noa; Pruitt, Jonathan N.
2018-01-01
Individual animals differ consistently in their behaviour, thus impacting a wide variety of ecological outcomes. Recent advances in animal personality research have established the ecological importance of the multidimensional behavioural volume occupied by individuals and by multispecies communities. Here, we examine the degree to which the multidimensional behavioural volume of a group predicts the outcome of both intra- and interspecific interactions. In particular, we test the hypothesis that a population of conspecifics will experience low intraspecific competition when the population occupies a large volume in behavioural space. We further hypothesize that populations of interacting species will exhibit greater interspecific competition when one or both species occupy large volumes in behavioural space. We evaluate these hypotheses by studying groups of katydids (Scudderia nymphs) and froghoppers (Philaenus spumarius), which compete for food and space on their shared host plant, Solidago canadensis. We found that individuals in single-species groups of katydids positioned themselves closer to one another, suggesting reduced competition, when groups occupied a large behavioural volume. When both species were placed together, we found that the survival of froghoppers was greatest when both froghoppers and katydids occupied a small volume in behavioural space, particularly at high froghopper densities. These results suggest that groups that occupy large behavioural volumes can have low intraspecific competition but high interspecific competition. Thus, behavioural hypervolumes appear to have ecological consequences at both the level of the population and the community and may help to predict the intensity of competition both within and across species. PMID:29681647
Parsimony and goodness-of-fit in multi-dimensional NMR inversion
NASA Astrophysics Data System (ADS)
Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos
2017-01-01
Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used for study of molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve a high-dimensional measurement dataset with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inverse algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. The interpretation of such variability of multiple solutions and selection of the most appropriate solution could be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy to interpret and unique NMR distribution with the finite number of the principal parameter values, we introduce a new method for NMR inversion. The method is constructed based on the trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using the forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
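A simplified one-dimensional sketch of the parsimony idea: candidate relaxation times form a dictionary of exponential decays, components are added greedily (forward stepwise), amplitudes are fitted by nonnegative least squares, and an AIC-style trade-off decides when to stop. The synthetic data, time grid, and AIC form below are assumptions; the paper's method operates on multi-dimensional NMR kernels rather than this 1D decay.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
t = np.linspace(0.001, 1.0, 200)                     # acquisition times (s), illustrative
true = 0.7 * np.exp(-t / 0.05) + 0.3 * np.exp(-t / 0.4)
y = true + 0.01 * rng.normal(size=t.size)

T2_grid = np.logspace(-3, 1, 60)                     # candidate relaxation times
D = np.exp(-t[:, None] / T2_grid[None, :])           # dictionary of decay curves

def aic(residual_ss, n, k):
    return n * np.log(residual_ss / n) + 2 * k       # assumed AIC form for Gaussian errors

selected, best_aic = [], np.inf
while True:
    best_candidate = None
    for j in range(len(T2_grid)):
        if j in selected:
            continue
        cols = selected + [j]
        amp, rnorm = nnls(D[:, cols], y)             # nonnegative amplitudes for chosen columns
        score = aic(rnorm ** 2, len(y), len(cols))
        if score < best_aic:
            best_aic, best_candidate = score, j
    if best_candidate is None:                       # adding any component worsens AIC: stop
        break
    selected.append(best_candidate)

print("selected T2 values:", T2_grid[selected])
```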
ERIC Educational Resources Information Center
Papa, Frank J.; And Others
1997-01-01
Chest pain was identified as a specific medical problem space, and disease classes were modeled to define it. Results from a test taken by 628 medical residents indicate a second-order factor structure that suggests that chest pain is a multidimensional problem space. Implications for medical education are discussed. (SLD)
Multidimensional poverty and child survival in India.
Mohanty, Sanjay K
2011-01-01
Though the concept of multidimensional poverty has been acknowledged across the disciplines (among economists, public health professionals, development thinkers, social scientists, policy makers and international organizations) and included in the development agenda, its measurement and application are still limited. OBJECTIVES AND METHODOLOGY: Using unit data from the National Family and Health Survey 3, India, this paper measures poverty in multidimensional space and examines the linkages of multidimensional poverty with child survival. Multidimensional poverty is measured in the dimensions of knowledge, health and wealth, and child survival is measured with respect to infant mortality and under-five mortality. Descriptive statistics, principal component analyses and life table methods are used in the analyses. The estimates of multidimensional poverty are robust and the inter-state differentials are large. While the infant mortality rate and under-five mortality rate are disproportionately higher among the abject poor compared to the non-poor, there are no significant differences in child survival among the educationally, economically and health poor at the national level. State patterns in child survival among the educationally, economically and health poor are mixed. Use of multidimensional poverty measures helps to identify the abject poor who are unlikely to come out of the poverty trap. Child survival is significantly lower among the abject poor compared to the moderate poor and non-poor. We urge that the concept of multiple deprivations be popularized in research and programmes so as to reduce poverty and inequality in the population.
Aspects of the "Design Space" in high pressure liquid chromatography method development.
Molnár, I; Rieger, H-J; Monks, K E
2010-05-07
The present paper describes a multifactorial optimization of 4 critical HPLC method parameters, i.e. gradient time (t(G)), temperature (T), pH and ternary composition (B(1):B(2)) based on 36 experiments. The effect of these experimental variables on critical resolution and selectivity was carried out in such a way as to systematically vary all four factors simultaneously. The basic element is a gradient time-temperature (t(G)-T) plane, which is repeated at three different pH's of the eluent A and at three different ternary compositions of eluent B between methanol and acetonitrile. The so-defined volume enables the investigation of the critical resolution for a part of the Design Space of a given sample. Further improvement of the analysis time, with conservation of the previously optimized selectivity, was possible by reducing the gradient time and increasing the flow rate. Multidimensional robust regions were successfully defined and graphically depicted. Copyright (c) 2010 Elsevier B.V. All rights reserved.
A simple respirogram-based approach for the management of effluent from an activated sludge system.
Li, Zhi-Hua; Zhu, Yuan-Mo; Yang, Cheng-Jian; Zhang, Tian-Yu; Yu, Han-Qing
2018-08-01
Managing wastewater treatment plant (WWTP) based on respirometric analysis is a new and promising field. In this study, a multi-dimensional respirogram space was constructed, and an important index R_es/t (ratio of in-situ respiration rate to maximum respiration rate) was derived as an alarm signal for effluent quality control. A smaller R_es/t value suggests better effluent. The critical R'_es/t value used for determining whether the effluent meets the regulation depends on operational conditions, which were characterized by temperature and the biomass ratio of heterotrophs to autotrophs. With given operational conditions, the critical R'_es/t value can be calculated from the respirogram space and the effluent conditions required by the discharge regulation, with no requirement for calibration of parameters or any additional measurements. Since it is simple, easy to use, and can be readily implemented online, this approach holds great promise for applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Siapkaras, A.
1977-01-01
A computational method to deal with the multidimensional nature of tracking and/or monitoring tasks is developed. Operator centered variables, including the operator's perception of the task, are considered. Matrix ratings are defined based on multidimensional scaling techniques and multivariate analysis. The method consists of two distinct steps: (1) to determine the mathematical space of subjective judgements of a certain individual (or group of evaluators) for a given set of tasks and experimental conditionings; and (2) to relate this space with respect to both the task variables and the objective performance criteria used. Results for a variety of second-order trackings with smoothed noise-driven inputs indicate that: (1) many of the internally perceived task variables form a nonorthogonal set; and (2) the structure of the subjective space varies among groups of individuals according to the degree of familiarity they have with such tasks.
Neutrino oscillation parameter sampling with MonteCUBES
NASA Astrophysics Data System (ADS)
Blennow, Mattias; Fernandez-Martinez, Enrique
2010-01-01
We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling.
Program summary
Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator)
Catalogue identifier: AEFJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence
No. of lines in distributed program, including test data, etc.: 69 634
No. of bytes in distributed program, including test data, etc.: 3 980 776
Distribution format: tar.gz
Programming language: C
Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed
Operating system: 32 bit and 64 bit Linux
RAM: Typically a few MBs
Classification: 11.1
External routines: GLoBES [1,2] and routines/libraries used by GLoBES
Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439
Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, these new physics imply high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, such as those used in GLoBES [1,2].
Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space and has a complexity which does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software makes sure that the experimental definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES.
Additional comments: A Matlab GUI for interpretation of results is included in the distribution.
Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours.
References:
[1] P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333.
[2] P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187.
[3] S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
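A generic Metropolis-Hastings sketch of the kind of Markov Chain Monte Carlo sampling MonteCUBES performs, written against a toy two-parameter chi-square surface rather than a GLoBES experiment definition. The target function, step sizes, and parameter names are placeholders, not part of the MonteCUBES API.

```python
import numpy as np

rng = np.random.default_rng(0)

def chi2(theta):
    """Toy chi-square surface over two oscillation-like parameters
    (placeholder for the experiment likelihood GLoBES would provide)."""
    a, b = theta
    return (a - 0.5) ** 2 / 0.01 + (b - 2.4) ** 2 / 0.04 + 5.0 * a * (b - 2.4)

def metropolis(theta0, steps, n_samples):
    theta, c = np.array(theta0, float), chi2(theta0)
    chain = []
    for _ in range(n_samples):
        proposal = theta + steps * rng.normal(size=theta.size)
        c_new = chi2(proposal)
        # Accept with probability exp(-(chi2_new - chi2_old) / 2), i.e. the likelihood ratio.
        if np.log(rng.random()) < -(c_new - c) / 2.0:
            theta, c = proposal, c_new
        chain.append(theta.copy())
    return np.array(chain)

chain = metropolis([0.4, 2.3], steps=np.array([0.02, 0.05]), n_samples=50000)
print(chain.mean(axis=0), chain.std(axis=0))        # posterior means and spreads
```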
Two dimensional model for coherent synchrotron radiation
NASA Astrophysics Data System (ADS)
Huang, Chengkun; Kwan, Thomas J. T.; Carlsten, Bruce E.
2013-01-01
Understanding coherent synchrotron radiation (CSR) effects in a bunch compressor requires an accurate model accounting for the realistic beam shape and parameters. We extend the well-known 1D CSR analytic model into two dimensions and develop a simple numerical model based on the Liénard-Wiechert formula for the CSR field of a coasting beam. This CSR numerical model includes the 2D spatial dependence of the field in the bending plane and is accurate for arbitrary beam energy. It also removes the singularity in the space-charge field calculation present in a 1D model. Good agreement is obtained with the 1D CSR analytic result for free-electron laser (FEL)-related beam parameters, but the 2D model also gives more accurate results for low-energy/large-spot-size beams and for off-axis/transient fields. This 2D CSR model can be used to understand the limitations of various 1D models and to benchmark fully electromagnetic multidimensional particle-in-cell simulations for self-consistent CSR modeling.
Acoustic source localization in mixed field using spherical microphone arrays
NASA Astrophysics Data System (ADS)
Huang, Qinghua; Wang, Tong
2014-12-01
Spherical microphone arrays have recently been used for source localization in three-dimensional space. In this paper, a two-stage algorithm is developed to localize mixed far-field and near-field acoustic sources in a free-field environment. In the first stage, an array signal model is constructed in the spherical harmonics domain. The recurrent relation of spherical harmonics is independent of the far-field and near-field mode strengths. It is therefore used to develop a spherical estimation of signal parameters via rotational invariance technique (ESPRIT)-like approach to estimate the directions of arrival (DOAs) of both far-field and near-field sources. In the second stage, based on the estimated DOAs, a simple one-dimensional MUSIC spectrum is exploited to distinguish far-field from near-field sources and to estimate the ranges of the near-field sources. The proposed algorithm avoids multidimensional search and parameter pairing. Simulation results demonstrate good performance in localizing far-field, near-field, and mixed-field sources.
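The MUSIC search used in the second stage can be illustrated in a much simpler setting: a generic narrowband one-dimensional MUSIC pseudospectrum on a uniform linear array. This sketch is not the spherical-harmonics formulation of the paper, and the array and source parameters are made up:

    # Generic 1-D MUSIC illustration (uniform linear array, narrowband sources).
    import numpy as np
    from scipy.signal import find_peaks

    M, d = 8, 0.5                                  # sensors, spacing in wavelengths
    true_doas = np.deg2rad([-20.0, 35.0])
    snapshots = 200
    rng = np.random.default_rng(1)

    def steering(theta):
        return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

    A = np.column_stack([steering(t) for t in true_doas])
    S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
    N = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
    X = A @ S + N

    R = X @ X.conj().T / snapshots                 # sample covariance matrix
    _, eigvec = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = eigvec[:, : M - 2]                        # noise subspace (M minus no. of sources)

    grid = np.deg2rad(np.linspace(-90, 90, 721))
    spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
    peaks, _ = find_peaks(spectrum)
    best = peaks[np.argsort(spectrum[peaks])[-2:]]
    print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[best])))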
K Jawed, M; Hadjiconstantinou, N G; Parks, D M; Reis, P M
2018-03-14
We develop and perform continuum mechanics simulations of carbon nanotube (CNT) deployment directed by a combination of surface topography and rarefied gas flow. We employ the discrete elastic rods method to model the deposition of CNT as a slender elastic rod that evolves in time under two external forces, namely, van der Waals (vdW) and aerodynamic drag. Our results confirm that this self-assembly process is analogous to a previously studied macroscopic system, the "elastic sewing machine", where an elastic rod deployed onto a moving substrate forms nonlinear patterns. In the case of CNTs, the complex patterns observed on the substrate, such as coils and serpentines, result from an intricate interplay between van der Waals attraction, rarefied aerodynamics, and elastic bending. We systematically sweep through the multidimensional parameter space to quantify the pattern morphology as a function of the relevant material, flow, and geometric parameters. Our findings are in good agreement with available experimental data. Scaling analysis involving the relevant forces helps rationalize our observations.
Membership determination of open clusters based on a spectral clustering method
NASA Astrophysics Data System (ADS)
Gao, Xin-Hua
2018-06-01
We present a spectral clustering (SC) method aimed at segregating reliable members of open clusters in multi-dimensional space. The SC method is a non-parametric clustering technique that performs cluster division using eigenvectors of the similarity matrix; no prior knowledge of the clusters is required. This method is more flexible in dealing with multi-dimensional data compared to other methods of membership determination. We use this method to segregate the cluster members of five open clusters (Hyades, Coma Ber, Pleiades, Praesepe, and NGC 188) in five-dimensional space; fairly clean cluster members are obtained. We find that the SC method can capture a small number of cluster members (weak signal) from a large number of field stars (heavy noise). Based on these cluster members, we compute the mean proper motions and distances for the Hyades, Coma Ber, Pleiades, and Praesepe clusters, and our results are in general quite consistent with the results derived by other authors. The test results indicate that the SC method is highly suitable for segregating cluster members of open clusters based on high-precision multi-dimensional astrometric data such as Gaia data.
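As a rough illustration of the approach, using synthetic data rather than real astrometry, a compact cluster can be separated from a much larger field-star background with an off-the-shelf spectral clustering routine:

    # Sketch with hypothetical data, not the paper's pipeline: spectral clustering
    # of "stars" in a 5-D space (e.g. positions, proper motions, parallax).
    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    field = rng.normal(0.0, 3.0, size=(500, 5))                          # heavy noise: field stars
    members = rng.normal([1.5, -1.0, 0.8, 0.5, 2.0], 0.15, size=(60, 5)) # weak signal: cluster
    X = StandardScaler().fit_transform(np.vstack([field, members]))

    labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                                n_neighbors=15, assign_labels="kmeans",
                                random_state=0).fit_predict(X)
    # Take the smaller group as the candidate cluster membership
    cluster_label = np.argmin(np.bincount(labels))
    print("candidate members:", np.sum(labels == cluster_label))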
Multidimensional generalized-ensemble algorithms for complex systems.
Mitsutake, Ayori; Okamoto, Yuko
2009-06-07
We give general formulations of the multidimensional multicanonical algorithm, simulated tempering, and replica-exchange method. We generalize the original potential energy function E(0) by adding any physical quantity V of interest as a new energy term. These multidimensional generalized-ensemble algorithms then perform a random walk not only in E(0) space but also in V space. Among the three algorithms, the replica-exchange method is the easiest to perform because the weight factor is just a product of regular Boltzmann-like factors, while the weight factors for the multicanonical algorithm and simulated tempering are not a priori known. We give a simple procedure for obtaining the weight factors for these two latter algorithms, which uses a short replica-exchange simulation and the multiple-histogram reweighting techniques. As an example of applications of these algorithms, we have performed a two-dimensional replica-exchange simulation and a two-dimensional simulated-tempering simulation using an alpha-helical peptide system. From these simulations, we study the helix-coil transitions of the peptide in gas phase and in aqueous solution.
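The exchange step at the heart of the replica-exchange method can be sketched with a one-dimensional toy potential; the multidimensional variant described above additionally performs the random walk in the extra energy-like terms V, which is omitted here:

    # Toy temperature replica exchange (parallel tempering) on a 1-D double well.
    import numpy as np

    def energy(x):
        return (x**2 - 1.0) ** 2                    # minima at x = +1 and x = -1

    betas = np.array([0.2, 0.5, 1.0, 2.0, 5.0])     # inverse temperatures
    x = np.zeros(len(betas))
    rng = np.random.default_rng(0)

    for step in range(20000):
        # Local Metropolis move in each replica
        for i, beta in enumerate(betas):
            prop = x[i] + 0.3 * rng.standard_normal()
            if np.log(rng.random()) < -beta * (energy(prop) - energy(x[i])):
                x[i] = prop
        # Attempt an exchange between a random pair of neighbouring replicas:
        # accept with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)])
        i = rng.integers(len(betas) - 1)
        delta = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
        if np.log(rng.random()) < delta:
            x[i], x[i + 1] = x[i + 1], x[i]

    print("final replica positions:", x)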
Improved surface-wave retrieval from ambient seismic noise by multi-dimensional deconvolution
NASA Astrophysics Data System (ADS)
Wapenaar, Kees; Ruigrok, Elmer; van der Neut, Joost; Draganov, Deyan
2011-01-01
The methodology of surface-wave retrieval from ambient seismic noise by crosscorrelation relies on the assumption that the noise field is equipartitioned. Deviations from equipartitioning degrade the accuracy of the retrieved surface-wave Green's function. A point-spread function, derived from the same ambient noise field, quantifies the smearing in space and time of the virtual source of the Green's function. By multidimensionally deconvolving the retrieved Green's function with the point-spread function, the virtual source becomes better focused in space and time, and hence the accuracy of the retrieved surface-wave Green's function may improve significantly. We illustrate this with a numerical example and discuss the advantages and limitations of this new methodology.
A transformed path integral approach for solution of the Fokker-Planck equation
NASA Astrophysics Data System (ADS)
Subramaniam, Gnana M.; Vedula, Prakash
2017-10-01
A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in the transformed space and for the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed, based on properties of the underlying stochastic differential equation. Owing to the choice of transformation considered, the proposed method maps a fixed grid in the transformed space to a dynamically adaptive grid in the original state space. The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed-grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of the PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions, as well as of the diffusion coefficient, on the error in the PDF are also characterized.
NASA Technical Reports Server (NTRS)
Tullis, Thomas S.; Bied Sperling, Barbra; Steinberg, A. L.
1986-01-01
Before an optimum layout of the facilities for the proposed Space Station can be designed, it is necessary to understand the functions that will be performed by the Space Station crew and the relationships among those functions. Five criteria for assessing functional relationships were identified. For each of these criteria, a matrix representing the degree of association of all pairs of functions was developed. The key to making inferences about the layout of the Space Station from these matrices was the use of multidimensional scaling (MDS). Applying MDS to these matrices resulted in spatial configurations of the crew functions in which smaller distances in the MDS configuration reflected closer associations. An MDS analysis of a composite matrix formed by combining the five individual matrices resulted in two dimensions that describe the configuration: a 'private-public' dimension and a 'group-individual' dimension. Seven specific recommendations for Space Station layout were derived from analyses of the MDS configurations. Although these techniques have been applied to the design of the Space Station, they can be applied to the design of any facility where people live or work.
ERIC Educational Resources Information Center
Finch, Holmes
2010-01-01
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context is one that has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Functional identification of spike-processing neural circuits.
Lazar, Aurel A; Slutskiy, Yevgeniy B
2014-02-01
We introduce a novel approach for a complete functional identification of biophysical spike-processing neural circuits. The circuits considered accept multidimensional spike trains as their input and comprise a multitude of temporal receptive fields and conductance-based models of action potential generation. Each temporal receptive field describes the spatiotemporal contribution of all synapses between any two neurons and incorporates the (passive) processing carried out by the dendritic tree. The aggregate dendritic current produced by a multitude of temporal receptive fields is encoded into a sequence of action potentials by a spike generator modeled as a nonlinear dynamical system. Our approach builds on the observation that during any experiment, an entire neural circuit, including its receptive fields and biophysical spike generators, is projected onto the space of stimuli used to identify the circuit. Employing the reproducing kernel Hilbert space (RKHS) of trigonometric polynomials to describe input stimuli, we quantitatively describe the relationship between underlying circuit parameters and their projections. We also derive experimental conditions under which these projections converge to the true parameters. In doing so, we achieve the mathematical tractability needed to characterize the biophysical spike generator and identify the multitude of receptive fields. The algorithms obviate the need to repeat experiments in order to compute the neurons' rate of response, rendering our methodology of interest to both experimental and theoretical neuroscientists.
Experimental and Computational Analysis of Unidirectional Flow Through Stirling Engine Heater Head
NASA Technical Reports Server (NTRS)
Wilson, Scott D.; Dyson, Rodger W.; Tew, Roy C.; Demko, Rikako
2006-01-01
A high efficiency Stirling Radioisotope Generator (SRG) is being developed for possible use in long-duration space science missions. NASA's advanced technology goals for next generation Stirling convertors include increasing the Carnot efficiency and percent of Carnot efficiency. To help achieve these goals, a multi-dimensional Computational Fluid Dynamics (CFD) code is being developed to numerically model unsteady fluid flow and heat transfer phenomena of the oscillating working gas inside Stirling convertors. In the absence of transient pressure drop data for the zero-mean oscillating multi-dimensional flows present in the Technology Demonstration Convertors on test at NASA Glenn Research Center, unidirectional flow pressure drop test data is used to compare against 2D and 3D computational solutions. This study focuses on tracking pressure drop and mass flow rate data for unidirectional flow through a Stirling heater head using a commercial CFD code (CFD-ACE). The commercial CFD code uses a porous-media model which depends on the permeability and the inertial coefficient present in the linear and nonlinear terms of the Darcy-Forchheimer equation. Permeability and inertial coefficient were calculated from unidirectional flow test data. CFD simulations of the unidirectional flow test were validated using the porous-media model input parameters, which increased simulation accuracy by 14 percent on average.
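For reference, the Darcy-Forchheimer pressure-drop relation referred to above is commonly written in the following form; the notation here is generic rather than transcribed from the report, with the permeability K entering the linear term and the inertial coefficient c_F the nonlinear term:

    % Common form of the Darcy-Forchheimer relation (generic notation):
    % \mu = gas viscosity, K = permeability, c_F = inertial (Forchheimer)
    % coefficient, \rho = gas density, u = superficial velocity.
    \[
      -\frac{\Delta p}{L} \;=\;
      \underbrace{\frac{\mu}{K}\,u}_{\text{linear (Darcy) term}}
      \;+\;
      \underbrace{c_F\,\rho\,u^{2}}_{\text{nonlinear (Forchheimer) term}}
    \]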
Multidimensional stability of traveling fronts in combustion and non-KPP monostable equations
NASA Astrophysics Data System (ADS)
Bu, Zhen-Hui; Wang, Zhi-Cheng
2018-02-01
This paper is concerned with the multidimensional stability of traveling fronts for the combustion and non-KPP monostable equations. Our study contains two parts: in the first part, we first show that the two-dimensional V-shaped traveling fronts are asymptotically stable in R^{n+2} with n≥1 under any (possibly large) initial perturbations that decay at space infinity, and then, we prove that there exists a solution that oscillates permanently between two V-shaped traveling fronts, which implies that even very small perturbations to the V-shaped traveling front can lead to permanent oscillation. In the second part, we establish the multidimensional stability of planar traveling front in R^{n+1} with n≥1.
Knowledge Space: A Conceptual Basis for the Organization of Knowledge
ERIC Educational Resources Information Center
Meincke, Peter P. M.; Atherton, Pauline
1976-01-01
Proposes a new conceptual basis for visualizing the organization of information, or knowledge, which differentiates between the concept "vectors" for a field of knowledge represented in a multidimensional space, and the state "vectors" for a person based on his understanding of these concepts, and the representational…
NASA Astrophysics Data System (ADS)
Kozlovskaya, E. N.; Doroshenko, I. Yu.; Pogorelov, V. E.; Vaskivskyi, Ye. V.; Pitsevich, G. A.
2018-01-01
Previously calculated multidimensional potential-energy surfaces of the MeOH monomer and dimer, water dimer, malonaldehyde, formic acid dimer, free pyridine-N-oxide/trichloroacetic acid complex, and protonated water dimer were analyzed. The corresponding harmonic potential-energy surfaces near the global minima were constructed for series of clusters and complexes with hydrogen bonds of different strengths based on the behavior of the calculated multidimensional potential-energy surfaces. This enabled the introduction of an obvious anharmonicity parameter for the calculated potential-energy surfaces. The anharmonicity parameter was analyzed as functions of the size of the analyzed area near the energy minimum, the number of points over which energies were compared, and the dimensionality of the solved vibrational problem. Anharmonicity parameters for potential-energy surfaces in complexes with strong, medium, and weak H-bonds were calculated under identical conditions. The obtained anharmonicity parameters were compared with the corresponding diagonal anharmonicity constants for stretching vibrations of the bridging protons and the lengths of the hydrogen bridges.
Multidimensional Poverty and Child Survival in India
Mohanty, Sanjay K.
2011-01-01
Background: Though the concept of multidimensional poverty has been acknowledged across disciplines (among economists, public health professionals, development thinkers, social scientists, policy makers and international organizations) and included in the development agenda, its measurement and application are still limited. Objectives and Methodology: Using unit data from the National Family and Health Survey 3, India, this paper measures poverty in multidimensional space and examines the linkages of multidimensional poverty with child survival. Multidimensional poverty is measured in the dimensions of knowledge, health and wealth, and child survival is measured with respect to infant mortality and under-five mortality. Descriptive statistics, principal component analyses and life table methods are used in the analyses. Results: The estimates of multidimensional poverty are robust and the inter-state differentials are large. While the infant mortality rate and under-five mortality rate are disproportionately higher among the abject poor compared to the non-poor, there are no significant differences in child survival among the educationally, economically and health poor at the national level. State patterns in child survival among the educationally, economically and health poor are mixed. Conclusion: Use of multidimensional poverty measures helps to identify the abject poor who are unlikely to come out of the poverty trap. Child survival is significantly lower among the abject poor compared to the moderately poor and the non-poor. We advocate popularizing the concept of multiple deprivations in research and programs so as to reduce poverty and inequality in the population. PMID:22046384
NASA Astrophysics Data System (ADS)
Arp, Trevor; Pleskot, Dennis; Gabor, Nathaniel
We have developed a new photoresponse imaging technique that utilizes extensive data acquisition over a large parameter space. By acquiring a multi-dimensional data set, we fully capture the intrinsic optoelectronic response of two-dimensional heterostructure devices. Using this technique we have investigated the behavior of heterostructures consisting of molybdenum ditelluride (MoTe2) sandwiched between graphene top and bottom contacts. Under near-infrared optical excitation, the ultra-thin heterostructure devices exhibit sub-linear photocurrent response that recovers within several dozen picoseconds. As the optical power increases, the dynamics of the photoresponse, consistent with 3-body annihilation, precede a sudden suppression of photocurrent. The observed dynamics near the threshold to photocurrent suppression may indicate the onset to a strongly interacting population of electrons and holes.
One-Dimensional, Two-Phase Flow Modeling Toward Interpreting Motor Slag Expulsion Phenomena
NASA Technical Reports Server (NTRS)
Kibbey, Timothy P.
2012-01-01
Aluminum oxide slag accumulation and expulsion was previously shown to be a player in various solid rocket motor phenomena, including the Space Shuttle's Reusable Solid Rocket Motor (RSRM) pressure perturbation, or "blip," and phantom moment. In the latter case, such un-commanded side accelerations near the end of burn have also been identified in several other motor systems. However, efforts to estimate the mass expelled during a given event have come up short. Either bulk calculations are performed without enough physics present, or multiphase, multidimensional Computational Fluid Dynamic analyses are performed that give a snapshot in time and space but do not always aid in grasping the general principle. One-dimensional, two-phase compressible flow calculations yield an analytical result for nozzle flow under certain assumptions. This can be carried further to relate the bulk motor parameters of pressure, thrust, and mass flow rate under the different exhaust conditions driven by the addition of condensed phase mass flow. An unknown parameter is correlated to airflow testing with water injection where mass flow rates and pressure are known. Comparison is also made to full-scale static test motor data where thrust and pressure changes are known and similar behavior is shown. The end goal is to be able to include the accumulation and flow of slag in internal ballistics predictions. This will allow better prediction of the tailoff when much slag is ejected and of mass retained versus time, believed to be a contributor to the widely-observed "flight knockdown" parameter.
ERIC Educational Resources Information Center
Samejima, Fumiko
In latent trait theory the latent space, or space of the hypothetical construct, is usually represented by some unidimensional or multi-dimensional continuum of real numbers. Like the latent space, the item response can either be treated as a discrete variable or as a continuous variable. Latent trait theory relates the item response to the latent…
An Application of a Multidimensional Extension of the Two-Parameter Logistic Latent Trait Model.
ERIC Educational Resources Information Center
McKinley, Robert L.; Reckase, Mark D.
A latent trait model is described that is appropriate for use with tests that measure more than one dimension, and its application to both real and simulated test data is demonstrated. Procedures for estimating the parameters of the model are presented. The research objectives are to determine whether the two-parameter logistic model more…
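For context, the standard unidimensional two-parameter logistic item response function and its common multidimensional extension can be written as follows; the notation is the conventional one and not necessarily that used in the report (a = discrimination, b = difficulty, theta = latent trait, with vector-valued analogues a and d in the multidimensional case):

    % Two-parameter logistic (2PL) model and its multidimensional (M2PL) extension
    \[
      P(u_{ij}=1\mid\theta_j) \;=\; \frac{1}{1+\exp\!\bigl[-a_i(\theta_j-b_i)\bigr]},
      \qquad
      P(u_{ij}=1\mid\boldsymbol{\theta}_j) \;=\;
      \frac{1}{1+\exp\!\bigl[-(\mathbf{a}_i^{\mathsf T}\boldsymbol{\theta}_j + d_i)\bigr]}
    \]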
NASA Astrophysics Data System (ADS)
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
2016-08-01
Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two space dimensions. The model describes the charge transport in semiconductor devices. Mathematically, the model can be written as a convection-diffusion type system with a right-hand side describing the relaxation effects and the interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for the hyperbolic step, and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with those of a splitting scheme based on the Nessyahu-Tadmor (NT) central scheme for the convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltages on the one-space-dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the current model equations. A two-dimensional simulation is also performed by the CE/SE method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.
NASA Technical Reports Server (NTRS)
Kreifeldt, J. G.; Parkin, L.; Wempe, T. E.; Huff, E. F.
1975-01-01
Perceived orderliness in the ground tracks of five A/C during their simulated flights was studied. Dynamically developing ground tracks for five A/C from 21 separate runs were reproduced from computer storage and displayed on CRTs to professional pilots and controllers for their evaluations and preferences under several criteria. The ground tracks were developed in 20 seconds, as opposed to the 5 minutes of simulated flight, using speedup techniques for display. Metric and nonmetric multidimensional scaling techniques are being used to analyze the subjective responses in an effort to: (1) determine the meaningfulness of basing decisions on such complex subjective criteria; (2) compare pilot/controller perceptual spaces; (3) determine the dimensionality of the subjects' perceptual spaces; and thereby (4) determine objective measures suitable for comparing alternative traffic management simulations.
Trajectories of Smooth: The Multidimensionality of Spatial Relations and Autism Spectrum
ERIC Educational Resources Information Center
Reddington, Sarah; Price, Deborah
2017-01-01
This paper examines how two men with autism spectrum (AS) experience educational spaces having attended public school in Nova Scotia, Canada. Smooth and striated space is mobilised as the main conceptual framework to account for the men's affectivities when experiencing the educational terrain. The central aim when applying smooth and striated…
Invisible and Hypervisible Academics: The Experiences of Black and Minority Ethnic Teacher Educators
ERIC Educational Resources Information Center
Lander, Vini; Santoro, Ninetta
2017-01-01
This qualitative study investigated the experiences of Black and Minority Ethnic (BME) teacher educators in England and Australia working within the predominantly white space of the academy. Data analysis was informed by a multidimensional theoretical framework drawing on Critical Race Theory, whiteness and Puwar's concept of the Space Invader.…
The effects of context on multidimensional spatial cognitive models. Ph.D. Thesis - Arizona Univ.
NASA Technical Reports Server (NTRS)
Dupnick, E. G.
1979-01-01
Spatial cognitive models obtained by multidimensional scaling represent cognitive structure by defining alternatives as points in a coordinate space based on relevant dimensions such that interstimulus dissimilarities perceived by the individual correspond to distances between the respective alternatives. The dependence of spatial models on the context of the judgments required of the individual was investigated. Context, which is defined as a perceptual interpretation and cognitive understanding of a judgment situation, was analyzed and classified with respect to five characteristics: physical environment, social environment, task definition, individual perspective, and temporal setting. Four experiments designed to produce changes in the characteristics of context and to test the effects of these changes upon individual cognitive spaces are described with focus on experiment design, objectives, statistical analysis, results, and conclusions. The hypothesis is advanced that an individual can be characterized as having a master cognitive space for a set of alternatives. When the context changes, the individual appears to change the dimension weights to give a new spatial configuration. Factor analysis was used in the interpretation and labeling of cognitive space dimensions.
Multidimensional quantitative analysis of mRNA expression within intact vertebrate embryos.
Trivedi, Vikas; Choi, Harry M T; Fraser, Scott E; Pierce, Niles A
2018-01-08
For decades, in situ hybridization methods have been essential tools for studies of vertebrate development and disease, as they enable qualitative analyses of mRNA expression in an anatomical context. Quantitative mRNA analyses typically sacrifice the anatomy, relying on embryo microdissection, dissociation, cell sorting and/or homogenization. Here, we eliminate the trade-off between quantitation and anatomical context, using quantitative in situ hybridization chain reaction (qHCR) to perform accurate and precise relative quantitation of mRNA expression with subcellular resolution within whole-mount vertebrate embryos. Gene expression can be queried in two directions: read-out from anatomical space to expression space reveals co-expression relationships in selected regions of the specimen; conversely, read-in from multidimensional expression space to anatomical space reveals those anatomical locations in which selected gene co-expression relationships occur. As we demonstrate by examining gene circuits underlying somitogenesis, quantitative read-out and read-in analyses provide the strengths of flow cytometry expression analyses, but by preserving subcellular anatomical context, they enable bi-directional queries that open a new era for in situ hybridization. © 2018. Published by The Company of Biologists Ltd.
The organization of conspecific face space in nonhuman primates
Parr, Lisa A.; Taubert, Jessica; Little, Anthony C.; Hancock, Peter J. B.
2013-01-01
Humans and chimpanzees demonstrate numerous cognitive specializations for processing faces, but comparative studies with monkeys suggest that these may be the result of recent evolutionary adaptations. The present study utilized the novel approach of face space, a powerful theoretical framework used to understand the representation of face identity in humans, to further explore species differences in face processing. According to the theory, faces are represented by vectors in a multidimensional space, the centre of which is defined by an average face. Each dimension codes features important for describing a face’s identity, and vector length codes the feature’s distinctiveness. Chimpanzees and rhesus monkeys discriminated male and female conspecifics’ faces, rated by humans for their distinctiveness, using a computerized task. Multidimensional scaling analyses showed that the organization of face space was similar between humans and chimpanzees. Distinctive faces had the longest vectors and were the easiest for chimpanzees to discriminate. In contrast, distinctiveness did not correlate with the performance of rhesus monkeys. The feature dimensions for each species’ face space were visualized and described using morphing techniques. These results confirm species differences in the perceptual representation of conspecific faces, which are discussed within an evolutionary framework. PMID:22670823
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2016-08-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation, and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components of some dynamical equations to estimate the parameters in a single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
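The component-by-component idea can be sketched with an evolutionary optimizer from SciPy applied to a single component of the Rössler system; the code below is an illustration under assumed true parameters (a = b = 0.2, c = 5.7), not the authors' implementation:

    # Sketch: estimate the Rossler parameter "a" from the y-component equation only,
    # using the known time series of all components and differential evolution.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import differential_evolution

    def rossler(t, s, a, b, c):
        x, y, z = s
        return [-y - z, x + a * y, b + z * (x - c)]

    t_eval = np.linspace(0, 50, 5001)
    sol = solve_ivp(rossler, (0, 50), [1.0, 1.0, 0.0], args=(0.2, 0.2, 5.7),
                    t_eval=t_eval, rtol=1e-9, atol=1e-9)
    x, y, z = sol.y
    dydt = np.gradient(y, t_eval)                # "observed" derivative of y

    def cost(p):
        a = p[0]
        return np.mean((dydt - (x + a * y)) ** 2)   # residual of the y-equation only

    res = differential_evolution(cost, bounds=[(0.0, 1.0)], seed=0, tol=1e-10)
    print("estimated a:", res.x[0])              # should be close to 0.2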
ASPCAP: THE APOGEE STELLAR PARAMETER AND CHEMICAL ABUNDANCES PIPELINE
DOE Office of Scientific and Technical Information (OSTI.GOV)
García Pérez, Ana E.; Majewski, Steven R.; Shane, Neville
2016-06-01
The Apache Point Observatory Galactic Evolution Experiment (APOGEE) has built the largest moderately high-resolution (R ≈ 22,500) spectroscopic map of stars across the Milky Way, including dust-obscured areas. The APOGEE Stellar Parameter and Chemical Abundances Pipeline (ASPCAP) is the software developed for the automated analysis of these spectra. ASPCAP determines atmospheric parameters and chemical abundances from observed spectra by comparing them to libraries of theoretical spectra, using χ² minimization in a multidimensional parameter space. The package consists of a Fortran 90 code that does the actual minimization and a wrapper IDL code for book-keeping and data handling. This paper explains in detail the ASPCAP components and functionality, and presents results from a number of tests designed to check its performance. ASPCAP provides stellar effective temperatures, surface gravities, and metallicities precise to 2%, 0.1 dex, and 0.05 dex, respectively, for most APOGEE stars, which are predominantly giants. It also provides abundances for up to 15 chemical elements with various levels of precision, typically under 0.1 dex. The final data release (DR12) of the Sloan Digital Sky Survey III contains an APOGEE database of more than 150,000 stars. ASPCAP development continues in the SDSS-IV APOGEE-2 survey.
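The core chi-square comparison against a library of synthetic spectra can be sketched as follows; the "model spectra" and grids below are placeholders, not ASPCAP's actual libraries or Fortran search:

    # Schematic chi-square grid search of an observed spectrum against synthetic
    # templates. The synthetic model here is a made-up placeholder.
    import numpy as np

    rng = np.random.default_rng(2)
    wave = np.linspace(15100, 16900, 300)        # wavelength grid (schematic)

    def synth(teff, logg):
        # Placeholder "model spectrum": two parameter-dependent absorption dips
        dip1 = 0.2 * np.exp(-((wave - 15500 - 0.1 * (teff - 4800)) / 5.0) ** 2)
        dip2 = 0.1 * (logg / 5.0) * np.exp(-((wave - 16200) / 5.0) ** 2)
        return 1.0 - dip1 - dip2

    obs = synth(4800, 2.5) + rng.normal(0, 0.01, wave.size)
    err = np.full(wave.size, 0.01)

    teff_grid = np.arange(4000, 5601, 100)
    logg_grid = np.arange(0.5, 5.01, 0.5)
    chi2 = np.array([[np.sum(((obs - synth(t, g)) / err) ** 2)
                      for g in logg_grid] for t in teff_grid])
    it, ig = np.unravel_index(np.argmin(chi2), chi2.shape)
    print("best Teff, logg:", teff_grid[it], logg_grid[ig])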
Multidimensional electron beam-plasma instabilities in the relativistic regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bret, A.; Gremillet, L.; Dieckmann, M. E.
2010-12-15
The interest in relativistic beam-plasma instabilities has been greatly rejuvenated over the past two decades by novel concepts in laboratory and space plasmas. Recent advances in this long-standing field are here reviewed from both theoretical and numerical points of view. The primary focus is on the two-dimensional spectrum of unstable electromagnetic waves growing within relativistic, unmagnetized, and uniform electron beam-plasma systems. Although the goal is to provide a unified picture of all instability classes at play, emphasis is put on the potentially dominant waves propagating obliquely to the beam direction, which have received little attention over the years. First, the basic derivation of the general dielectric function of a kinetic relativistic plasma is recalled. Next, an overview of two-dimensional unstable spectra associated with various beam-plasma distribution functions is given. Both cold-fluid and kinetic linear theory results are reported, the latter being based on waterbag and Maxwell-Juettner model distributions. The main properties of the competing modes (developing parallel, transverse, and oblique to the beam) are given, and their respective region of dominance in the system parameter space is explained. Later sections address particle-in-cell numerical simulations and the nonlinear evolution of multidimensional beam-plasma systems. The elementary structures generated by the various instability classes are first discussed in the case of reduced-geometry systems. Validation of linear theory is then illustrated in detail for large-scale systems, as is the multistaged character of the nonlinear phase. Finally, a collection of closely related beam-plasma problems involving additional physical effects is presented, and worthwhile directions of future research are outlined.
Politi, Liran; Codish, Shlomi; Sagy, Iftach; Fink, Lior
2014-12-01
Insights about patterns of system use are often gained through the analysis of system log files, which record the actual behavior of users. In a clinical context, however, few attempts have been made to typify system use through log file analysis. The present study offers a framework for identifying, describing, and discerning among patterns of use of a clinical information retrieval system. We use the session attributes of volume, diversity, granularity, duration, and content to define a multidimensional space in which each specific session can be positioned. We also describe an analytical method for identifying the common archetypes of system use in this multidimensional space. We demonstrate the value of the proposed framework with a log file of the use of a health information exchange (HIE) system by physicians in an emergency department (ED) of a large Israeli hospital. The analysis reveals five distinct patterns of system use, which have yet to be described in the relevant literature. The results of this study have the potential to inform the design of HIE systems for efficient and effective use, thus increasing their contribution to the clinical decision-making process. Copyright © 2014 Elsevier Inc. All rights reserved.
Wide Area Security Region Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Lu, Shuai; Guo, Xinxin
2010-03-31
This report develops innovative and efficient methodologies and practical procedures to determine the wide-area security region of a power system, which take into consideration all types of system constraints including thermal, voltage, voltage stability, transient and potentially oscillatory stability limits in the system. The approach expands the idea of transmission system nomograms to a multidimensional case, involving multiple system limits and parameters such as transmission path constraints, zonal generation or load, etc., considered concurrently. The security region boundary is represented using its piecewise approximation with the help of linear inequalities (so called hyperplanes) in a multi-dimensional space, consisting of system parameters that are critical for security analyses. The goal of this approximation is to find a minimum set of hyperplanes that describe the boundary with a given accuracy. Methodologies are also developed to use the security hyperplanes, pre-calculated offline, to determine system security margins in real-time system operations, to identify weak elements in the system, and to calculate key contributing factors and sensitivities to determine the best system controls in real time and to assist in developing remedial actions and transmission system enhancements offline. A prototype program that automates the simulation procedures used to build the set of security hyperplanes has also been developed. The program makes it convenient to update the set of security hyperplanes necessitated by changes in system configurations. A prototype operational tool that uses the security hyperplanes to assess security margins and to calculate optimal control directions in real time has been built to demonstrate the project success. Numerical simulations have been conducted using the full-size Western Electricity Coordinating Council (WECC) system model, and they clearly demonstrated the feasibility and the effectiveness of the developed technology. Recommendations for the future work have also been formulated.
Polychromatic plots: graphical display of multidimensional data.
Roederer, Mario; Moody, M Anthony
2008-09-01
Limitations of graphical displays as well as human perception make the presentation and analysis of multidimensional data challenging. Graphical display of information on paper or by current projectors is perforce limited to two dimensions; the encoding of information from other dimensions must be overloaded into the two physical dimensions. A number of alternative means of encoding this information have been implemented, such as offsetting data points at an angle (e.g., three-dimensional projections onto a two-dimensional surface) or generating derived parameters that are combinations of other variables (e.g., principal components). Here, we explore the use of color to encode additional dimensions of data. PolyChromatic Plots are standard dot plots, where the color of each event is defined by the values of one, two, or three of the measurements for that event. The measurements for these parameters are mapped onto an intensity value for each primary color (red, green, or blue) based on different functions. In addition, differential weighting of the priority with which overlapping events are displayed can be defined by these same measurements. PolyChromatic Plots can encode up to five independent dimensions of data in a single display. By altering the color mapping function and the priority function, very different displays that highlight or de-emphasize populations of events can be generated. As for standard black-and-white dot plots, frequency information can be significantly biased by this display; care must be taken to ensure appropriate interpretation of the displays. PolyChromatic Plots are a powerful display type that enables rapid data exploration. By virtue of encoding as many as five dimensions of data independently, an enormous amount of information can be gleaned from the displays. In many ways, the display performs somewhat like an unsupervised cluster algorithm, by highlighting events of similar distributions in multivariate space.
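A minimal sketch of the idea, with hypothetical event data and channel assignments, maps three measurements onto the red, green, and blue channels of a two-axis dot plot:

    # Sketch of the core idea (not the authors' software): colour-encode three extra
    # measurements per event as RGB on a standard two-axis dot plot.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(3)
    n = 5000
    events = rng.lognormal(mean=1.0, sigma=0.6, size=(n, 5))   # 5 "parameters" per event

    def to_unit(v):
        # Map a measurement onto [0, 1] for use as a colour-channel intensity
        v = np.log10(v)
        return np.clip((v - v.min()) / (v.max() - v.min()), 0, 1)

    rgb = np.column_stack([to_unit(events[:, 2]),    # parameter 3 -> red
                           to_unit(events[:, 3]),    # parameter 4 -> green
                           to_unit(events[:, 4])])   # parameter 5 -> blue

    order = np.argsort(events[:, 2])                 # display priority: high-red drawn last
    plt.scatter(events[order, 0], events[order, 1], c=rgb[order], s=4)
    plt.xscale("log"); plt.yscale("log")
    plt.xlabel("parameter 1"); plt.ylabel("parameter 2")
    plt.savefig("polychromatic_plot.png", dpi=150)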
Imaging nanoscale lattice variations by machine learning of x-ray diffraction microscopy data
Laanait, Nouamane; Zhang, Zhan; Schlepütz, Christian M.
2016-08-09
In this paper, we present a novel methodology based on machine learning to extract lattice variations in crystalline materials, at the nanoscale, from an x-ray Bragg diffraction-based imaging technique. By employing a full-field microscopy setup, we capture real space images of materials, with imaging contrast determined solely by the x-ray diffracted signal. The data sets that emanate from this imaging technique are a hybrid of real space information (image spatial support) and reciprocal lattice space information (image contrast), and are intrinsically multidimensional (5D). By a judicious application of established unsupervised machine learning techniques and multivariate analysis to this multidimensional data cube, we show how to extract features that can be ascribed physical interpretations in terms of common structural distortions, such as lattice tilts and dislocation arrays. Finally, we demonstrate this 'big data' approach to x-ray diffraction microscopy by identifying structural defects present in an epitaxial ferroelectric thin-film of lead zirconate titanate.
NASA Astrophysics Data System (ADS)
Andonov, Zdravko
This R&D represents an innovative multidimensional 6D-N(6n)D Space-Time (S-T) Methodology, 6D-6nD Coordinate Systems, 6D Equations, and a new 6D strategy and technology for the development of Planetary Space Sciences, S-T Data Management and S-T Computational Tomography... The Methodology is relevant for brand-new RS microwave satellites and Computational Tomography Systems development, aimed at defending sustainable Earth, Moon, and Sun System evolution. Especially important are innovations for monitoring and protection of the strategic trilateral system H-OH-H2O (Hydrogen, Hydroxyl and Water), corresponding to RS VHRS (Very High Resolution Systems) of 1.420-1.657-22.089 GHz microwaves... One of the greatest paradoxes and challenges of world science is the "transformation" of the J. L. Lagrange 4D Space-Time (S-T) System into the H. Minkowski 4D S-T System (O-X, Y, Z, icT) for Einstein's "Theory of Relativity". As a global result, in contemporary Advanced Space Sciences there is no really adequate 4D-6D Space-Time Coordinate System and no 6D Advanced Cosmos Strategy and Methodology for Multidimensional and Multitemporal Space-Time Data Management and Tomography... This is one of the most pressing S-T problems. The discovery of a simple and optimal nD S-T Methodology is extremely important for all universities' Space Sciences education programs, for advances in space research, and especially for all young space scientists' R&D. The top ten 21st-century challenges ahead of Planetary and Space Sciences, Space Data Management and Computational Space Tomography, important for the successful development of young scientist generations, are the following: 1. R&D of W. R. Hamilton's general idea for transforming all Space Sciences into Time Sciences, beginning with the 6D Eikonal for 6D anisotropic media and velocities; development of the IERS Earth and Space Systems (VLBI, LLR, GPS, SLR, DORIS, etc.) for Planetary-Space Data Management and Computational Planetary and Space Tomography. 2. R&D of S. W. Hawking's paradigm for 2D complex time and the Quantum Wave Cosmology paradigm for the solution of the main problem of contemporary physics. 3. R&D of the Einstein-Minkowski geodesics paradigm in the 4D Space-Time continuum towards 6D-6nD Space-Time continuum paradigms and 6D S-T equations... 4. R&D of Erwin Schrödinger's 4D S-T Universe evolution equation, its David Bohm 4D generalization for anisotropic media, and an innovative 6D version for instantaneous quantum measurement, the Bohm-Schrödinger 6D S-T Universe evolution equation. 5. R&D of brand-new 6D planning of S-T experiments, brand-new 6D space technics and space technology generalizations, especially for 6D RS VHRS research, monitoring and 6D Computational Tomography. 6. R&D of "6D Euler-Poisson Equations" and "6D Kolmogorov Turbulence Theory" for GeoDynamics and for Space Dynamics as an evolution of the Gauss-Riemann paradigms. 7. R&D of N. Boneff's NASA R&D for the asteroid "Eros" and the evolution of Space Science laws. 8. R&D of H. Poincaré's paradigm for Nature and Cosmos as a 6D group of transferences. 9. R&D of K. Popoff's N-body general problem and general thermodynamic S-T theory as a development of the Einstein-Prigogine-Landau paradigms. 10. R&D of the first GUT since 1958 by N. S. Kalitzin (Kalitzin N. S., 1958: Über eine einheitliche Feldtheorie. ZAHeidelberg-ARI, WZHUmnR-B., 7 (2), 207-215) and the "Multitemporal Theory of Relativity", with special applications to photon rockets and all Space-Time R&D.
GENERAL CONCLUSION: The Multidimensional Space-Time Methodology is an advance in space research, corresponding to the IAF-IAA-COSPAR Innovative Strategy and R&D Programs (UNEP, UNDP, GEOSS, GMES, etc.).
ERIC Educational Resources Information Center
Jacob, Laura Beth
2012-01-01
Virtual world environments have evolved from object-oriented, text-based online games to complex three-dimensional immersive social spaces where the lines between the real and the computer-generated begin to blur. Educators use virtual worlds to create engaging three-dimensional learning spaces for students, but the impact of virtual worlds in…
Naranjo, Ramon C.; Niswonger, Richard G.; Stone, Mark; Davis, Clinton; McKay, Alan
2012-01-01
We describe an approach for calibrating a two-dimensional (2-D) flow model of hyporheic exchange using observations of temperature and pressure to estimate hydraulic and thermal properties. A longitudinal 2-D heat and flow model was constructed for a riffle-pool sequence to simulate flow paths and flux rates for variable discharge conditions. A uniform random sampling approach was used to examine the solution space and identify optimal values at local and regional scales. We used a regional sensitivity analysis to examine the effects of parameter correlation and nonuniqueness commonly encountered in multidimensional modeling. The results from this study demonstrate the ability to estimate hydraulic and thermal parameters using measurements of temperature and pressure to simulate exchange and flow paths. Examination of the local parameter space provides the potential for refinement of zones that are used to represent sediment heterogeneity within the model. The results indicate vertical hydraulic conductivity was not identifiable solely using pressure observations; however, a distinct minimum was identified using temperature observations. The measured temperature and pressure and estimated vertical hydraulic conductivity values indicate the presence of a discontinuous low-permeability deposit that limits the vertical penetration of seepage beneath the riffle, whereas there is a much greater exchange where the low-permeability deposit is absent. Using both temperature and pressure to constrain the parameter estimation process provides the lowest overall root-mean-square error as compared to using solely temperature or pressure observations. This study demonstrates the benefits of combining continuous temperature and pressure for simulating hyporheic exchange and flow in a riffle-pool sequence. Copyright 2012 by the American Geophysical Union.
Lin, Chao; Shen, Xueju; Li, Baochen
2014-08-25
We demonstrate that all parameters of an optical lightwave can be simultaneously designed as keys in a security system. This multi-dimensional property of the key can significantly enlarge the key space and further enhance the security level of the system. Single-shot off-axis digital holography with orthogonally polarized reference waves is employed to record the polarization state of the object wave. Two pieces of polarization holograms are calculated and fabricated to be arranged in the reference arms to generate random amplitude and phase distributions, respectively. Upon reconstruction, the original information, represented as a QR code, can be retrieved using Fresnel diffraction with the decryption keys and read out noise-free. Numerical simulation results for this cryptosystem are presented. An analysis of the key sensitivity and fault tolerance properties is also provided.
NASA Astrophysics Data System (ADS)
Boscheri, Walter; Dumbser, Michael; Loubère, Raphaël; Maire, Pierre-Henri
2018-04-01
In this paper we develop a conservative cell-centered Lagrangian finite volume scheme for the solution of the hydrodynamics equations on unstructured multidimensional grids. The method is derived from the Eucclhyd scheme discussed in [47,43,45]. It is second-order accurate in space and is combined with the a posteriori Multidimensional Optimal Order Detection (MOOD) limiting strategy to ensure robustness and stability at shock waves. Second-order of accuracy in time is achieved via the ADER (Arbitrary high order schemes using DERivatives) approach. A large set of numerical test cases is proposed to assess the ability of the method to achieve effective second order of accuracy on smooth flows, maintaining an essentially non-oscillatory behavior on discontinuous profiles, general robustness ensuring physical admissibility of the numerical solution, and precision where appropriate.
Multi-dimensional Fokker-Planck equation analysis using the modified finite element method
NASA Astrophysics Data System (ADS)
Náprstek, J.; Král, R.
2016-09-01
The Fokker-Planck equation (FPE) is a frequently used tool for the solution of the cross probability density function (PDF) of a dynamic system response excited by a vector of random processes. FEM represents a very effective solution approach, particularly when transition processes are investigated or a more detailed solution is needed. Existing papers deal with single-degree-of-freedom (SDOF) systems only, so the respective FPE includes only two independent space variables. Stepping beyond this limit to MDOF systems, a number of specific problems related to true multi-dimensionality must be overcome. Unlike earlier studies, multi-dimensional simplex elements in an arbitrary dimension should be deployed and rectangular (multi-brick) elements abandoned. Simple closed formulae for integration over a multi-dimensional domain have been derived. Another specific problem is the generation of the multi-dimensional finite element mesh. Assembly of the global system matrices must follow newly composed algorithms because of the multi-dimensionality. The system matrices are quite full, so the advantages of sparsity commonly exploited in conventional 2D/3D FEM applications cannot be profited from. After verification of the partial algorithms, an illustrative example dealing with a 2DOF non-linear aeroelastic system in combination with random and deterministic excitations is discussed.
Computer-Aided Discovery Tools for Volcano Deformation Studies with InSAR and GPS
NASA Astrophysics Data System (ADS)
Pankratius, V.; Pilewskie, J.; Rude, C. M.; Li, J. D.; Gowanlock, M.; Bechor, N.; Herring, T.; Wauthier, C.
2016-12-01
We present a Computer-Aided Discovery approach that facilitates the cloud-scalable fusion of different data sources, such as GPS time series and Interferometric Synthetic Aperture Radar (InSAR), for the purpose of identifying the expansion centers and deformation styles of volcanoes. The tools currently developed at MIT allow the definition of alternatives for data processing pipelines that use various analysis algorithms. The Computer-Aided Discovery system automatically generates algorithmic and parameter variants to help researchers explore multidimensional data processing search spaces efficiently. We present first application examples of this technique using GPS data on volcanoes on the Aleutian Islands and work in progress on combined GPS and InSAR data in Hawaii. In the model search context, we also illustrate work in progress combining time series Principal Component Analysis with InSAR augmentation to constrain the space of possible model explanations on current empirical data sets and achieve a better identification of deformation patterns. This work is supported by NASA AIST-NNX15AG84G and NSF ACI-1442997 (PI: V. Pankratius).
Mortier, Séverine Thérèse F C; Van Bockstal, Pieter-Jan; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas
2016-06-01
Large molecules, such as biopharmaceuticals, are considered the key driver of growth for the pharmaceutical industry. Freeze-drying is the preferred way to stabilise these products when needed. However, it is an expensive, inefficient, time- and energy-consuming process. During freeze-drying, there are only two main process variables to be set, i.e. the shelf temperature and the chamber pressure, preferably in a dynamic way. This manuscript focuses on the essential use of uncertainty analysis for the determination and experimental verification of the dynamic primary drying Design Space for pharmaceutical freeze-drying. Traditionally, the chamber pressure and shelf temperature are kept constant during primary drying, leading to suboptimal process conditions. In this paper it is demonstrated how a mechanistic model of the primary drying step makes it possible to determine the optimal dynamic values for both process variables during processing, resulting in a dynamic Design Space with a known risk of failure. This allows running the primary drying step as time-efficiently as possible, thereby guaranteeing that the temperature at the sublimation front does not exceed the collapse temperature. The Design Space is the multidimensional combination and interaction of input variables and process parameters leading to the expected product specifications with a controlled (i.e., high) probability. Therefore, inclusion of parameter uncertainty is an essential part of the definition of the Design Space, although it is often neglected. To quantitatively assess the inherent uncertainty on the parameters of the mechanistic model, an uncertainty analysis was performed to establish the borders of the dynamic Design Space, i.e. a time-varying shelf temperature and chamber pressure, associated with a specific risk of failure. A risk of failure acceptance level of 0.01%, i.e. a 'zero-failure' situation, results in an increased primary drying process time compared to the deterministic dynamic Design Space; however, the risk of failure is under control. Experimental verification revealed that only a risk of failure acceptance level of 0.01% yielded a guaranteed zero-defect quality end-product. The computed process settings with a risk of failure acceptance level of 0.01% reduced the primary drying time by more than half in comparison with a regular, conservative cycle with fixed settings. Copyright © 2016. Published by Elsevier B.V.
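The uncertainty-propagation step can be sketched generically: sample the uncertain model parameters, evaluate a product-temperature model, and count how often the sublimation-front temperature exceeds the collapse temperature. The model and all numbers below are placeholders, not the manuscript's mechanistic model:

    # Generic Monte Carlo risk-of-failure sketch with a placeholder temperature model.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100000
    T_collapse = -32.0                                   # degC, assumed collapse temperature

    # Hypothetical uncertain parameters (e.g. dried-layer resistance, heat transfer)
    Rp = rng.normal(1.0e5, 1.0e4, n)
    Kv = rng.normal(20.0, 2.0, n)

    def product_temperature(T_shelf, P_chamber, Rp, Kv):
        # Placeholder relation: warmer shelf, higher pressure, lower resistance and
        # better heat transfer all push the front temperature up; coefficients made up.
        return -40.0 + 0.5 * (T_shelf + 20.0) + 0.04 * P_chamber + 2.0e4 / Rp + 0.1 * Kv

    T_front = product_temperature(T_shelf=-15.0, P_chamber=10.0, Rp=Rp, Kv=Kv)
    risk = np.mean(T_front > T_collapse)
    print(f"estimated risk of failure: {risk:.4%}")      # compare against a 0.01% acceptance level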
Determination of the design space of the HPLC analysis of water-soluble vitamins.
Wagdy, Hebatallah A; Hanafi, Rasha S; El-Nashar, Rasha M; Aboul-Enein, Hassan Y
2013-06-01
Analysis of water-soluble vitamins has been addressed extensively over recent decades. A multitude of HPLC methods have been reported, each with its own advantages and shortcomings, yet the design space of HPLC analysis of these vitamins was not defined in any of these reports. As per the Food and Drug Administration (FDA), implementing the quality by design approach for the analysis of commercially available mixtures is hypothesized to benefit the pharmaceutical industry by facilitating the process of analytical method development and approval. This work illustrates a multifactorial optimization of three measured plus seven calculated influential HPLC parameters for the analysis of a mixture containing seven common water-soluble vitamins (B1, B2, B6, B12, C, PABA, and PP). The three measured parameters are gradient time, temperature, and ternary eluent composition (B1/B2), and the seven calculated parameters are flow rate, column length, column internal diameter, dwell volume, extracolumn volume, %B (start), and %B (end). The design is based on 12 experiments in which the multifactorial effects of these 3 + 7 parameters on critical resolution and selectivity were examined by systematic, simultaneous variation of all parameters. The 12 basic runs comprised two different gradient times, each at two different temperatures, repeated at three different ternary eluent compositions (methanol, acetonitrile, or a mixture of both). Multidimensional robust regions of high critical R(s) were defined and graphically verified. The optimum method was selected based on the best resolution in the shortest run time for a synthetic mixture, followed by application to two pharmaceutical preparations available on the market. The predicted retention times of all peaks were found to be in good agreement with the virtual ones. In conclusion, the presented report offers an accurate determination of the design space for critical resolution in the analysis of water-soluble vitamins by HPLC, which would help the regulatory authorities to judge the validity of presented analytical methods for approval. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
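The 2 x 2 x 3 = 12-run layout described above can be enumerated mechanically; in the sketch below the specific gradient times, temperatures and composition labels are hypothetical placeholders:

from itertools import product

# Hypothetical factor levels reproducing the 2 x 2 x 3 = 12-run layout:
# two gradient times, two temperatures, three ternary eluent compositions.
gradient_times_min = [20, 60]
temperatures_c = [30, 50]
ternary_compositions = ["methanol", "acetonitrile", "methanol/acetonitrile 50:50"]

runs = [
    {"run": i + 1, "gradient_time_min": g, "temperature_C": t, "organic_modifier": c}
    for i, (g, t, c) in enumerate(product(gradient_times_min, temperatures_c, ternary_compositions))
]

for r in runs:
    print(r)
print("total runs:", len(runs))   # -> 12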
Robust Multivariable Optimization and Performance Simulation for ASIC Design
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application-specific integrated circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem, which must be solved early in the development cycle of a system because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation.
Szűcs, D
2016-01-01
A large body of research suggests that mathematical learning disability (MLD) is related to working memory impairment. Here, I organize part of this literature through a meta-analysis of 36 studies with 665 MLD and 1049 control participants. I demonstrate that one subtype of MLD is associated with reading problems and weak verbal short-term and working memory. Another subtype of MLD does not have associated reading problems and is linked to weak visuospatial short-term and working memory. In order to better understand MLD we need to precisely define potentially modality-specific memory subprocesses and supporting executive functions relevant for mathematical learning. This can be achieved by taking a multidimensional parametric approach systematically probing an extended network of cognitive functions. Rather than creating arbitrary subgroups and/or focusing on a single factor, highly powered studies need to position individuals in a multidimensional parametric space. This will allow us to understand the multidimensional structure of cognitive functions and their relationship to mathematical performance. © 2016 Elsevier B.V. All rights reserved.
Mariani, Alberto; Brunner, S.; Dominski, J.; ...
2018-01-17
Reducing the uncertainty on physical input parameters derived from experimental measurements is essential for improving the reliability of gyrokinetic turbulence simulations. This can be achieved by introducing physical constraints. Amongst them, the zero particle flux condition is considered here. A first attempt is also made to match the experimental ion/electron heat flux ratio. This procedure is applied to the analysis of a particular Tokamak à Configuration Variable discharge. A detailed reconstruction of the zero particle flux hyper-surface in the multi-dimensional physical parameter space at a fixed time of the discharge is presented, including the effect of carbon as the main impurity. Both collisionless and collisional regimes are considered. Hyper-surface points within the experimental error bars are found. In conclusion, the analysis is performed with gyrokinetic simulations using the local version of the GENE code, computing the fluxes with a Quasi-Linear (QL) model and validating the QL results with non-linear simulations in a subset of cases.
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
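A toy two-dimensional version of the grid-cell-by-grid-cell idea (not the Snake code itself; the likelihood, grid spacing and stopping threshold are invented for illustration) might look like:

import heapq
import numpy as np

def log_like(theta):
    # Toy 2-D Gaussian log-likelihood standing in for a real cosmological likelihood.
    return -0.5 * ((theta[0] - 0.3) ** 2 / 0.01 + (theta[1] - 0.7) ** 2 / 0.04)

step = 0.02
start = (0, 0)       # starting grid cell indices relative to an arbitrary origin
threshold = 12.0     # stop expanding once a cell is this many log-units below the peak

def cell_centre(idx):
    # Grid anchored near the peak purely to keep the example small.
    return np.array([0.3, 0.7]) + step * np.array(idx)

# Explore cells in order of decreasing likelihood (max-heap via negated values),
# pushing the neighbours of every accepted cell; cells far below the running
# maximum are never expanded, which is what tames the dimensionality cost.
visited = {start}
heap = [(-log_like(cell_centre(start)), start)]
accepted = {}
best = -np.inf
while heap:
    neg_ll, idx = heapq.heappop(heap)
    ll = -neg_ll
    best = max(best, ll)
    if ll < best - threshold:
        continue
    accepted[idx] = ll
    for d in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        nb = (idx[0] + d[0], idx[1] + d[1])
        if nb not in visited:
            visited.add(nb)
            heapq.heappush(heap, (-log_like(cell_centre(nb)), nb))

print("cells retained:", len(accepted))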
The Grid[Way] Job Template Manager, a tool for parameter sweeping
NASA Astrophysics Data System (ADS)
Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.
2011-04-01
Parameter sweeping is a widely used algorithmic technique in computational science. It is specially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports interesting features like multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and job template automatic indexation. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and respective job statuses. Furthermore, it simplifies the porting of the target application to the grid, reducing the required amount of time and effort. Program summary: Program title: Grid[Way] Job Template Manager (version 1.0). Catalogue identifier: AEIE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Apache license 2.0. No. of lines in distributed program, including test data, etc.: 3545. No. of bytes in distributed program, including test data, etc.: 126 879. Distribution format: tar.gz. Programming language: Perl 5.8.5 and above. Computer: Any (tested on PC x86 and x86_64). Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, CentOS 5.4), Mac OS X (tested on Snow Leopard 10.6). RAM: 10 MB. Classification: 6.5. External routines: The GridWay Metascheduler [1]. Nature of problem: To parameterize and manage an application running on a grid or cluster. Solution method: Generation of job templates as a cross product of the input parameter sets. Also management of the job template files including the job submission to the grid, control and information retrieval. Restrictions: The parameter sweep is limited by disk space during generation of the job templates. The wild-carding of parameters cannot be done in decreasing order. Job submission, control and information is delegated to the GridWay Metascheduler. Running time: From half a second in the simplest operation to a few minutes for thousands of exponential sampling parameters.
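The stated solution method, generation of job templates as a cross product of the input parameter sets, can be sketched as follows; the parameter names and the template text are hypothetical and do not reproduce the actual GridWay job template syntax:

from itertools import product

# Hypothetical parameter sets; real GridWay templates have their own keywords,
# so the template text below is illustrative only.
parameters = {
    "SEED": [1, 2, 3],
    "TEMPERATURE": [280, 300, 320],
    "PRESSURE": [1.0, 2.5],
}

template = (
    "EXECUTABLE = run_model\n"
    "ARGUMENTS  = --seed {SEED} --T {TEMPERATURE} --P {PRESSURE}\n"
    "STDOUT_FILE = out.{index}\n"
)

names = list(parameters)
for index, values in enumerate(product(*(parameters[n] for n in names))):
    fields = dict(zip(names, values))
    body = template.format(index=index, **fields)
    with open(f"job.{index}.jt", "w") as fh:
        fh.write(body)

print("generated", index + 1, "job templates")   # 3 * 3 * 2 = 18 in this sketch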
Two-dimensional Manifold with Point-like Defects
NASA Astrophysics Data System (ADS)
Gani, V. A.; Dmitriev, A. E.; Rubin, S. G.
We study a class of two-dimensional compact extra spaces isomorphic to the sphere S^2 in the framework of multidimensional gravitation. We show that there exists a family of stationary metrics that depend on the initial (boundary) conditions. All these geometries have a singular point. We also discuss the possibility for these deformed extra spaces to be considered as dark matter candidates.
ERIC Educational Resources Information Center
Frelin, Anneli; Grannäs, Jan
2014-01-01
This article introduces a theoretical framework for studying school improvement processes such as making school environments safer. Using concepts from spatial theory, in which distinctions between mental, social and physical space are applied, makes for a multidimensional analysis of processes of change. In a multilevel case study, these were…
Multidimensionally encoded magnetic resonance imaging.
Lin, Fa-Hsuan
2013-07-01
Magnetic resonance imaging (MRI) typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here, we propose the multidimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel radiofrequency coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled. Copyright © 2012 Wiley Periodicals, Inc.
Using complex networks towards information retrieval and diagnostics in multidimensional imaging
NASA Astrophysics Data System (ADS)
Banerjee, Soumya Jyoti; Azharuddin, Mohammad; Sen, Debanjan; Savale, Smruti; Datta, Himadri; Dasgupta, Anjan Kr; Roy, Soumen
2015-12-01
We present a fresh and broad yet simple approach towards information retrieval in general and diagnostics in particular by applying the theory of complex networks on multidimensional, dynamic images. We demonstrate a successful use of our method with the time series generated from high content thermal imaging videos of patients suffering from the aqueous deficient dry eye (ADDE) disease. Remarkably, network analyses of thermal imaging time series of contact lens users and patients upon whom Laser-Assisted in situ Keratomileusis (Lasik) surgery has been conducted, exhibit pronounced similarity with results obtained from ADDE patients. We also propose a general framework for the transformation of multidimensional images to networks for futuristic biometry. Our approach is general and scalable to other fluctuation-based devices where network parameters derived from fluctuations, act as effective discriminators and diagnostic markers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao Yang; Luo, Gang; Jiang, Fangming
2010-05-01
Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
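A much-simplified sketch of the parameter-to-response mapping and sensitivity screening that a toolkit such as DAKOTA automates; the toy voltage model, parameter ranges and correlation-based ranking below are illustrative assumptions, not the coupled PEMFC model:

import numpy as np

rng = np.random.default_rng(2)

def cell_voltage(exchange_current, membrane_conductivity, contact_resistance):
    # Toy stand-in for a PEMFC model response at a fixed current density;
    # the functional form and constants are illustrative only.
    j = 1.0  # A/cm^2
    return (1.0
            - 0.05 * np.log(j / exchange_current)
            - j * (0.01 / membrane_conductivity + contact_resistance))

# Sample the three uncertain parameters, evaluate the response, and rank the
# parameters by the magnitude of their correlation with the response, a crude
# global sensitivity screen of the kind such toolkits automate.
n = 2000
samples = {
    "exchange_current":      rng.uniform(1e-4, 1e-2, n),
    "membrane_conductivity": rng.uniform(0.05, 0.2, n),
    "contact_resistance":    rng.uniform(0.0, 0.05, n),
}
response = cell_voltage(**samples)

for name, values in samples.items():
    r = np.corrcoef(values, response)[0, 1]
    print(f"{name:22s} correlation with voltage: {r:+.2f}")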
Aksu, Buket; Paradkar, Anant; de Matas, Marcel; Özer, Özgen; Güneri, Tamer; York, Peter
2013-02-01
Quality by design (QbD) is an essential part of the modern approach to pharmaceutical quality. This study was conducted in the framework of a QbD project involving ramipril tablets. Preliminary work included identification of the critical quality attributes (CQAs) and critical process parameters (CPPs) based on the quality target product profiles (QTPPs) using the historical data and risk assessment method failure mode and effect analysis (FMEA). Compendial and in-house specifications were selected as QTPPs for ramipril tablets. CPPs that affected the product and process were used to establish an experimental design. The results thus obtained can be used to facilitate definition of the design space using tools such as design of experiments (DoE), the response surface method (RSM) and artificial neural networks (ANNs). The project was aimed at discovering hidden knowledge associated with the manufacture of ramipril tablets using a range of artificial intelligence-based software, with the intention of establishing a multi-dimensional design space that ensures consistent product quality. At the end of the study, a design space was developed based on the study data and specifications, and a new formulation was optimized. On the basis of this formulation, a new laboratory batch formulation was prepared and tested. It was confirmed that the explored formulation was within the design space.
Hout, Michael C; Goldinger, Stephen D; Brady, Kyle J
2014-01-01
Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of "sameness" among their stimuli. For instance, a researcher may require similarity estimates among multiple exemplars of a target category in visual search, or targets and lures in recognition memory. Quantifying similarity, however, is challenging when everyday items are the desired stimulus set, particularly when researchers require several different pictures from the same category. In this article, we document a new multidimensional scaling database with similarity ratings for 240 categories, each containing color photographs of 16-17 exemplar objects. We collected similarity ratings using the spatial arrangement method. Reports include: the multidimensional scaling solutions for each category, up to five dimensions, stress and fit measures, coordinate locations for each stimulus, and two new classifications. For each picture, we categorized the item's prototypicality, indexed by its proximity to other items in the space. We also classified pairs of images along a continuum of similarity, by assessing the overall arrangement of each MDS space. These similarity ratings will be useful to any researcher that wishes to control the similarity of experimental stimuli according to an objective quantification of "sameness."
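A generic sketch of how such per-category similarity data can be turned into coordinate locations and stress values, using scikit-learn's MDS on a synthetic dissimilarity matrix; the proximity-based prototypicality index shown is one simple choice and not necessarily the authors' exact procedure:

import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(3)

# Synthetic pairwise dissimilarities for 16 exemplar images of one category
# (symmetric, zero diagonal), standing in for spatial-arrangement ratings.
n_items = 16
d = rng.uniform(0.1, 1.0, (n_items, n_items))
dissim = (d + d.T) / 2.0
np.fill_diagonal(dissim, 0.0)

# Two-dimensional metric MDS solution; the embedding coordinates play the role of
# the per-item coordinate locations reported in the database.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

# A crude prototypicality index: items closer to the centroid of the space are
# treated as more prototypical of the category.
centroid = coords.mean(axis=0)
prototypicality = -np.linalg.norm(coords - centroid, axis=1)
print("most prototypical item:", int(np.argmax(prototypicality)))
print("stress:", round(mds.stress_, 3))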
NASA Astrophysics Data System (ADS)
He, Jiayi; Shang, Pengjian; Xiong, Hui
2018-06-01
Stocks, as a concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy, and then provide three-dimensional perceptual maps of the results through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper, that is, multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta based cross-sample entropy and permutation based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparisons. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences among stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions, respectively: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
Structural diversity: a multi-dimensional approach to assess recreational services in urban parks.
Voigt, Annette; Kabisch, Nadja; Wurster, Daniel; Haase, Dagmar; Breuste, Jürgen
2014-05-01
Urban green spaces provide important recreational services for urban residents. In general, when park visitors enjoy "the green," they are in actuality appreciating a mix of biotic, abiotic, and man-made park infrastructure elements and qualities. We argue that these three dimensions of structural diversity have an influence on how people use and value urban parks. We present a straightforward approach for assessing urban parks that combines multi-dimensional landscape mapping and questionnaire surveys. We discuss the method as well as the results from its application to differently sized parks in Berlin and Salzburg.
Systems and Methods for Data Visualization Using Three-Dimensional Displays
NASA Technical Reports Server (NTRS)
Davidoff, Scott (Inventor); Djorgovski, Stanislav G. (Inventor); Estrada, Vicente (Inventor); Donalek, Ciro (Inventor)
2017-01-01
Data visualization systems and methods for generating 3D visualizations of a multidimensional data space are described. In one embodiment a 3D data visualization application directs a processing system to: load a set of multidimensional data points into a visualization table; create representations of a set of 3D objects corresponding to the set of data points; receive mappings of data dimensions to visualization attributes; determine the visualization attributes of the set of 3D objects based upon the selected mappings of data dimensions to 3D object attributes; update a visibility dimension in the visualization table for each of the plurality of 3D objects to reflect the visibility of each 3D object based upon the selected mappings of data dimensions to visualization attributes; and interactively render 3D data visualizations of the 3D objects within the virtual space from viewpoints determined based upon received user input.
Data analytics and parallel-coordinate materials property charts
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.
2018-01-01
It is often advantageous to display material property relationships in the form of charts that highlight important correlations and thereby enhance our understanding of materials behavior and facilitate materials selection. Unfortunately, in many cases, these correlations are highly multidimensional in nature, and one typically employs low-dimensional cross-sections of the property space to convey some aspects of these relationships. To overcome some of these difficulties, in this work we employ methods of data analytics in conjunction with a visualization strategy, known as parallel coordinates, to better represent multidimensional materials data and to extract useful relationships among properties. We illustrate the utility of this approach by the construction and systematic analysis of multidimensional materials property charts for metallic and ceramic systems. These charts simplify the description of high-dimensional geometry, enable dimensional reduction and the identification of significant property correlations, and underline distinctions among different materials classes.
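A minimal sketch of a parallel-coordinate materials property chart, assuming made-up property values and using pandas' built-in plotting helper rather than the authors' analysis pipeline:

import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Hypothetical multidimensional property data for a few materials; the numbers are
# placeholders, not measured values.
df = pd.DataFrame({
    "class":                ["metal", "metal", "ceramic", "ceramic", "polymer"],
    "density_g_cm3":        [7.8, 2.7, 3.9, 3.2, 1.2],
    "youngs_modulus_GPa":   [200, 70, 380, 310, 3],
    "thermal_cond_W_mK":    [50, 237, 30, 25, 0.2],
    "fracture_toughness":   [50, 25, 4, 5, 2],
})

# Normalise each property to [0, 1] so the axes share a common scale, then draw one
# polyline per material across the parallel property axes, coloured by class.
cols = df.columns.drop("class")
norm = df.copy()
norm[cols] = (df[cols] - df[cols].min()) / (df[cols].max() - df[cols].min())

parallel_coordinates(norm, "class", colormap="viridis")
plt.ylabel("normalised property value")
plt.tight_layout()
plt.savefig("parallel_coordinates_chart.png")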
Variational calculation of macrostate transition rates
NASA Astrophysics Data System (ADS)
Ulitsky, Alex; Shalloway, David
1998-08-01
We develop the macrostate variational method (MVM) for computing reaction rates of diffusive conformational transitions in multidimensional systems by a variational coarse-grained "macrostate" decomposition of the Smoluchowski equation. MVM uses multidimensional Gaussian packets to identify and focus computational effort on the "transition region," a localized, self-consistently determined region in conformational space positioned roughly between the macrostates. It also determines the "transition direction" which optimally specifies the projected potential of mean force for mean first-passage time calculations. MVM is complementary to variational transition state theory in that it can efficiently solve multidimensional problems but does not accommodate memory-friction effects. It has been tested on model 1- and 2-dimensional potentials and on the 12-dimensional conformational transition between the isoforms of a microcluster of six atoms having only van der Waals interactions. Comparison with Brownian dynamics calculations shows that MVM obtains equivalent results at a fraction of the computational cost.
Behavioural hypervolumes of spider communities predict community performance and disbandment
Sih, Andrew; DiRienzo, Nicholas; Pinter-Wollman, Noa
2016-01-01
Trait-based ecology argues that an understanding of the traits of interactors can enhance the predictability of ecological outcomes. We examine here whether the multidimensional behavioural-trait diversity of communities influences community performance and stability in situ. We created experimental communities of web-building spiders, each with an identical species composition. Communities contained one individual of each of five different species. Prior to establishing these communities in the field, we examined three behavioural traits for each individual spider. These behavioural measures allowed us to estimate community-wide behavioural diversity, as inferred by the multidimensional behavioural volume occupied by the entire community. Communities that occupied a larger region of behavioural-trait space (i.e. where spiders differed more from each other behaviourally) gained more mass and were less likely to disband. Thus, there is a community-wide benefit to multidimensional behavioural diversity in this system that might translate to other multispecies assemblages. PMID:27974515
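One common way to operationalise a community-level behavioural hypervolume is the volume of the convex hull spanned by community members in trait space; the sketch below uses synthetic trait scores and is only one possible choice of hypervolume measure, not necessarily the authors' estimator:

import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(4)

# Synthetic behavioural measurements: 5 spiders (one per species) scored on
# 3 behavioural traits, for two hypothetical communities.
community_a = rng.normal(0.0, 1.0, (5, 3))   # behaviourally diverse community
community_b = rng.normal(0.0, 0.3, (5, 3))   # behaviourally similar community

def behavioural_hypervolume(traits):
    # Quantify the region of trait space a community occupies as the volume of the
    # convex hull spanned by its members.
    return ConvexHull(traits).volume

print("hypervolume, community A:", round(behavioural_hypervolume(community_a), 3))
print("hypervolume, community B:", round(behavioural_hypervolume(community_b), 3))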
Q-space trajectory imaging for multidimensional diffusion MRI of the human brain
Westin, Carl-Fredrik; Knutsson, Hans; Pasternak, Ofer; Szczepankiewicz, Filip; Özarslan, Evren; van Westen, Danielle; Mattisson, Cecilia; Bogren, Mats; O’Donnell, Lauren; Kubicki, Marek; Topgaard, Daniel; Nilsson, Markus
2016-01-01
This work describes a new diffusion MR framework for imaging and modeling of microstructure that we call q-space trajectory imaging (QTI). The QTI framework consists of two parts: encoding and modeling. First we propose q-space trajectory encoding, which uses time-varying gradients to probe a trajectory in q-space, in contrast to traditional pulsed field gradient sequences that attempt to probe a point in q-space. Then we propose a microstructure model, the diffusion tensor distribution (DTD) model, which takes advantage of additional information provided by QTI to estimate a distributional model over diffusion tensors. We show that the QTI framework enables microstructure modeling that is not possible with the traditional pulsed gradient encoding as introduced by Stejskal and Tanner. In our analysis of QTI, we find that the well-known scalar b-value naturally extends to a tensor-valued entity, i.e., a diffusion measurement tensor, which we call the b-tensor. We show that b-tensors of rank 2 or 3 enable estimation of the mean and covariance of the DTD model in terms of a second order tensor (the diffusion tensor) and a fourth order tensor. The QTI framework has been designed to improve discrimination of the sizes, shapes, and orientations of diffusion microenvironments within tissue. We derive rotationally invariant scalar quantities describing intuitive microstructural features including size, shape, and orientation coherence measures. To demonstrate the feasibility of QTI on a clinical scanner, we performed a small pilot study comparing a group of five healthy controls with five patients with schizophrenia. The parameter maps derived from QTI were compared between the groups, and 9 out of the 14 parameters investigated showed differences between groups. The ability to measure and model the distribution of diffusion tensors, rather than a quantity that has already been averaged within a voxel, has the potential to provide a powerful paradigm for the study of complex tissue architecture. PMID:26923372
Sempere, Angel Perez; Vera-Lopez, Vanesa; Gimenez-Martinez, Juana; Ruiz-Beato, Elena; Cuervo, Jesús; Maurino, Jorge
2017-01-01
Purpose Multidimensional unfolding is a multivariate method to assess preferences using a small sample size, a geometric model locating individuals and alternatives as points in a joint space. The objective was to evaluate relapsing–remitting multiple sclerosis (RRMS) patient preferences toward key disease-modifying therapy (DMT) attributes using multidimensional unfolding. Patients and methods A cross-sectional pilot study in RRMS patients was conducted. Drug attributes included relapse prevention, disease progression prevention, side-effect risk and route and schedule of administration. Assessment of preferences was performed through a five-card game. Patients were asked to value attributes from 1 (most preferred) to 5 (least preferred). Results A total of 37 patients were included; the mean age was 38.6 years, and 78.4% were female. Disease progression prevention was the most important factor (51.4%), followed by relapse prevention (40.5%). The frequency of administration had the lowest preference rating for 56.8% of patients. Finally, 19.6% valued the side-effect risk attribute as having low/very low importance. Conclusion Patients’ perspective for DMT attributes may provide valuable information to facilitate shared decision-making. Efficacy attributes were the most important drug characteristics for RRMS patients. Multidimensional unfolding seems to be a feasible approach to assess preferences in multiple sclerosis patients. Further elicitation studies using multidimensional unfolding with other stated choice methods are necessary to confirm these findings. PMID:28615928
A cluster pattern algorithm for the analysis of multiparametric cell assays.
Kaufman, Menachem; Bloch, David; Zurgil, Naomi; Shafran, Yana; Deutsch, Mordechai
2005-09-01
The issue of multiparametric analysis of complex single cell assays of both static and flow cytometry (SC and FC, respectively) has become common in recent years. In such assays, the analysis of changes, applying common statistical parameters and tests, often fails to detect significant differences between the investigated samples. The cluster pattern similarity (CPS) measure between two sets of gated clusters is based on computing the difference between their density distribution functions' set points. The CPS was applied for the discrimination between two observations in a four-dimensional parameter space. The similarity coefficient (r) ranges from 0 (perfect similarity) to 1 (dissimilar). Three CPS validation tests were carried out: on the same stock samples of fluorescent beads, yielding very low r's (0, 0.066); and on two cell models: mitogenic stimulation of peripheral blood mononuclear cells (PBMC), and apoptosis induction in the Jurkat T cell line by H2O2. In both of the latter cases, r indicated similarity (r < 0.23) within the same group, and dissimilarity (r > 0.48) otherwise. This classification and algorithm approach offers a measure of similarity between samples. It relies on the multidimensional pattern of the sample parameters. The algorithm compensates for environmental drifts in this apparatus and assay; it may also be applied to more than four dimensions.
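A simplified sketch of comparing two samples through their multidimensional density distributions: a histogram-based dissimilarity scaled to [0, 1], with 0 meaning identical patterns. This is an illustrative stand-in using synthetic 4-D events, not the exact CPS definition:

import numpy as np

rng = np.random.default_rng(5)

def density_on_grid(events, bins=8):
    # Normalised histogram density of a multiparametric cell sample (here 4-D),
    # evaluated on a common grid over the unit hypercube.
    hist, _ = np.histogramdd(events, bins=bins, range=[(0, 1)] * events.shape[1])
    return hist / hist.sum()

def pattern_dissimilarity(sample1, sample2, bins=8):
    # Half the absolute difference between the two density estimates:
    # 0 = identical patterns, 1 = completely disjoint patterns.
    p = density_on_grid(sample1, bins)
    q = density_on_grid(sample2, bins)
    return 0.5 * np.abs(p - q).sum()

# Two samples drawn from the same 4-D distribution vs. one with a shifted cluster.
same_a = rng.beta(2, 5, (5000, 4))
same_b = rng.beta(2, 5, (5000, 4))
shifted = rng.beta(5, 2, (5000, 4))

print("same population:   r =", round(pattern_dissimilarity(same_a, same_b), 3))
print("different pattern: r =", round(pattern_dissimilarity(same_a, shifted), 3))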
A self-adaptive-grid method with application to airfoil flow
NASA Technical Reports Server (NTRS)
Nakahashi, K.; Deiwert, G. S.
1985-01-01
A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Yuling; Liu, Yue, E-mail: Yueqiang.Liu@ccfe.ac.uk, E-mail: liuyue@dlut.edu.cn; Liu, Chao
2016-01-15
A dispersion relation is derived for the stability of the resistive wall mode (RWM), which includes both the resistive layer damping physics and the toroidal precession drift resonance damping from energetic ions in tokamak plasmas. The dispersion relation is numerically solved for a model plasma, for the purpose of systematic investigation of the RWM stability in multi-dimensional plasma parameter space including the plasma resistivity, the radial location of the resistive wall, as well as the toroidal flow velocity. It is found that the toroidal favorable average curvature in the resistive layer contributes a significant stabilization of the RWM. This stabilization is further enhanced by adding the drift kinetic contribution from energetic ions. Furthermore, two traditionally assumed inner layer models are considered and compared in the dispersion relation, resulting in different predictions for the stability of the RWM.
Zeno subspace in quantum-walk dynamics
NASA Astrophysics Data System (ADS)
Chandrashekar, C. M.
2010-11-01
We investigate discrete-time quantum-walk evolution under the influence of periodic measurements in position subspace. The undisturbed survival probability of the particle at the position subspace P(0, t) is compared with the survival probability after frequent (n) measurements at interval τ = t/n, P(0, τ)^n. We show that P(0, τ)^n > P(0, t) leads to the quantum Zeno effect in position subspace when a parameter θ in the quantum coin operations and the frequency of measurements are greater than the critical values, θ > θ_c and n > n_c. This Zeno effect in the subspace preserves the dynamics in the coin Hilbert space of the walk dynamics and has the potential to play a significant role in quantum tasks such as preserving the quantum state of the particle at any particular position, and to understand the Zeno dynamics in a multidimensional system that is highly transient in nature.
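A minimal numerical sketch of the comparison described above, using a one-dimensional coined walk with a θ-parameterised coin; the coin matrix, initial state and parameter values are common textbook choices rather than the paper's, and whether P(0, τ)^n exceeds P(0, t) depends on θ, τ and n as stated in the abstract:

import numpy as np

def walk_origin_probability(theta, steps, n_positions=201):
    # One-dimensional discrete-time coined quantum walk started at the origin.
    # State: amplitudes psi[position, coin] with coin basis {left, right}.
    centre = n_positions // 2
    psi = np.zeros((n_positions, 2), dtype=complex)
    psi[centre] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric initial coin state

    c, s = np.cos(theta), np.sin(theta)
    coin = np.array([[c, s], [s, -c]])                # theta-parameterised coin

    for _ in range(steps):
        psi = psi @ coin.T                            # apply the coin at every site
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]                  # coin component 0 moves left
        shifted[1:, 1] = psi[:-1, 1]                  # coin component 1 moves right
        psi = shifted

    return float(np.sum(np.abs(psi[centre]) ** 2))    # probability of being at x = 0

theta = np.pi / 3        # coin parameter
t, n = 24, 8             # total time and number of measurements
tau = t // n

p_undisturbed = walk_origin_probability(theta, t)        # P(0, t)
p_measured = walk_origin_probability(theta, tau) ** n    # P(0, tau)^n

print("P(0,t)     =", round(p_undisturbed, 4))
print("P(0,tau)^n =", round(p_measured, 4))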
Scientific Visualization of Radio Astronomy Data using Gesture Interaction
NASA Astrophysics Data System (ADS)
Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.
2015-09-01
MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.
Numerical simulations of high-energy flows in accreting magnetic white dwarfs
NASA Astrophysics Data System (ADS)
Van Box Som, Lucile; Falize, É.; Bonnet-Bidaud, J.-M.; Mouchet, M.; Busschaert, C.; Ciardi, A.
2018-01-01
Some polars show quasi-periodic oscillations (QPOs) in their optical light curves that have been interpreted as the result of shock oscillations driven by the cooling instability. Although numerical simulations can recover this physics, they wrongly predict QPOs in the X-ray luminosity and have also failed to reproduce the observed frequencies, at least for the limited range of parameters explored so far. Given the uncertainties on the observed polar parameters, it is still unclear whether simulations can reproduce the observations. The aim of this work is to study QPOs covering all relevant polars showing QPOs. We perform numerical simulations including gravity, cyclotron and bremsstrahlung radiative losses, for a wide range of polar parameters, and compare our results with the astronomical data using synthetic X-ray and optical luminosities. We show that shock oscillations are the result of complex shock dynamics triggered by the interplay of two radiative instabilities. The secondary shock forms at the acoustic horizon in the post-shock region in agreement with our estimates from steady-state solutions. We also demonstrate that the secondary shock is essential to sustain the accretion shock oscillations at the average height predicted by our steady-state accretion model. Finally, in spite of the large explored parameter space, matching the observed QPO parameters requires a combination of parameters inconsistent with the observed ones. This difficulty highlights the limits of one-dimensional simulations, suggesting that multi-dimensional effects are needed to understand the non-linear dynamics of accretion columns in polars and the origins of QPOs.
Entropy production due to gravitational-wave viscosity in a Kaluza-Klein inflationary universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomita, K.; Ishihara, H.
1985-10-15
The role of viscosity due to the transport of gravitational radiation in a Kaluza-Klein multidimensional universe is considered, and it is shown that vast entropy may be produced by it owing to inflation of the external (ordinary) space and collapse of the internal (compact) space. The inflation and collapse factors necessary for increasing the total entropy by a factor of approximately 10^88 are derived.
Deuterium Abundance in Consciousness and Current Cosmology
NASA Astrophysics Data System (ADS)
Rauscher, Elizabeth A.
We utilize the deuterium-hydrogen abundances and their role in setting limits on the mass and other conditions of cosmogenesis and cosmological evolution. We calculate the dependence of a set of physical variables such as density, temperature, energy, mass, entropy and other physical parameters through the evolution of the universe under the Schwarzschild conditions as a function of time, from early epochs to the present. Reconciliation with the 3°K background and the missing mass is made. We first examine the Schwarzschild condition; second, the geometrical constraints of a multidimensional Cartesian space on closed cosmologies; and third, we consider the cosmogenesis and evolution of the universe in a multidimensional Cartesian space obeying the Schwarzschild condition. Implications of this model for matter creation are made. We also examine experimental evidence for closed versus open cosmologies, such as x-ray detection of the "missing mass" density. The interstellar deuterium abundance, along with the value of the Hubble constant, sets a general criterion on the value of the curvature constant, k. Once the value of the Hubble constant H is determined, the deuterium abundance sets stringent restrictions on the value of the curvature constant k, and a detailed discussion is presented. The experimental evidence for the determination of H and the primary set of coupled equations to determine the D abundance are given. The value of k for an open, closed, or flat universe is discussed in terms of the D abundance, which affects the interpretation of the Schwarzschild, black hole universe. We determine cosmological solutions to Einstein's field equations obeying the Schwarzschild condition. With this model, we can form a reconciliation of the black hole, from galactic to cosmological scale. Continuous creation occurs at the dynamic black hole plasma field. We term this new model the multiple big bang or "little whimper" model.
Information gains from cosmological probes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grandis, S.; Seehars, S.; Refregier, A.
In light of the growing number of cosmological observations, it is important to develop versatile tools to quantify the constraining power and consistency of cosmological probes. Originally motivated from information theory, we use the relative entropy to compute the information gained by Bayesian updates in units of bits. This measure quantifies both the improvement in precision and the 'surprise', i.e. the tension arising from shifts in central values. Our starting point is a WMAP9 prior which we update with observations of the distance ladder, supernovae (SNe), baryon acoustic oscillations (BAO), and weak lensing as well as the 2015 Planck release. We consider the parameters of the flat ΛCDM concordance model and some of its extensions which include curvature and the Dark Energy equation of state parameter w. We find that, relative to WMAP9 and within these model spaces, the probes that have provided the greatest gains are Planck (10 bits), followed by BAO surveys (5.1 bits) and SNe experiments (3.1 bits). The other cosmological probes, including weak lensing (1.7 bits) and H_0 measures (1.7 bits), have contributed information but at a lower level. Furthermore, we do not find any significant surprise when updating the constraints of WMAP9 with any of the other experiments, meaning that they are consistent with WMAP9. However, when we choose Planck15 as the prior, we find that, accounting for the full multi-dimensionality of the parameter space, the weak lensing measurements of CFHTLenS produce a large surprise of 4.4 bits which is statistically significant at the 8σ level. We discuss how the relative entropy provides a versatile and robust framework to compare cosmological probes in the context of current and future surveys.
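For Gaussian-approximated posteriors, the relative entropy in bits has a closed form; the sketch below uses made-up two-parameter means and covariances purely to illustrate the calculation and the conversion from nats to bits:

import numpy as np

def gaussian_relative_entropy_bits(mu_prior, cov_prior, mu_post, cov_post):
    # Kullback-Leibler divergence D(posterior || prior) for two multivariate
    # Gaussians, converted from nats to bits. This captures both the precision
    # gain (covariance shrinkage) and the "surprise" from shifted central values.
    d = len(mu_prior)
    inv_prior = np.linalg.inv(cov_prior)
    diff = np.asarray(mu_post) - np.asarray(mu_prior)
    nats = 0.5 * (np.trace(inv_prior @ cov_post)
                  + diff @ inv_prior @ diff
                  - d
                  + np.log(np.linalg.det(cov_prior) / np.linalg.det(cov_post)))
    return nats / np.log(2.0)

# Illustrative two-parameter example with invented values:
mu_prior = [0.30, 0.85]
cov_prior = np.diag([0.02**2, 0.03**2])
mu_post = [0.31, 0.83]
cov_post = np.diag([0.01**2, 0.015**2])

gain = gaussian_relative_entropy_bits(mu_prior, cov_prior, mu_post, cov_post)
print("information gain:", round(gain, 2), "bits")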
Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A
2009-06-01
In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
Big Data Analytics for Prostate Radiotherapy.
Coates, James; Souhami, Luis; El Naqa, Issam
2016-01-01
Radiation therapy is a first-line treatment option for localized prostate cancer, and radiation-induced normal tissue damage is often the main limiting factor for modern radiotherapy regimens. Conversely, under-dosing of target volumes in an attempt to spare adjacent healthy tissues limits the likelihood of achieving local, long-term control. Thus, the ability to generate personalized data-driven risk profiles for radiotherapy outcomes would provide valuable prognostic information to help guide both clinicians and patients alike. Big data applied to radiation oncology promises to deliver better understanding of outcomes by harvesting and integrating heterogeneous data types, including patient-specific clinical parameters, treatment-related dose-volume metrics, and biological risk factors. When taken together, such variables make up the basis for a multi-dimensional space (the "RadoncSpace") in which the presented modeling techniques search in order to identify significant predictors. Herein, we review outcome modeling and big data-mining techniques for both tumor control and radiotherapy-induced normal tissue effects. We apply many of the presented modeling approaches to a cohort of hypofractionated prostate cancer patients, taking into account different data types and a large heterogeneous mix of physical and biological parameters. Cross-validation techniques are also reviewed for the refinement of the proposed framework architecture and for checking individual model performance. We conclude by considering advanced modeling techniques that borrow concepts from big data analytics, such as machine learning and artificial intelligence, before discussing the potential future impact of systems radiobiology approaches.
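A small sketch of a cross-validated outcome model of the kind reviewed above, fitted on a synthetic cohort that mixes clinical, dose-volume and biological features; the feature names, effect sizes and model choice are illustrative assumptions, not the actual RadoncSpace variables:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)

# Synthetic cohort: each row mixes clinical, dose-volume and biological features
# (age, mean rectal dose, V70, a genomic risk score); the outcome is late toxicity.
n = 300
X = np.column_stack([
    rng.normal(68, 8, n),        # age (years)
    rng.normal(45, 10, n),       # mean rectal dose (Gy)
    rng.uniform(0, 25, n),       # V70 (% volume receiving 70 Gy)
    rng.normal(0, 1, n),         # biological risk score
])
logit = -6.0 + 0.03 * X[:, 0] + 0.05 * X[:, 1] + 0.08 * X[:, 2] + 0.7 * X[:, 3]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Cross-validated outcome model: standardise features, fit a regularised logistic
# model, and report the AUC averaged over folds to check generalisation.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.2f +/- %.2f" % (auc.mean(), auc.std()))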
Li, Ying-Jun; Yang, Cong; Wang, Gui-Cong; Zhang, Hui; Cui, Huan-Yong; Zhang, Yong-Liang
2017-09-01
This paper presents a novel integrated piezoelectric six-dimensional force sensor which can realize dynamic measurement of multi-dimensional space loads. Firstly, the composition of the sensor, the spatial layout of the force-sensitive components, and the measurement principle are analyzed and designed. Theoretical analysis indicates no inter-dimensional interference in the piezoelectric six-dimensional force sensor. Based on the actual working principle and deformation compatibility, this paper deduces the parallel load-sharing principle of the piezoelectric six-dimensional force sensor. The main factors that affect the load-sharing ratio are obtained. The finite element model of the piezoelectric six-dimensional force sensor is established. In order to verify the load-sharing principle of the sensor, a load-sharing test device for the piezoelectric force sensor is designed and fabricated, and the load-sharing experimental platform is set up. The experimental results are in accordance with the theoretical analysis and simulation results. The experiments show that multi-dimensional and heavy force measurement can be realized by the parallel arrangement of the load-sharing ring and the force-sensitive element in the novel integrated piezoelectric six-dimensional force sensor. The ideal load-sharing effect of the sensor can be achieved with appropriate size parameters. This paper provides an important guide for the design of force-measuring devices based on the load-sharing mode. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Structural Optimization of a Force Balance Using a Computational Experiment Design
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2002-01-01
This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, are undetected. The proposed method combines Modern Design of Experiments techniques to direct the exploration of the multi-dimensional design space, and a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives providing a systematic foundation for advancements in structural design.
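A toy illustration of replacing one-factor-at-a-time studies with a designed computational experiment: a small factorial design over two hypothetical balance design variables, a stand-in for the finite element response, and a quadratic response-surface fit. All names and constants below are invented for illustration:

import numpy as np
from itertools import product

def simulated_fea_sensitivity(web_thickness, flexure_length):
    # Stand-in for a finite element run: returns a notional measurement sensitivity
    # for a candidate balance section. The form and constants are illustrative.
    return (1.0 / web_thickness) * np.exp(-0.5 * (flexure_length - 22.0) ** 2 / 40.0)

# Three-level factorial design over two design variables, in the spirit of a
# designed computational experiment rather than one-factor-at-a-time studies.
thickness_levels = [2.0, 3.0, 4.0]      # mm
length_levels = [15.0, 22.0, 29.0]      # mm
runs = list(product(thickness_levels, length_levels))
y = np.array([simulated_fea_sensitivity(t, l) for t, l in runs])

# Fit a quadratic response surface y ~ 1 + t + l + t*l + t^2 + l^2 by least squares.
T = np.array([t for t, _ in runs])
L = np.array([l for _, l in runs])
A = np.column_stack([np.ones_like(T), T, L, T * L, T**2, L**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

best = max(runs, key=lambda r: simulated_fea_sensitivity(*r))
print("response-surface coefficients:", np.round(coef, 4))
print("best design point among runs:", best)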
Curvilinear component analysis: a self-organizing neural network for nonlinear mapping of data sets.
Demartines, P; Herault, J
1997-01-01
We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.
Mission Analysis and Design for Space Based Inter-Satellite Laser Power Beaming
2010-03-01
[Front-matter excerpt only: section listings including "Darwin Results", "Obscuration Analysis", "Appendix B. Additional Multi-Dimensional Darwin Plots from ModelCenter", "Appendix C. STK Access Report", and a "Darwin Data Explorer Window Showing Optimized Results in Tabular Form".]
A New Approach to Monte Carlo Simulations in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.
2002-08-01
Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd order transitions and to metastability near 1st order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E64, 056101-1 (2001).
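A compact sketch of the random walk in energy space [2] for a small two-dimensional Ising model; the lattice size, sweep lengths and the simplified modification-factor schedule (no histogram-flatness check) are illustrative choices, not a production implementation:

import numpy as np

rng = np.random.default_rng(7)

L = 8                                   # linear size of the 2-D Ising lattice
spins = rng.choice([-1, 1], size=(L, L))

def total_energy(s):
    # Nearest-neighbour Ising energy with periodic boundaries (J = 1).
    return -np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))

# Energy levels run from -2*L*L to +2*L*L in steps of 4; index them on a grid.
energies = np.arange(-2 * L * L, 2 * L * L + 1, 4)
index = {int(e): i for i, e in enumerate(energies)}
log_g = np.zeros(len(energies))         # running estimate of ln(density of states)
hist = np.zeros(len(energies))
log_f = 1.0                             # modification factor ln(f), reduced over time

E = int(total_energy(spins))
while log_f > 1e-4:
    for _ in range(10000):
        i, j = rng.integers(L, size=2)
        # Energy change from flipping spin (i, j) with periodic neighbours.
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = int(2 * spins[i, j] * nb)
        new_E = E + dE
        # Accept with probability min(1, g(E)/g(E_new)): a random walk in energy.
        if np.log(rng.random()) < log_g[index[E]] - log_g[index[new_E]]:
            spins[i, j] *= -1
            E = new_E
        log_g[index[E]] += log_f
        hist[index[E]] += 1
    # Simplified schedule: halve ln(f) after each block (a real implementation
    # would first require the visit histogram to be sufficiently flat).
    hist[:] = 0
    log_f /= 2.0

print("relative ln g(E) at the ground state:", round(log_g[0] - log_g.min(), 2))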
Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.
2013-01-01
In this paper, opposition-based harmony search (OHS) has been applied to the optimal design of linear-phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and the opposition-based approach is applied. During initialization, a randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as the a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of the search space. Low-pass, high-pass, band-pass, and band-stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems.
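The opposition-based initialization step can be sketched as follows. This is not the authors' code; the quadratic objective is a stand-in for the FIR filter error fitness that the paper actually minimizes, and the memory size and bounds are arbitrary.

```python
# Illustrative sketch of opposition-based initialization: random candidates and
# their "opposites" are both evaluated, and the fitter of each pair seeds the
# harmony memory. The fitness function is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):                       # placeholder objective (lower is better)
    return float(np.sum(x ** 2))

def opposition_based_init(hm_size, dim, lo, hi):
    pop = rng.uniform(lo, hi, size=(hm_size, dim))
    opp = lo + hi - pop               # opposite points within the search box
    both = np.vstack([pop, opp])
    scores = np.array([fitness(x) for x in both])
    best = np.argsort(scores)[:hm_size]
    return both[best], scores[best]

harmony_memory, harmony_fitness = opposition_based_init(hm_size=10, dim=8, lo=-1.0, hi=1.0)
print(harmony_memory.shape, harmony_fitness.min())
```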
Initiating heavy-atom-based phasing by multi-dimensional molecular replacement.
Pedersen, Bjørn Panyella; Gourdon, Pontus; Liu, Xiangyu; Karlsen, Jesper Lykkegaard; Nissen, Poul
2016-03-01
To obtain an electron-density map from a macromolecular crystal the phase problem needs to be solved, which often involves the use of heavy-atom derivative crystals and concomitant heavy-atom substructure determination. This is typically performed by dual-space methods, direct methods or Patterson-based approaches, which however may fail when only poorly diffracting derivative crystals are available. This is often the case for, for example, membrane proteins. Here, an approach for heavy-atom site identification based on a molecular-replacement parameter matrix (MRPM) is presented. It involves an n-dimensional search to test a wide spectrum of molecular-replacement parameters, such as different data sets and search models with different conformations. Results are scored by the ability to identify heavy-atom positions from anomalous difference Fourier maps. The strategy was successfully applied in the determination of a membrane-protein structure, the copper-transporting P-type ATPase CopA, when other methods had failed to determine the heavy-atom substructure. MRPM is well suited to proteins undergoing large conformational changes where multiple search models should be considered, and it enables the identification of weak but correct molecular-replacement solutions with maximum contrast to prime experimental phasing efforts.
Greene, Samuel M; Batista, Victor S
2017-09-12
We introduce the "tensor-train split-operator Fourier transform" (TT-SOFT) method for simulations of multidimensional nonadiabatic quantum dynamics. TT-SOFT is essentially the grid-based SOFT method implemented in dynamically adaptive tensor-train representations. In the same spirit as all matrix product states, the tensor-train format enables the representation, propagation, and computation of observables of multidimensional wave functions in terms of the grid-based wavepacket tensor components, bypassing the need to compute the wave function in its full-rank tensor product grid space. We demonstrate the accuracy and efficiency of the TT-SOFT method as applied to propagation of 24-dimensional wave packets, describing the S1/S2 interconversion dynamics of pyrazine after UV photoexcitation to the S2 state. Our results show that the TT-SOFT method is a powerful computational approach for simulations of quantum dynamics of polyatomic systems since it avoids the exponential scaling problem of full-rank grid-based representations.
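For readers unfamiliar with the underlying SOFT scheme, a minimal one-dimensional sketch (with an illustrative harmonic potential and hbar = m = 1, not the pyrazine model of the paper) is shown below; TT-SOFT applies the same Trotter-split propagation with the wave function stored in tensor-train form.

```python
# One-dimensional split-operator Fourier transform (SOFT) propagation sketch.
# Units with hbar = m = 1; potential and initial wave packet are illustrative.
import numpy as np

n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
dt = 0.01

V = 0.5 * x ** 2                                        # harmonic potential
psi = (2.0 / np.pi) ** 0.25 * np.exp(-(x - 1.0) ** 2)   # displaced Gaussian

half_v = np.exp(-0.5j * V * dt)                         # exp(-i V dt / 2)
kinetic = np.exp(-0.5j * k ** 2 * dt)                   # exp(-i k^2 dt / 2), m = 1

for _ in range(1000):                                   # Trotter-split time stepping
    psi = half_v * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = half_v * psi

print("norm:", np.sum(np.abs(psi) ** 2) * dx)           # should remain ~1
```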
Robustness of multidimensional Brownian ratchets as directed transport mechanisms.
González-Candela, Ernesto; Romero-Rochín, Víctor; Del Río, Fernando
2011-08-07
Brownian ratchets have recently been considered as models to describe the ability of certain systems to locate very specific states in multidimensional configuration spaces. This directional process has particularly been proposed as an alternative explanation for the protein folding problem, in which the polypeptide is driven toward the native state by a multidimensional Brownian ratchet. Recognizing the relevance of robustness in biological systems, in this work we analyze such a property of Brownian ratchets by pushing to the limits all the properties considered essential to produce directed transport. Based on the results presented here, we can state that Brownian ratchets are able to deliver current and locate funnel structures under a wide range of conditions. As a result, they represent a simple model that solves Levinthal's paradox with great robustness and flexibility and without requiring any ad hoc biased transition probability. The behavior of Brownian ratchets shown in this article considerably enhances the plausibility of the model for at least part of the structural mechanism behind the protein folding process.
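A one-dimensional flashing-ratchet toy model, sketched below with arbitrary parameter values, shows the basic directed-transport effect that the paper stress-tests in many dimensions.

```python
# Minimal 1D flashing-ratchet sketch (illustrative only; the paper studies
# multidimensional ratchets). An asymmetric sawtooth potential is switched on
# and off periodically; overdamped Brownian particles acquire a net drift.
import numpy as np

rng = np.random.default_rng(1)
period, asym, V0 = 1.0, 0.2, 5.0          # sawtooth period, asymmetry, barrier height
D, dt, t_on = 0.1, 1e-3, 0.5              # diffusion constant, time step, flash half-period

def force(x):
    """Force from an asymmetric sawtooth potential of unit period."""
    s = np.mod(x, period)
    return np.where(s < asym, -V0 / asym, V0 / (period - asym))

x = np.zeros(1000)                        # ensemble of particles
t = 0.0
for _ in range(100000):                   # 100 time units, i.e. 100 flash cycles
    on = (t % (2 * t_on)) < t_on          # potential flashes on and off
    drift = force(x) * dt if on else 0.0
    x += drift + np.sqrt(2 * D * dt) * rng.standard_normal(x.size)
    t += dt

print("mean displacement:", x.mean())     # nonzero on average: directed transport
```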
Multistability inspired by the oblique, pennate architectures of skeletal muscle
NASA Astrophysics Data System (ADS)
Kidambi, Narayanan; Harne, Ryan L.; Wang, K. W.
2017-04-01
Skeletal muscle mechanics exhibit a range of noteworthy characteristics, providing great inspiration for the development of advanced structural and material systems. These characteristics arise from the synergies demonstrated between muscle's constituents across the various length scales. From the macroscale oblique orientation of muscle fibers to the microscale lattice spacing of sarcomeres, muscle takes advantage of geometries and multidimensionality for force generation or length change along a desired axis. Inspired by these behaviors, this research investigates how the incorporation of multidimensionality afforded by oblique, pennate architectures can uncover novel mechanics in structures exhibiting multistability. Experimental investigation of these mechanics is undertaken using specimens of molded silicone rubber with patterned voids, and results reveal tailorable mono-, bi-, and multi-stability under axial displacements by modulation of transverse confinement. If the specimen is considered as an architected material, these results show its ability to generate intriguing, non-monotonic shear stresses. The outcomes would foster the development of novel, advanced mechanical metamaterials that exploit pennation and multidimensionality.
NASA Astrophysics Data System (ADS)
Su, Zhi-Yuan; Wu, Tzuyin; Yang, Po-Hua; Wang, Yeng-Tseng
2008-04-01
The heartbeat rate signal provides an invaluable means of assessing the sympathetic-parasympathetic balance of the human autonomic nervous system and thus represents an ideal diagnostic mechanism for detecting a variety of disorders such as epilepsy, cardiac disease and so forth. The current study analyses the dynamics of the heartbeat rate signal of known epilepsy sufferers in order to obtain a detailed understanding of the heart rate pattern during a seizure event. In the proposed approach, the ECG signals are converted into heartbeat rate signals and the embedology theorem is then used to construct the corresponding multidimensional phase space. The dynamics of the heartbeat rate signal are then analyzed before, during and after an epileptic seizure by examining the maximum Lyapunov exponent and the correlation dimension of the attractors in the reconstructed phase space. In general, the results reveal that the heartbeat rate signal transits from an aperiodic, highly-complex behaviour before an epileptic seizure to a low dimensional chaotic motion during the seizure event. Following the seizure, the signal trajectories return to a highly-complex state, and the complex signal patterns associated with normal physiological conditions reappear.
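The phase-space reconstruction step can be sketched as a simple delay embedding; the embedding dimension and delay below are placeholders rather than the values used in the study, and the synthetic series merely stands in for a heartbeat-rate signal.

```python
# Sketch of time-delay (Takens-style) embedding used to reconstruct a
# multidimensional phase space from a scalar time series.
import numpy as np

def delay_embed(signal, dim, tau):
    """Return the matrix of delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    signal = np.asarray(signal, dtype=float)
    n = signal.size - (dim - 1) * tau
    return np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])

# Example with a synthetic "heartbeat rate" series (a noisy oscillation).
t = np.arange(5000) * 0.01
rr = 1.0 + 0.1 * np.sin(2 * np.pi * 0.3 * t) \
     + 0.02 * np.random.default_rng(0).standard_normal(t.size)
X = delay_embed(rr, dim=3, tau=25)      # points in the reconstructed phase space
print(X.shape)
```

Quantities such as the maximum Lyapunov exponent or the correlation dimension would then be estimated from the point cloud X.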
Data Visualization in Information Retrieval and Data Mining (SIG VIS).
ERIC Educational Resources Information Center
Efthimiadis, Efthimis
2000-01-01
Presents abstracts that discuss using data visualization for information retrieval and data mining, including immersive information space and spatial metaphors; spatial data using multi-dimensional matrices with maps; TREC (Text Retrieval Conference) experiments; users' information needs in cartographic information retrieval; and users' relevance…
Defining and Differentiating the Makerspace
ERIC Educational Resources Information Center
Dousay, Tonia A.
2017-01-01
Many resources now punctuate the maker movement landscape. However, some schools and communities still struggle to understand this burgeoning movement. How do we define these spaces and differentiate them from previous labs and shops? Through a multidimensional framework, stakeholders should consider how the structure, access, staffing, and tools…
A Computational Model of Multidimensional Shape
Liu, Xiuwen; Shi, Yonggang; Dinov, Ivo
2010-01-01
We develop a computational model of shape that extends existing Riemannian models of curves to multidimensional objects of general topological type. We construct shape spaces equipped with geodesic metrics that measure how costly it is to interpolate two shapes through elastic deformations. The model employs a representation of shape based on the discrete exterior derivative of parametrizations over a finite simplicial complex. We develop algorithms to calculate geodesics and geodesic distances, as well as tools to quantify local shape similarities and contrasts, thus obtaining a formulation that accounts for regional differences and integrates them into a global measure of dissimilarity. The Riemannian shape spaces provide a common framework to treat numerous problems such as the statistical modeling of shapes, the comparison of shapes associated with different individuals or groups, and modeling and simulation of shape dynamics. We give multiple examples of geodesic interpolations and illustrations of the use of the models in brain mapping, particularly, the analysis of anatomical variation based on neuroimaging data.
A multidimensional analysis of the epistemic origins of nursing theories, models, and frameworks.
Beckstead, Jason W; Beckstead, Laura Grace
2006-01-01
The purpose of this article is to introduce our notion of epistemic space and to demonstrate its utility for understanding the origins and trajectories of nursing theory in the 20th century using multidimensional scaling (MDS). A literature review was conducted on primary and secondary sources written by and about 20 nurse theorists to investigate whether or not they cited 129 different scholars in the fields of anthropology, biology, nursing, philosophy, psychology, and sociology. Seventy-four scholars were identified as having been cited by at least two nurse theorists (319 citations total). Proximity scores, quantifying the similarity among nurse theorists based on proportions of shared citations, were calculated and analyzed using MDS. The emergent model of epistemic space that accommodated these similarities among nurse theorists revealed the systematic influence of scholars from various fields, notably psychology, biology, and philosophy. We believe that this schema and resulting taxonomy will prove useful for furthering our understanding of the relationships among nursing theories and theories in other fields of science.
The Structure of Integral Dimensions: Contrasting Topological and Cartesian Representations
ERIC Educational Resources Information Center
Jones, Matt; Goldstone, Robert L.
2013-01-01
Diverse evidence shows that perceptually integral dimensions, such as those composing color, are represented holistically. However, the nature of these holistic representations is poorly understood. Extant theories, such as those founded on multidimensional scaling or general recognition theory, model integral stimulus spaces using a Cartesian…
A Re-Unification of Two Competing Models for Document Retrieval.
ERIC Educational Resources Information Center
Bodoff, David
1999-01-01
Examines query-oriented versus document-oriented information retrieval and feedback learning. Highlights include a reunification of the two approaches for probabilistic document retrieval and for vector space model (VSM) retrieval; learning in VSM and in probabilistic models; multi-dimensional scaling; and ongoing field studies. (LRW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braumann, Andreas; Kraft, Markus, E-mail: mk306@cam.ac.u; Wagner, Wolfgang
2010-10-01
This paper is concerned with computational aspects of a multidimensional population balance model of a wet granulation process. Wet granulation is a manufacturing method to form composite particles, granules, from small particles and binders. A detailed numerical study of a stochastic particle algorithm for the solution of a five-dimensional population balance model for wet granulation is presented. Each particle consists of two types of solids (containing pores) and of external and internal liquid (located in the pores). Several transformations of particles are considered, including coalescence, compaction and breakage. A convergence study is performed with respect to the parameter that determines the number of numerical particles. Averaged properties of the system are computed. In addition, the ensemble is subdivided into practically relevant size classes and analysed with respect to the amount of mass and the particle porosity in each class. These results illustrate the importance of the multidimensional approach. Finally, the kinetic equation corresponding to the stochastic model is discussed.
NASA Astrophysics Data System (ADS)
Maack Rasmussen, Thorkild; Brethes, Anaïs; Guarnieri, Pierpaolo; Bauer, Tobias
2017-04-01
Data from a high-resolution airborne SkyTEM time-domain electromagnetic survey conducted in central East Greenland were analysed. An analysis based on utilization of a Self-Organizing Map procedure for response curve characterization and analyses based on data inversion and modelling are presented. The survey was flown in 2013 along the eastern margin of the Jameson Land basin with the purpose of base metal exploration and with sulphide mineralization as target. The survey area comprises crystalline basement to the east and layered Early Triassic to Jurassic sediments to the west. The layers are dipping a few degrees towards the west. The Triassic sequence is 1 to 2 km thick and mostly of continental origin. The fluviatile Early Triassic arkoses and conglomerates, the Upper Triassic grey limestone and black shale beds and the overlying gypsiferous sandstones and mudstones are known to host disseminated sulphides. E-W oriented lines were flown with an average terrain clearance of 30 m and a separation of 300 m. The data were initially processed and inverted by SkyTEM ApS. The conductivity models showed some conductive layers as well as induced polarization (IP) effects in the data. IP effects in TEM data reflect the relaxation of polarized charges in the ground, which can be good indicators of the presence of metallic particles. Some of these locations were drilled during the following field season but unfortunately did not reveal the presence of mineralization. The aim of this study is therefore to understand the possible causes of these IP effects. Electrical charge accumulation in the ground can be related to the presence of sulphides, oxides or graphite, or to the presence of clays or fibrous minerals. Permafrost may also cause IP effects and is then expected to be associated with a highly resistive subsurface. Several characteristics of the transient curves (IP indicators) of the SkyTEM survey were extracted and analysed using the Kohonen Self-Organizing Map (SOM) technique. SOM is a type of neural network algorithm developed for the analysis of non-linear relationships in multivariate data. The basic idea of SOM is to provide a method for easy visualization of multi-dimensional data. The SOM may be viewed as a two-dimensional grid onto which multi-dimensional input data are projected or mapped from a multi-dimensional space. The space dimension is equal to the number of analysed variables containing the geo-referenced IP indicators that characterise the transient curves. Input data that are similar or close to each other, irrespective of their geographic location, are mapped to the same or adjacent positions in the SOM. Data that belong to a particular cluster in the SOM space are afterwards mapped into geographical space. Characteristics of the IP effects can therefore be mapped and spatially compared with the geology of the area. Once the IP effects were identified and located, Cole-Cole parameters were recovered at specific locations by inversion of the airborne TEM data. An interpretation of the derived models is discussed in relation to possible causes of the observed IP effects.
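A generic SOM training loop of the kind alluded to here might look like the sketch below; the grid size, learning schedule, and stand-in data are illustrative assumptions rather than the survey's actual workflow.

```python
# Minimal self-organizing map training loop (generic sketch). Multidimensional
# IP-indicator vectors would be mapped onto a 2-D grid of nodes whose codebook
# vectors are pulled toward the data, with a shrinking neighborhood.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    rows, cols = grid
    nodes = rng.uniform(data.min(0), data.max(0), size=(rows, cols, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            d2 = np.sum((nodes - x) ** 2, axis=2)
            bmu = np.unravel_index(np.argmin(d2), d2.shape)   # best-matching unit
            # Gaussian neighborhood around the BMU on the grid
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2) / (2 * sigma ** 2))
            nodes += lr * g[..., None] * (x - nodes)
            step += 1
    return nodes

data = rng.standard_normal((500, 6))      # stand-in for 6 IP indicators per sounding
som = train_som(data)
print(som.shape)                          # (10, 10, 6) codebook grid
```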
The Influence of Dimensionality on Estimation in the Partial Credit Model.
ERIC Educational Resources Information Center
De Ayala, R. J.
1995-01-01
The effect of multidimensionality on partial credit model parameter estimation was studied with noncompensatory and compensatory data. Analysis results, consisting of root mean square error bias, Pearson product-moment correlations, standardized root mean squared differences, standardized differences between means, and descriptive statistics…
Multi-dimensional simulation package for ultrashort pulse laser-matter interactions
NASA Astrophysics Data System (ADS)
Suslova, Anastassiya; Hassanein, Ahmed
2017-10-01
Advanced simulation models have recently become a popular tool for investigating ultrashort pulse lasers (USPLs), enhancing understanding of the physics and minimizing the experimental cost of optimizing laser and target parameters for various applications. Our research interest is focused on developing the multi-dimensional simulation package FEMTO-2D to investigate USPL-matter interactions and laser-induced effects. The package is based on the solution of two heat conduction equations for the electron and lattice sub-systems, an enhanced two-temperature model (TTM). We have implemented a theoretical approach based on collision theory to define the thermal dependence of the target material's optical properties and thermodynamic parameters. Our approach allows elimination of the fitted parameters commonly used in TTM-based simulations. FEMTO-2D is used to simulate the light absorption and interactions for several metallic targets as a function of wavelength and pulse duration over a wide range of laser intensities. The package has the capability to consider different angles of incidence and polarization. It has also been used to investigate the damage threshold of gold-coated optical components, with a focus on the role of the film thickness and the substrate heat-sink effect. This work was supported by the NSF, PIRE project.
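As background, a one-dimensional explicit finite-difference version of the two-temperature model can be sketched as below; all material constants are rough, gold-like placeholders, and the scheme ignores the temperature-dependent optical properties and 2-D geometry that FEMTO-2D actually treats.

```python
# 1-D explicit two-temperature model sketch (electron + lattice subsystems).
# Constants are order-of-magnitude placeholders, not calibrated values.
import numpy as np

nz, dz = 200, 1e-9                  # 200 nm slab, 1 nm cells
dt = 1e-17                          # small step for explicit stability
ke = 315.0                          # electron thermal conductivity, W/(m K)
Ce0 = 70.0                          # electron heat capacity coefficient, J/(m^3 K^2)
Cl = 2.5e6                          # lattice heat capacity, J/(m^3 K)
G = 2.5e16                          # electron-phonon coupling, W/(m^3 K)
skin = 15e-9                        # optical penetration depth, m
fluence, tau_p = 100.0, 100e-15     # absorbed fluence (J/m^2), pulse duration

z = np.arange(nz) * dz
Te = np.full(nz, 300.0)
Tl = np.full(nz, 300.0)

def source(t):
    gauss = np.exp(-4 * np.log(2) * ((t - 2 * tau_p) / tau_p) ** 2)
    return fluence / (tau_p * skin) * gauss * np.exp(-z / skin)

t = 0.0
for _ in range(50000):              # ~0.5 ps of simulated time
    lap = np.zeros(nz)
    lap[1:-1] = (Te[2:] - 2 * Te[1:-1] + Te[:-2]) / dz ** 2
    Ce = Ce0 * Te                   # C_e proportional to T_e
    dTe = (ke * lap - G * (Te - Tl) + source(t)) / Ce
    dTl = G * (Te - Tl) / Cl
    Te += dt * dTe
    Tl += dt * dTl
    t += dt

print("peak electron T:", Te.max(), "K; peak lattice T:", Tl.max(), "K")
```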
Modeling Age-Related Differences in Immediate Memory Using SIMPLE
ERIC Educational Resources Information Center
Surprenant, Aimee M.; Neath, Ian; Brown, Gordon D. A.
2006-01-01
In the SIMPLE model (Scale Invariant Memory and Perceptual Learning), performance on memory tasks is determined by the locations of items in multidimensional space, and better performance is associated with having fewer close neighbors. Unlike most previous simulations with SIMPLE, the ones reported here used measured, rather than assumed,…
Multi-Dimensional Optimization for Cloud Based Multi-Tier Applications
ERIC Educational Resources Information Center
Jung, Gueyoung
2010-01-01
Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By being allowed to host many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these…
Discovering Middle Space: Distinctions of Sex and Gender in Resilient Leadership
ERIC Educational Resources Information Center
Christman, Dana E.; McClellan, Rhonda L.
2012-01-01
This study contrasts findings from two Delphi studies that investigated how women and men who are higher education academic administrators in educational leadership programs and colleges define and describe resiliency in their leadership. Using gender theories, both studies revealed a multidimensional gendering of leadership, a gendering more…
Visual reconciliation of alternative similarity spaces in climate modeling
J Poco; A Dasgupta; Y Wei; William Hargrove; C.R. Schwalm; D.N. Huntzinger; R Cook; E Bertini; C.T. Silva
2015-01-01
Visual data analysis often requires grouping of data objects based on their similarity. In many application domains researchers use algorithms and techniques like clustering and multidimensional scaling to extract groupings from data. While extracting these groups using a single similarity criteria is relatively straightforward, comparing alternative criteria poses...
Multi-Dimensional Perception of Parental Involvement
ERIC Educational Resources Information Center
Fisher, Yael
2016-01-01
The main purpose of this study was to define and conceptualize the term parental involvement. A questionnaire was administrated to parents (140), teachers (145), students (120) and high ranking civil servants in the Ministry of Education (30). Responses were analyzed through Smallest Space Analysis (SSA). The SSA solution among all groups rendered…
DOT National Transportation Integrated Search
2017-11-30
The objective of this project is to explore the role of visual information in determining the user's subjective valuation of multidimensional trip attributes that are relevant in decision-making, but are neglected in standard travel demand models. ...
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1988-01-01
During the period December 1, 1987 through May 31, 1988, progress was made in the following areas: construction of Multi-Dimensional Bandwidth Efficient Trellis Codes with MPSK modulation; performance analysis of Bandwidth Efficient Trellis Coded Modulation schemes; and performance analysis of Bandwidth Efficient Trellis Codes on Fading Channels.
ERIC Educational Resources Information Center
Geary, David C.
2009-01-01
Alexander, Schallert, and Reynolds's (2009/this issue) "what," "where," "who," and "when" framework situates different perspectives on learning in different places in this multidimensional space and by doing so helps us to better understand seemingly disparate approaches to learning. The framework is in need of a fifth, "why" dimension. The "why"…
"Pedagogy of Third Space": A Multidimensional Early Childhood Curriculum
ERIC Educational Resources Information Center
Gupta, Amita
2015-01-01
This paper will illustrate how philosophical and pedagogical boundaries that are defined by diverse cultures and ideologies might be navigated in the practical implementation of an early childhood curriculum in postcolonial urban India. Findings from a qualitative naturalistic inquiry indicated that a complex, multifaceted curriculum shaped by…
Knowledge and Vision in Teaching
ERIC Educational Resources Information Center
Kennedy, Mary M.
2006-01-01
The author challenges the role of knowledge in teaching by pointing out the variety of issues and concerns teachers must simultaneously address. Teachers use two strategies to manage their multidimensional space: They develop integrated habits and rules of thumb for handling situations as they arise, and they plan their lessons by envisioning them…
Measuring the Effectiveness of Educational Technology: What Are We Attempting to Measure?
ERIC Educational Resources Information Center
Jenkinson, Jodie
2009-01-01
In many academic areas, students' success depends upon their ability to envision and manipulate complex multidimensional information spaces. Fields in which students struggle with mastering these types of representations include (but are by no means limited to) mathematics, science, medicine, and engineering. There has been some educational…
Al-Nasheri, Ahmed; Muhammad, Ghulam; Alsulaiman, Mansour; Ali, Zulfiqar; Mesallam, Tamer A; Farahat, Mohamed; Malki, Khalid H; Bencherif, Mohamed A
2017-01-01
Automatic voice-pathology detection and classification systems may help clinicians to detect the existence of any voice pathologies and the type of pathology from which patients suffer in the early stages. The main aim of this paper is to investigate Multidimensional Voice Program (MDVP) parameters to automatically detect and classify the voice pathologies in multiple databases, and then to find out which parameters performed well in these two processes. Samples of the sustained vowel /a/ of normal and pathological voices were extracted from three different databases, which have three voice pathologies in common. The selected databases in this study represent three distinct languages: (1) the Arabic voice pathology database; (2) the Massachusetts Eye and Ear Infirmary database (English database); and (3) the Saarbruecken Voice Database (German database). A computerized speech lab program was used to extract MDVP parameters as features, and an acoustical analysis was performed. The Fisher discrimination ratio was applied to rank the parameters. A t test was performed to highlight any significant differences in the means of the normal and pathological samples. The experimental results demonstrate a clear difference in the performance of the MDVP parameters using these databases. The highly ranked parameters also differed from one database to another. The best accuracies were obtained by using the three highest ranked MDVP parameters arranged according to the Fisher discrimination ratio: these accuracies were 99.68%, 88.21%, and 72.53% for the Saarbruecken Voice Database, the Massachusetts Eye and Ear Infirmary database, and the Arabic voice pathology database, respectively.
Localization from near-source quasi-static electromagnetic fields
NASA Astrophysics Data System (ADS)
Mosher, J. C.
1993-09-01
A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from measurements of signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Characterization (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
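The subspace idea can be illustrated with the standard narrowband MUSIC pseudospectrum for a uniform linear array. The MEG/EEG adaptation described in the thesis replaces the steering vector below with a dipole forward-field model, so this sketch is generic rather than the thesis implementation; array size, spacing, and source angles are arbitrary.

```python
# Generic narrowband MUSIC pseudospectrum for a uniform linear array (sketch).
import numpy as np

rng = np.random.default_rng(0)
m, snapshots = 8, 400                      # sensors, time samples
d = 0.5                                    # element spacing in wavelengths
true_angles = np.deg2rad([-20.0, 35.0])

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(m) * np.sin(theta))

A = np.column_stack([steering(t) for t in true_angles])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.1 * (rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + noise

R = X @ X.conj().T / snapshots             # spatial correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, : m - 2]                   # noise subspace (2 sources assumed)

scan = np.deg2rad(np.linspace(-90, 90, 721))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in scan])
local_max = (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])
est = np.rad2deg(scan[1:-1][local_max][np.argsort(p[1:-1][local_max])[-2:]])
print("estimated angles (deg):", np.sort(est))
```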
Murakami, Tomoaki; Ueda-Arakawa, Naoko; Nishijima, Kazuaki; Uji, Akihito; Horii, Takahiro; Ogino, Ken; Yoshimura, Nagahisa
2014-03-28
To integrate parameters on spectral-domain optical coherence tomography (SD-OCT) in diabetic retinopathy (DR) based on the self-organizing map and objectively describe the macular morphologic patterns. A total of 336 consecutive eyes of 216 patients with DR for whom clear SD-OCT images were available were retrospectively reviewed. Eleven OCT parameters and the logarithm of the minimal angle of resolution (logMAR) were measured. These multidimensional data were analyzed based on the self-organizing map on which similar cases were near each other according to the degree of their similarities, followed by the objective clustering. Self-organizing maps indicated that eyes with greater retinal thickness in the central subfield had greater thicknesses in the superior and temporal subfields. Eyes with foveal serous retinal detachment (SRD) had greater thickness in the nasal or inferior subfield. Eyes with foveal cystoid spaces were arranged to the left upper corner on the two-dimensional map; eyes with foveal SRD to the left lower corner; eyes with thickened retinal parenchyma to the lower area. The following objective clustering demonstrated the unsupervised pattern recognition of macular morphologies in diabetic macular edema (DME) as well as the higher-resolution discrimination of DME per se. Multiple regression analyses showed better association of logMAR with retinal thickness in the inferior subfield in eyes with SRD and with external limiting membrane disruption in eyes with foveal cystoid spaces or thickened retinal parenchyma. The self-organizing map facilitates integrative understanding of the macular morphologic patterns and the structural/functional relationship in DR.
NASA Technical Reports Server (NTRS)
Glick, B. J.
1985-01-01
Techniques for classifying objects into groups or classes go under many different names including, most commonly, cluster analysis. Mathematically, the general problem is to find a best mapping of objects into an index set consisting of class identifiers. When an a priori grouping of objects exists, the process of deriving the classification rules from samples of classified objects is known as discrimination. When such rules are applied to objects of unknown class, the process is denoted classification. The specific problem addressed involves the group classification of a set of objects that are each associated with a series of measurements (ratio, interval, ordinal, or nominal levels of measurement). Each measurement produces one variable in a multidimensional variable space. Cluster analysis techniques are reviewed and methods for including geographic location, distance measures, and spatial pattern (distribution) as parameters in clustering are examined. For the case of patterning, measures of spatial autocorrelation are discussed in terms of the kind of data (nominal, ordinal, or interval scaled) to which they may be applied.
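One widely used spatial-autocorrelation measure for interval-scaled data, Moran's I, can be sketched as follows; the inverse-distance weights and synthetic data are illustrative choices, not a prescription from the report.

```python
# Sketch of Moran's I computed from point locations and an attribute value,
# using a simple inverse-distance spatial weight matrix.
import numpy as np

def morans_i(coords, values):
    coords = np.asarray(coords, float)
    x = np.asarray(values, float)
    n = len(x)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    w = np.where(dist > 0, 1.0 / dist, 0.0)        # inverse-distance weights, zero diagonal
    z = x - x.mean()
    num = n * np.sum(w * np.outer(z, z))
    den = w.sum() * np.sum(z ** 2)
    return num / den

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(100, 2))
vals = pts[:, 0] + rng.standard_normal(100)        # values trending with x: positive autocorrelation
print("Moran's I:", morans_i(pts, vals))
```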
Central Pb+Pb collisions at 158 A GeV/c studied by π⁻π⁻ interferometry
Aggarwal, M. M., et al.
2000-05-18
Two-particle correlations have been measured for identified π⁻ from central 158 A GeV Pb+Pb collisions and fitted radii of about 7 fm in all dimensions have been obtained. A multi-dimensional study of the radii as a function of k_T is presented, including a full correction for the resolution effects of the apparatus. The cross term R^2_out-long of the standard fit in the Longitudinally CoMoving System (LCMS) and the v_L parameter of the generalised Yano-Koonin fit are compatible with 0, suggesting that the source undergoes a boost-invariant expansion. The shapes of the correlation functions in Q_inv and Q_space = √(Q_x^2 + Q_y^2 + Q_z^2) have been analyzed in detail. They are not Gaussian but better represented by exponentials. As a consequence, fitting Gaussians to these correlation functions may produce different radii depending on the acceptance of the experimental setup used for the measurement.
Mixed-RKDG Finite Element Methods for the 2-D Hydrodynamic Model for Semiconductor Device Simulation
Chen, Zhangxin; Cockburn, Bernardo; Jerome, Joseph W.; ...
1995-01-01
In this paper we introduce a new method for numerically solving the equations of the hydrodynamic model for semiconductor devices in two space dimensions. The method combines a standard mixed finite element method, used to obtain directly an approximation to the electric field, with the so-called Runge-Kutta Discontinuous Galerkin (RKDG) method, originally devised for numerically solving multi-dimensional hyperbolic systems of conservation laws, which is applied here to the convective part of the equations. Numerical simulations showing the performance of the new method are displayed, and the results compared with those obtained by using Essentially Nonoscillatory (ENO) finite difference schemes. From the perspective of device modeling, these methods are robust, since they are capable of encompassing broad parameter ranges, including those for which shock formation is possible. The simulations presented here are for Gallium Arsenide at room temperature, but we have tested them much more generally with considerable success.
Influence of implantable hearing aids and neuroprostheses on music perception.
Rahne, Torsten; Böhme, Lars; Götze, Gerrit
2012-01-01
The identification and discrimination of timbre are essential features of music perception. One dominant parameter within the multidimensional timbre space is the spectral shape of complex sounds. As hearing loss interferes with the perception and enjoyment of music, we assess the individual timbre discrimination skills of individuals with severe to profound hearing loss using a cochlear implant (CI) and of normal-hearing individuals using a bone-anchored hearing aid (Baha). With a recently developed behavioral test relying on synthetic sounds forming a spectral continuum, the timbre difference was changed adaptively to measure the individual just noticeable difference (JND) in a forced-choice paradigm. To explore the differences in timbre perception abilities caused by the hearing mode, the sound stimuli were varied in their fundamental frequency, thus generating different spectra which are not completely covered by a CI or Baha system. The resulting JNDs demonstrate differences in timbre perception between normal-hearing individuals, Baha users, and CI users. Besides physiological reasons, technical limitations also appear to be among the main contributing factors.
Gravitational wave-Gauge field oscillations
NASA Astrophysics Data System (ADS)
Caldwell, R. R.; Devulder, C.; Maksimova, N. A.
2016-09-01
Gravitational waves propagating through a stationary gauge field transform into gauge field waves and back again. When multiple families of flavor-space locked gauge fields are present, the gravitational and gauge field waves exhibit novel dynamics. At high frequencies, the system behaves like coupled oscillators in which the gravitational wave is the central pacemaker. Due to energy conservation and exchange among the oscillators, the wave amplitudes lie on a multidimensional sphere, reminiscent of neutrino flavor oscillations. This phenomenon has implications for cosmological scenarios based on flavor-space locked gauge fields.
Parameter estimation for chaotic systems using improved bird swarm algorithm
NASA Astrophysics Data System (ADS)
Xu, Chuangbiao; Yang, Renhuan
2017-12-01
Parameter estimation of chaotic systems is an important problem in nonlinear science and has aroused increasing interest in many research fields; it can basically be reduced to a multidimensional optimization problem. In this paper, an improved boundary bird swarm algorithm (IBBSA) is used to estimate the parameters of chaotic systems. This algorithm combines the good global convergence and robustness of the bird swarm algorithm with the exploitation capability of an improved boundary learning strategy. Experiments are conducted on the Lorenz system and the coupling motor system. Numerical simulation results reveal the effectiveness and desirable performance of IBBSA for parameter estimation of chaotic systems.
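The underlying optimization problem can be sketched by scoring candidate Lorenz parameters against a reference trajectory; the random search below is only a placeholder for the improved bird swarm algorithm, and the integration settings and bounds are arbitrary.

```python
# Sketch of the parameter-estimation objective: mismatch between an "observed"
# Lorenz trajectory and one simulated with candidate parameters. Any global
# optimizer (swarm, evolutionary, ...) could minimize this objective.
import numpy as np

def lorenz_traj(params, x0=(1.0, 1.0, 1.0), dt=0.01, steps=400):
    sigma, rho, beta = params
    x = np.array(x0, float)
    out = np.empty((steps, 3))
    for i in range(steps):                      # simple Euler integration
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        out[i] = x
    return out

observed = lorenz_traj((10.0, 28.0, 8.0 / 3.0))  # synthetic "measurements"

def objective(params):
    return float(np.mean((lorenz_traj(params) - observed) ** 2))

rng = np.random.default_rng(0)
best, best_f = None, np.inf
for _ in range(300):                             # placeholder random search
    cand = rng.uniform([5, 20, 1], [15, 35, 4])
    f = objective(cand)
    if f < best_f:
        best, best_f = cand, f
print("best parameters found:", best, "objective:", best_f)
```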
NASA Technical Reports Server (NTRS)
Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu
1997-01-01
The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MMK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels as is shown in the second part of this paper.
A Generic multi-dimensional feature extraction method using multiobjective genetic programming.
Zhang, Yang; Rockett, Peter I
2009-01-01
In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyun Jung; McDonnell, Kevin T.; Zelenyuk, Alla
2014-03-01
Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our MDS plots also exhibit similar visual relationships as the method of parallel coordinates which is often used alongside to visualize the high-dimensional data in raw form. We then cast our metric into a bi-scale framework which distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
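Since the contribution is the distance metric fed into MDS, the embedding step itself can be sketched with classical (Torgerson) MDS; in the paper's setting the Euclidean distance matrix below would be replaced by their structural-similarity (or bi-scale) distances.

```python
# Classical (Torgerson) multi-dimensional scaling from a precomputed distance matrix.
import numpy as np

def classical_mds(dist, k=2):
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J               # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:k]          # top-k eigenvalues
    L = np.sqrt(np.clip(eigvals[idx], 0, None))
    return eigvecs[:, idx] * L

rng = np.random.default_rng(0)
data = rng.standard_normal((50, 10))             # 50 items in 10-D
D = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
embedding = classical_mds(D, k=2)                # 2-D layout for plotting
print(embedding.shape)
```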
Estimating a Noncompensatory IRT Model Using Metropolis within Gibbs Sampling
ERIC Educational Resources Information Center
Babcock, Ben
2011-01-01
Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…
ERIC Educational Resources Information Center
Klein, Harriet B.; McAllister Byun, Tara; Davidson, Lisa; Grigos, Maria I.
2013-01-01
Purpose: This study explored relationships among perceptual, ultrasound, and acoustic measurements of children's correct and misarticulated /r/ sounds. Longitudinal data documenting changes across these parameters were collected from 2 children who acquired /r/ over a period of intervention and were compared with data from children with typical…
A cross-sectional analysis of green space prevalence and mental wellbeing in England.
Houlden, Victoria; Weich, Scott; Jarvis, Stephen
2017-05-17
With urbanisation increasing, it is important to understand how to design changing environments to promote mental wellbeing. Evidence suggests that local-area proportions of green space may be associated with happiness and life satisfaction; however, the available evidence on such associations with more broadly defined mental wellbeing is still very scarce. This study aimed to establish whether the amount of neighbourhood green space was associated with mental wellbeing. Data were drawn from Understanding Society, a national survey of 30,900 individuals across 11,096 Census Lower-Layer Super Output Areas (LSOAs) in England, over the period 2009-2010. Measures included the multi-dimensional Warwick-Edinburgh Mental Well-Being Scale (SWEMWBS) and LSOA proportion of green space, which was derived from the General Land Use Database (GLUD), and were analysed using linear regression, while controlling for individual, household and area-level factors. Those living in areas with greater proportions of green space had significantly higher mental wellbeing scores in unadjusted analyses (an expected increase of 0.17 points (95% CI 0.11, 0.23) in the SWEMWBS score for a standard deviation increase of green space). However, after adjustment for confounding by respondent sociodemographic characteristics and urban/rural location, the association was attenuated to the null (regression coefficient B = -0.01, 95% CI -0.08, 0.05, p = 0.712). While the green space in an individual's local area has been shown through other research to be related to aspects of mental health such as happiness and life satisfaction, the association with multidimensional mental wellbeing is much less clear from our results. While we did not find a statistically significant association between the amount of green space in residents' local areas and mental wellbeing, further research is needed to understand whether other features of green space, such as accessibility, aesthetics or use, are important for mental wellbeing.
Wojtas-Niziurski, Wojciech; Meng, Yilin; Roux, Benoit; Bernèche, Simon
2013-01-01
The potential of mean force describing conformational changes of biomolecules is a central quantity that determines the function of biomolecular systems. Calculating an energy landscape of a process that depends on three or more reaction coordinates might require a lot of computational power, making some multidimensional calculations practically impossible. Here, we present an efficient automatized umbrella sampling strategy for calculating multidimensional potentials of mean force. The method progressively learns by itself, through a feedback mechanism, which regions of a multidimensional space are worth exploring and automatically generates a set of umbrella sampling windows that is adapted to the system. The self-learning adaptive umbrella sampling method is first explained with illustrative examples based on simplified reduced model systems, and then applied to two non-trivial situations: the conformational equilibrium of the pentapeptide Met-enkephalin in solution and ion permeation in the KcsA potassium channel. With this method, it is demonstrated that a significantly smaller number of umbrella windows needs to be employed to characterize the free energy landscape over the most relevant regions without any loss in accuracy.
Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds
Uher, Vojtěch; Gajdoš, Petr; Radecký, Michal; Snášel, Václav
2016-01-01
The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has a great potential in the spatial data analysis and pattern recognition. This paper formulates the problem as a search of a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE) applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on the space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimizations in the multidimensional point clouds.
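One concrete way to impose the linear ordering mentioned here is a Morton (Z-order) space-filling curve over quantized coordinates; the sketch below is an illustration of that general idea, not the specific curve or encoding used by the MDDE implementation.

```python
# Ordering 3-D point-cloud vertices along a Morton (Z-order) space-filling curve.
import numpy as np

def part1by2(v):
    """Spread the low 10 bits of v so that two zero bits separate each bit."""
    v &= 0x3FF
    v = (v | (v << 16)) & 0x030000FF
    v = (v | (v << 8)) & 0x0300F00F
    v = (v | (v << 4)) & 0x030C30C3
    v = (v | (v << 2)) & 0x09249249
    return v

def morton_key(ix, iy, iz):
    return part1by2(ix) | (part1by2(iy) << 1) | (part1by2(iz) << 2)

def zorder_sort(points, bits=10):
    pts = np.asarray(points, float)
    lo, hi = pts.min(0), pts.max(0)
    q = ((pts - lo) / np.where(hi > lo, hi - lo, 1.0) * (2 ** bits - 1)).astype(int)
    keys = np.array([morton_key(*p) for p in q])
    return np.argsort(keys)                      # index order along the Z curve

cloud = np.random.default_rng(0).uniform(size=(1000, 3))
order = zorder_sort(cloud)
print(cloud[order][:5])                          # first few vertices along the curve
```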
Villéger, Sébastien; Mason, Norman W H; Mouillot, David
2008-08-01
Functional diversity is increasingly identified as an important driver of ecosystem functioning. Various indices have been proposed to measure the functional diversity of a community, but there is still no consensus on which are most suitable. Indeed, none of the existing indices meets all the criteria required for general use. The main criteria are that they must be designed to deal with several traits, take into account abundances, and measure all the facets of functional diversity. Here we propose three indices to quantify each facet of functional diversity for a community with species distributed in a multidimensional functional space: functional richness (volume of the functional space occupied by the community), functional evenness (regularity of the distribution of abundance in this volume), and functional divergence (divergence in the distribution of abundance in this volume). Functional richness is estimated using the existing convex hull volume index. The new functional evenness index is based on the minimum spanning tree which links all the species in the multidimensional functional space. Then this new index quantifies the regularity with which species abundances are distributed along the spanning tree. Functional divergence is measured using a novel index which quantifies how species diverge in their distances (weighted by their abundance) from the center of gravity in the functional space. We show that none of the indices meets all the criteria required for a functional diversity index, but instead we show that the set of three complementary indices meets these criteria. Through simulations of artificial data sets, we demonstrate that functional divergence and functional evenness are independent of species richness and that the three functional diversity indices are independent of each other. Overall, our study suggests that decomposition of functional diversity into its three primary components provides a meaningful framework for its quantification and for the classification of existing functional diversity indices. This decomposition has the potential to shed light on the role of biodiversity on ecosystem functioning and on the influence of biotic and abiotic filters on the structure of species communities. Finally, we propose a general framework for applying these three functional diversity indices.
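A sketch of the first two components is given below using standard SciPy tools: convex hull volume for functional richness and a minimum-spanning-tree evenness in the spirit of the proposed index. The trait matrix and abundances are synthetic, and the evenness weighting is a simplified illustration rather than a faithful reimplementation of the published formula.

```python
# Functional richness (convex hull volume) and a simplified functional evenness
# (abundance-weighted minimum spanning tree) for a synthetic community.
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
traits = rng.standard_normal((12, 3))            # 12 species, 3 functional traits
abund = rng.uniform(0.5, 2.0, 12)
abund /= abund.sum()

fric = ConvexHull(traits).volume                 # functional richness

D = squareform(pdist(traits))
mst = minimum_spanning_tree(D).toarray()
i, j = np.nonzero(mst)                           # the S-1 branches of the spanning tree
ew = mst[i, j] / (abund[i] + abund[j])           # abundance-weighted branch lengths
pew = ew / ew.sum()
s = traits.shape[0]
feve = (np.minimum(pew, 1.0 / (s - 1)).sum() - 1.0 / (s - 1)) / (1.0 - 1.0 / (s - 1))

print("FRic:", fric, "FEve:", feve)
```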
The Development of Voiceless Sibilant Fricatives in Putonghua-Speaking Children
ERIC Educational Resources Information Center
Li, Fangfang; Munson, Benjamin
2016-01-01
Purpose The aims of the present study are (a) to quantify the developmental sequence of fricative mastery in Putonghua-speaking children and discuss the observed pattern in relation to existing theoretical positions, and (b) to describe the acquisition of the fine-articulatory/acoustic details of fricatives in the multidimensional acoustic space.…
Intensifying Innovation Adoption in Educational eHealth
ERIC Educational Resources Information Center
Rissanen, M. K.
2014-01-01
In demanding innovation areas such as eHealth, the primary emphasis is easily placed on the product and process quality aspects in the design phase. Customer quality may receive adequate attention when the target audience is well-defined. But if the multidimensional evaluative focus does not get enough space until the implementation phase, this…
Chlorofluoromethanes and the Stratosphere
NASA Technical Reports Server (NTRS)
Hudson, R. D. (Editor)
1977-01-01
This report presents the conclusions of a workshop held by the National Aeronautics and Space Administration to assess the current knowledge of the impact of chlorofluoromethane release in the troposphere on stratospheric ozone concentrations. The following topics are discussed: (1) laboratory measurements; (2) ozone measurements and trends; (3) minor species and aerosol measurements; (4) one-dimensional modeling; and (5) multidimensional modeling.
Diverse applications of advanced man-telerobot interfaces
NASA Technical Reports Server (NTRS)
Mcaffee, Douglas A.
1991-01-01
Advancements in man-machine interfaces and control technologies used in space telerobotics and teleoperators have potential application wherever human operators need to manipulate multi-dimensional spatial relationships. Bilateral six degree-of-freedom position and force cues exchanged between the user and a complex system can broaden and improve the effectiveness of several diverse man-machine interfaces.
Non-uniform sampling: post-Fourier era of NMR data collection and processing.
Kazimierczuk, Krzysztof; Orekhov, Vladislav
2015-11-01
The invention of multidimensional techniques in the 1970s revolutionized NMR, making it the general tool of structural analysis of molecules and materials. In the most straightforward approach, the signal sampling in the indirect dimensions of a multidimensional experiment is performed in the same manner as in the direct dimension, i.e. with a grid of equally spaced points. This results in lengthy experiments with a resolution often far from optimum. To circumvent this problem, numerous sparse-sampling techniques have been developed in the last three decades, including two traditionally distinct approaches: the radial sampling and non-uniform sampling. This mini review discusses the sparse signal sampling and reconstruction techniques from the point of view of an underdetermined linear algebra problem that arises when a full, equally spaced set of sampled points is replaced with sparse sampling. Additional assumptions that are introduced to solve the problem, as well as the shape of the undersampled Fourier transform operator (visualized as so-called point spread function), are shown to be the main differences between various sparse-sampling methods. Copyright © 2015 John Wiley & Sons, Ltd.
Chartier, Sylvain; Giguère, Gyslain; Langlois, Dominic
2009-01-01
In this paper, we present a new recurrent bidirectional model that encompasses correlational, competitive and topological model properties. The simultaneous use of many classes of network behaviors allows for the unsupervised learning/categorization of perceptual patterns (through input compression) and the concurrent encoding of proximities in a multidimensional space. All of these operations are achieved within a common learning operation, and using a single set of defining properties. It is shown that the model can learn categories by developing prototype representations strictly from exposure to specific exemplars. Moreover, because the model is recurrent, it can reconstruct perfect outputs from incomplete and noisy patterns. Empirical exploration of the model's properties and performance shows that its ability for adequate clustering stems from: (1) properly distributing connection weights, and (2) producing a weight space with a low dispersion level (or higher density). In addition, since the model uses a sparse representation (k-winners), the size of the topological neighborhood can be fixed, and no longer requires a decrease through time as was the case with classic self-organizing feature maps. Since the model's learning and transmission parameters are independent of learning trials, the model can develop stable fixed points in a constrained topological architecture, while being flexible enough to learn novel patterns.
NASA Technical Reports Server (NTRS)
Englander, Jacob; Englander, Arnold
2014-01-01
Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by Englander) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness, where efficiency is finding better solutions in less time, and robustness is efficiency that is undiminished (a) by the boundary conditions and internal constraints of the optimization problem being solved, and (b) by variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
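The contrast between uniform and long-tailed hops can be sketched with a generic monotonic basin hopping loop on a toy multimodal test function; this is only an illustration of the idea, not the trajectory-optimization code, and the objective and perturbation scales are arbitrary assumptions.

```python
# Hedged sketch of monotonic basin hopping (MBH): hop, run a local search,
# and accept only improvements. Compare uniform vs. Cauchy (long-tailed) hops.
import numpy as np
from scipy.optimize import minimize

def f(x):                                    # toy multimodal objective (Rastrigin)
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def mbh(perturb, dim=5, hops=200, seed=0):
    rng = np.random.default_rng(seed)
    x = minimize(f, rng.uniform(-5, 5, dim)).x     # local search from a random start
    best = f(x)
    for _ in range(hops):
        trial = minimize(f, x + perturb(rng, dim)).x   # hop, then local search
        if f(trial) < best:                            # monotonic (greedy) acceptance
            x, best = trial, f(trial)
    return best

uniform_hop = lambda rng, d: rng.uniform(-1, 1, d)
cauchy_hop = lambda rng, d: 0.3 * rng.standard_cauchy(d)   # long-tailed hops
print(mbh(uniform_hop), mbh(cauchy_hop))
```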
The Wang-Landau Sampling Algorithm
NASA Astrophysics Data System (ADS)
Landau, David P.
2003-03-01
Over the past several decades Monte Carlo simulations[1] have evolved into a powerful tool for the study of wide-ranging problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, usually in the canonical ensemble, and enormous improvements have been made in performance through the implementation of novel algorithms. Nonetheless, difficulties arise near phase transitions, either due to critical slowing down near 2nd order transitions or to metastability near 1st order transitions, thus limiting the applicability of the method. We shall describe a new and different Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is estimated, all thermodynamic properties can be calculated at all temperatures. This approach can be extended to multi-dimensional parameter spaces and has already found use in classical models of interacting particles including systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc., as well as for quantum models. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E64, 056101-1 (2001).
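A minimal Wang-Landau sketch for a small 2-D Ising lattice is shown below: a random walk in energy space accumulates the logarithm of the density of states and refines the modification factor whenever the visit histogram is roughly flat. Lattice size, flatness threshold, and stopping criterion are toy choices.

```python
# Toy Wang-Landau random walk in energy space for a 4x4 periodic Ising lattice.
import numpy as np

L = 4
rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    return int(-np.sum(s * np.roll(s, 1, 0)) - np.sum(s * np.roll(s, 1, 1)))

lng, hist = {}, {}           # log g(E) and visit histogram, keyed by energy
lnf = 1.0                    # ln of the modification factor f, reduced over time
E = energy(spins)
while lnf > 1e-3:
    for _ in range(10000):
        i, j = rng.integers(L, size=2)
        spins[i, j] *= -1                        # propose a single spin flip
        E_new = energy(spins)
        # accept with probability min(1, g(E)/g(E_new))
        if np.log(rng.random()) <= lng.get(E, 0.0) - lng.get(E_new, 0.0):
            E = E_new
        else:
            spins[i, j] *= -1                    # reject: undo the flip
        lng[E] = lng.get(E, 0.0) + lnf           # update density of states
        hist[E] = hist.get(E, 0) + 1             # and the visit histogram
    counts = np.array(list(hist.values()))
    if counts.min() > 0.8 * counts.mean():       # histogram roughly "flat"?
        hist = {}                                # reset it and
        lnf *= 0.5                               # refine: f -> sqrt(f)

print({e: round(g, 1) for e, g in sorted(lng.items())})
```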
Perceptually relevant parameters for virtual listening simulation of small room acoustics
Zahorik, Pavel
2009-01-01
Various physical aspects of room-acoustic simulation techniques have been extensively studied and refined, yet the perceptual attributes of the simulations have received relatively little attention. Here a method of evaluating the perceptual similarity between rooms is described and tested using 15 small-room simulations based on binaural room impulse responses (BRIRs) either measured from a real room or estimated using simple geometrical acoustic modeling techniques. Room size and surface absorption properties were varied, along with aspects of the virtual simulation including the use of individualized head-related transfer function (HRTF) measurements for spatial rendering. Although differences between BRIRs were evident in a variety of physical parameters, a multidimensional scaling analysis revealed that when at-the-ear signal levels were held constant, the rooms differed along just two perceptual dimensions: one related to reverberation time (T60) and one related to interaural coherence (IACC). Modeled rooms were found to differ from measured rooms in this perceptual space, but the differences were relatively small and should be easily correctable through adjustment of T60 and IACC in the model outputs. Results further suggest that spatial rendering using individualized HRTFs offers little benefit over nonindividualized HRTF rendering for room simulation applications where source direction is fixed. PMID:19640043
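The multidimensional scaling step referred to above can be sketched as follows with a placeholder dissimilarity matrix between rooms; in the actual study the dissimilarities come from listeners' similarity judgments.

```python
# Sketch of MDS on precomputed pairwise dissimilarities (placeholder data).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_rooms = 15
d = rng.uniform(0.0, 1.0, (n_rooms, n_rooms))
d = (d + d.T) / 2.0                          # symmetric pairwise dissimilarities
np.fill_diagonal(d, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)                # each room becomes a point in 2-D
print(coords.shape, round(mds.stress_, 3))   # stress indicates goodness of fit
```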
Estimating metallicities with isochrone fits to photometric data of open clusters
NASA Astrophysics Data System (ADS)
Monteiro, H.; Oliveira, A. F.; Dias, W. S.; Caetano, T. C.
2014-10-01
Metallicity is a critical parameter that affects the correct determination of stellar clusters' fundamental characteristics and has important implications for Galactic and stellar evolution research. Fewer than 10% of the 2174 currently catalogued open clusters have their metallicity determined in the literature. In this work we present a method for estimating the metallicity of open clusters via non-subjective isochrone fitting using the cross-entropy global optimization algorithm applied to UBV photometric data. The free parameters distance, reddening, age, and metallicity are simultaneously determined by the fitting method. The fitting procedure uses weights for the observational data based on the estimation of membership likelihood for each star, which considers the observational magnitude limit, the density profile of stars as a function of radius from the center of the cluster, and the density of stars in multi-dimensional magnitude space. We present [Fe/H] results for well-studied open clusters based on distinct UBV data sets. The [Fe/H] values obtained in the ten cases for which spectroscopic determinations were available in the literature agree with those determinations, indicating that our method provides a good alternative for estimating [Fe/H] through objective isochrone fitting. Our results show that the typical precision is about 0.1 dex.
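A generic cross-entropy optimization loop of the kind the abstract refers to is sketched below on a stand-in objective; in the real method the misfit would score isochrones against membership-weighted UBV photometry, and the parameter names are only indicative.

```python
# Generic cross-entropy (CE) method: sample candidates, keep the elite,
# refit the sampling distribution, repeat. Objective is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
true = np.array([2.0, 0.5, 8.9, 0.0])        # e.g. distance, E(B-V), log age, [Fe/H]

def misfit(p):                               # stand-in for an isochrone misfit
    return np.sum((p - true) ** 2, axis=1)

mu = np.array([1.0, 0.3, 9.3, -0.3])         # initial guesses
sigma = np.array([1.0, 0.3, 0.5, 0.4])
for _ in range(50):
    pop = rng.normal(mu, sigma, size=(200, 4))         # sample candidate solutions
    elite = pop[np.argsort(misfit(pop))[:20]]          # keep the best 10%
    mu, sigma = elite.mean(axis=0), elite.std(axis=0)  # refit sampling distribution
print(mu)
```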
Path integral learning of multidimensional movement trajectories
NASA Astrophysics Data System (ADS)
André, João; Santos, Cristina; Costa, Lino
2013-10-01
This paper explores the use of Path Integral Methods, particularly several variants of the recent Path Integral Policy Improvement (PI2) algorithm in multidimensional movement parametrized policy learning. We rely on Dynamic Movement Primitives (DMPs) to codify discrete and rhythmic trajectories, and apply the PI2-CMA and PIBB methods in the learning of optimal policy parameters, according to different cost functions that inherently encode movement objectives. Additionally we merge both of these variants and propose the PIBB-CMA algorithm, comparing all of them with the vanilla version of PI2. From the obtained results we conclude that PIBB-CMA surpasses all other methods in terms of convergence speed and iterative final cost, which leads to an increased interest in its application to more complex robotic problems.
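A hedged sketch of a PIBB-style update (reward-weighted averaging over black-box parameter perturbations) is given below, with a toy quadratic cost standing in for DMP rollouts; it is not the authors' implementation and the exploration and temperature constants are arbitrary.

```python
# PIBB-style black-box policy improvement on a toy cost function.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(10)                     # policy parameters (e.g. DMP weights)
sigma, h, K = 1.0, 10.0, 20

def cost(th):                            # placeholder for a rollout's cost
    return np.sum((th - 1.0) ** 2)

for it in range(100):
    eps = rng.normal(0.0, sigma, size=(K, theta.size))   # explore around theta
    J = np.array([cost(theta + e) for e in eps])
    w = np.exp(-h * (J - J.min()) / (J.max() - J.min() + 1e-12))
    w /= w.sum()                                         # softmax-like weights
    theta = theta + w @ eps                              # reward-weighted update
print(cost(theta))
```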
A support vector machine based test for incongruence between sets of trees in tree space
2012-01-01
Background The growing use of multi-locus data sets for phylogenetic reconstruction has increased the need to determine whether a set of gene trees significantly deviates from the phylogenetic patterns of other genes. Such unusual gene trees may have been influenced by other evolutionary processes such as selection, gene duplication, or horizontal gene transfer. Results Motivated by this problem we propose a nonparametric goodness-of-fit test for two empirical distributions of gene trees, and we developed the software GeneOut to estimate a p-value for the test. Our approach maps trees into a multi-dimensional vector space and then applies support vector machines (SVMs) to measure the separation between two sets of pre-defined trees. We use a permutation test to assess the significance of the SVM separation. To demonstrate the performance of GeneOut, we applied it to the comparison of gene trees simulated within different species trees across a range of species tree depths. Applied directly to sets of simulated gene trees with large sample sizes, GeneOut was able to detect very small differences between two sets of gene trees generated under different species trees. Our statistical test can also include tree reconstruction into its test framework through a variety of phylogenetic optimality criteria. When applied to DNA sequence data simulated from different sets of gene trees, results in the form of receiver operating characteristic (ROC) curves indicated that GeneOut performed well in the detection of differences between sets of trees with different distributions in a multi-dimensional space. Furthermore, it controlled false positive and false negative rates very well, indicating a high degree of accuracy. Conclusions The non-parametric nature of our statistical test provides fast and efficient analyses, and makes it an applicable test for any scenario where evolutionary or other factors can lead to trees with different multi-dimensional distributions. The software GeneOut is freely available under the GNU public license. PMID:22909268
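The separation-plus-permutation idea can be sketched as follows, assuming the trees have already been mapped to vectors; the training-accuracy statistic used here is a simple stand-in for GeneOut's SVM separation measure.

```python
# SVM separation between two sets of vectorized trees, assessed by permutation.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
set_a = rng.normal(0.0, 1.0, (40, 5))        # vectorized gene trees, set A (synthetic)
set_b = rng.normal(0.4, 1.0, (40, 5))        # vectorized gene trees, set B (synthetic)
X = np.vstack([set_a, set_b])
y = np.r_[np.zeros(40), np.ones(40)]

def separation(X, y):
    return SVC(kernel="linear").fit(X, y).score(X, y)    # training accuracy as statistic

obs = separation(X, y)
perm = np.array([separation(X, rng.permutation(y)) for _ in range(200)])
p_value = np.mean(perm >= obs)               # how often chance matches the observed split
print(obs, p_value)
```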
Rosset, Antoine; Spadola, Luca; Pysher, Lance; Ratib, Osman
2006-01-01
The display and interpretation of images obtained by combining three-dimensional data acquired with two different modalities (e.g., positron emission tomography and computed tomography) in the same subject require complex software tools that allow the user to adjust the image parameters. With the current fast imaging systems, it is possible to acquire dynamic images of the beating heart, which add a fourth dimension of visual information: the temporal dimension. Moreover, images acquired at different points during the transit of a contrast agent or during different functional phases add a fifth dimension: functional data. To facilitate real-time image navigation in the resultant large multidimensional image data sets, the authors developed a Digital Imaging and Communications in Medicine-compliant software program. The open-source software, called OsiriX, allows the user to navigate through multidimensional image series while adjusting the blending of images from different modalities, image contrast and intensity, and the rate of cine display of dynamic images. The software is available for free download at http://homepage.mac.com/rossetantoine/osirix. (c) RSNA, 2006.
NASA Astrophysics Data System (ADS)
Zhu, Wenbin; Chao, Ju-Hung; Chen, Chang-Jiang; Campbell, Adrian L.; Henry, Michael G.; Yin, Stuart Shizhuo; Hoffman, Robert C.
2017-10-01
In most beam steering applications such as 3D printing and in vivo imaging, one of the essential challenges has been high-resolution high-speed multi-dimensional optical beam scanning. Although the pre-injected space charge controlled potassium tantalate niobate (KTN) deflectors can achieve speeds in the nanosecond regime, they deflect in only one dimension. In order to develop a high-resolution high-speed multi-dimensional KTN deflector, we studied the deflection behavior of KTN deflectors in the case of coexisting pre-injected space charge and composition gradient. We find that such coexistence can enable new functionalities of KTN crystal based electro-optic deflectors. When the direction of the composition gradient is parallel to the direction of the external electric field, the zero-deflection position can be shifted, which can reduce the internal electric field induced beam distortion, and thus enhance the resolution. When the direction of the composition gradient is perpendicular to the direction of the external electric field, two-dimensional beam scanning can be achieved by harnessing only one single piece of KTN crystal, which can result in a compact, high-speed two-dimensional deflector. Both theoretical analyses and experiments are conducted, which are consistent with each other. These new functionalities can expedite the usage of KTN deflection in many applications such as high-speed 3D printing, high-speed, high-resolution imaging, and free space broadband optical communication.
Zhou, Xiaolu; Li, Dongying
2018-05-09
Advancements in location-aware technologies and in information and communication technology over the past decades have furthered our knowledge of the interaction between human activities and the built environment. An increasing number of studies have collected data regarding individual activities to better understand how the environment shapes human behavior. Despite this growing interest, some challenges exist in collecting and processing individuals' activity data, e.g., capturing people's precise environmental contexts and analyzing data at multiple spatial scales. In this study, we propose and implement an innovative system that integrates app-based smartphone step tracking with sequential tile scan techniques to collect and process activity data. We apply the OpenStreetMap tile system to aggregate positioning points at various scales. We also propose duration, step and probability surfaces to quantify the multi-dimensional attributes of activities. Results show that, by running the app in the background, smartphones can measure multi-dimensional attributes of human activities, including space, duration, step, and location uncertainty at various spatial scales. By coordinating the Global Positioning System (GPS) sensor with the accelerometer, the app saves battery power that would otherwise be drained quickly by the GPS sensor. Based on a test dataset, we were able to detect the recreational center and sports center as the spaces where the user was most active, among other places visited. The methods provide techniques to address key issues in analyzing human activity data. The system can support future studies on behavioral and health consequences related to individuals' environmental exposure.
A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including the use of multidimensional numerical models. Validation test hardware has also been used to provide a direct comparison of numerical results and validate the multi-dimensional numerical models used to predict convertor net heat input and efficiency. These validation tests were designed to simulate the temperature profile of an operating Stirling convertor and resulted in a measured net heat input of 244.4 W. The methodology was applied to the multi-dimensional numerical model which resulted in a net heat input of 240.3 W. The computational methodology resulted in a value of net heat input that was 1.7 percent less than that measured during laboratory testing. The resulting computational methodology and results are discussed.
NASA Astrophysics Data System (ADS)
White, Irene; Lorenzi, Francesca
2016-12-01
Creativity has been emerging as a key concept in educational policies since the mid-1990s, with many Western countries restructuring their education systems to embrace innovative approaches likely to stimulate creative and critical thinking. But despite current intentions of putting more emphasis on creativity in education policies worldwide, there is still a relative dearth of viable models which capture the complexity of creativity and the conditions for its successful infusion into formal school environments. The push for creativity is in direct conflict with the results-driven/competitive performance-oriented culture which continues to dominate formal education systems. The authors of this article argue that incorporating creativity into mainstream education is a complex task and is best tackled by taking a systematic and multifaceted approach. They present a multidimensional model designed to help educators in tackling the challenges of the promotion of creativity. Their model encompasses three distinct yet interrelated dimensions of a creative space - physical, social-emotional and critical. The authors use the metaphor of space to refer to the interplay of the three identified dimensions. Drawing on confluence approaches to the theorisation of creativity, this paper exemplifies the development of a model against the background of a growing trend towards systems theories. The aim of the model is to be helpful in systematising creativity by offering parameters - derived from the evaluation of an example offered by a non-formal educational environment - for the development of creative environments within mainstream secondary schools.
A NEW METHOD FOR DERIVING THE STELLAR BIRTH FUNCTION OF RESOLVED STELLAR POPULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gennaro, M.; Brown, T. M.; Gordon, K. D.
We present a new method for deriving the stellar birth function (SBF) of resolved stellar populations. The SBF (stars born per unit mass, time, and metallicity) is the combination of the initial mass function (IMF), the star formation history (SFH), and the metallicity distribution function (MDF). The framework of our analysis is that of Poisson Point Processes (PPPs), a class of statistical models suitable when dealing with points (stars) in a multidimensional space (the measurement space of multiple photometric bands). The theory of PPPs easily accommodates the modeling of measurement errors as well as that of incompleteness. Our method avoids binning stars in the color–magnitude diagram and uses the whole likelihood function for each data point; combining the individual likelihoods allows the computation of the posterior probability for the population's SBF. Within the proposed framework it is possible to include nuisance parameters, such as distance and extinction, by specifying their prior distributions and marginalizing over them. The aim of this paper is to assess the validity of this new approach under a range of assumptions, using only simulated data. Forthcoming work will show applications to real data. Although it has a broad scope of possible applications, we have developed this method to study multi-band Hubble Space Telescope observations of the Milky Way Bulge. Therefore we will focus on simulations with characteristics similar to those of the Galactic Bulge.
Molecular quantum control landscapes in von Neumann time-frequency phase space
NASA Astrophysics Data System (ADS)
Ruetzel, Stefan; Stolzenberger, Christoph; Fechner, Susanne; Dimler, Frank; Brixner, Tobias; Tannor, David J.
2010-10-01
Recently we introduced the von Neumann representation as a joint time-frequency description for femtosecond laser pulses and suggested its use as a basis for pulse shaping experiments. Here we use the von Neumann basis to represent multidimensional molecular control landscapes, providing insight into the molecular dynamics. We present three kinds of time-frequency phase space scanning procedures based on the von Neumann formalism: variation of intensity, time-frequency phase space position, and/or the relative phase of single subpulses. The shaped pulses produced are characterized via Fourier-transform spectral interferometry. Quantum control is demonstrated on the laser dye IR140 elucidating a time-frequency pump-dump mechanism.
ERIC Educational Resources Information Center
Bergner, Yoav; Droschler, Stefan; Kortemeyer, Gerd; Rayyan, Saif; Seaton, Daniel; Pritchard, David E.
2012-01-01
We apply collaborative filtering (CF) to dichotomously scored student response data (right, wrong, or no interaction), finding optimal parameters for each student and item based on cross-validated prediction accuracy. The approach is naturally suited to comparing different models, both unidimensional and multidimensional in ability, including a…
An Investigation of Sample Size Splitting on ATFIND and DIMTEST
ERIC Educational Resources Information Center
Socha, Alan; DeMars, Christine E.
2013-01-01
Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…
Calibration of Response Data Using MIRT Models with Simple and Mixed Structures
ERIC Educational Resources Information Center
Zhang, Jinming
2012-01-01
It is common to assume during a statistical analysis of a multiscale assessment that the assessment is composed of several unidimensional subtests or that it has simple structure. Under this assumption, the unidimensional and multidimensional approaches can be used to estimate item parameters. These two approaches are equivalent in parameter…
Multi-Dimensional Calibration of Impact Dynamic Models
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.
2011-01-01
NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time based metrics and orthogonality multi-dimensional metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.
NASA Astrophysics Data System (ADS)
Ojo, A. O.; Xie, Jun; Olorunfemi, M. O.
2018-01-01
To reduce ambiguity related to nonlinearities in the resistivity model-data relationships, an efficient direct-search scheme employing the Neighbourhood Algorithm (NA) was implemented to solve the 1-D resistivity problem. In addition to finding a range of best-fit models which are more likely to be global minima, this method investigates the entire multi-dimensional model space and provides additional information about the posterior model covariance matrix, marginal probability density function and an ensemble of acceptable models. This provides new insights into how well the model parameters are constrained and makes it possible to assess trade-offs between them, thus avoiding some common interpretation pitfalls. The efficacy of the newly developed program is tested by inverting both synthetic (noisy and noise-free) data and field data from other authors employing different inversion methods so as to provide a good basis for performance comparison. In all cases, the inverted model parameters were in good agreement with the true and recovered model parameters from other methods and remarkably correlate with the available borehole litho-log and known geology for the field dataset. The NA method has proven to be useful when a good starting model is not available, and the reduced number of unknowns in the 1-D resistivity inverse problem makes it an attractive alternative to the linearized methods. Hence, it is concluded that the newly developed program offers an excellent complementary tool for the global inversion of the layered resistivity structure.
Sensitivity of the nuclear deformability and fission barriers to the equation of state
NASA Astrophysics Data System (ADS)
Seif, W. M.; Anwer, Hisham
2018-07-01
The model-dependent analysis of the fission data impacts the extracted fission-related quantities, which are not directly observable, such as the super- and hyperdeformed isomeric states and their energies. We investigated the model dependence of the deformability of a nucleus and its fission barriers on the nuclear equation of state. Within the microscopic-macroscopic model based on a large number of Skyrme nucleon-nucleon interactions, the total energy surfaces and the double-humped fission barrier of 230Th are calculated in a multidimensional deformation space. In addition to the ground-state (GS) and the superdeformed (SD) minima, all the investigated forces yielded a hyperdeformed (HD) minimum. The contour map of the shell-plus-pairing energy clearly displayed the three minima. We found that the GS binding energy and the deformation energy of the different deformation modes along the fission path increase with the incompressibility coefficient K0, while the fission barrier heights and the excitation energies of the SD and HD modes decrease with it. Conversely, the surface-energy coefficient asurf, the symmetry energy, and its density-slope parameter decrease the GS energy and the deformation energies, but increase the fission barrier heights and the excitation energies. The obtained deformation parameters of the different deformation modes are almost independent of K0 and of the symmetry energy and its density slope. The principal deformation parameters of the SD and HD isomeric states tend to decrease with asurf.
Exploration of the Medicinal Peptide Space.
Gevaert, Bert; Stalmans, Sofie; Wynendaele, Evelien; Taevernier, Lien; Bracke, Nathalie; D'Hondt, Matthias; De Spiegeleer, Bart
2016-01-01
The chemical properties of peptide medicines, known collectively as the 'medicinal peptide space', constitute a multi-dimensional subset of the global peptide space, where each dimension represents a chemical descriptor. These descriptors can be linked to biofunctional, medicinal properties to varying degrees. Knowledge of this space can increase the efficiency of the peptide-drug discovery and development process, as well as advance our understanding and classification of peptide medicines. For 245 peptide drugs, already available on the market or in clinical development, multivariate data exploration was performed using peptide-relevant physicochemical descriptors, their specific peptide-drug target and their clinical use. Our retrospective analysis indicates that clusters in the medicinal peptide space are located in a relatively narrow range of the physicochemical space: dense and empty regions were found, which can be explored for the discovery of novel peptide drugs.
NASA Astrophysics Data System (ADS)
Santarius, John; Navarro, Marcos; Michalak, Matthew; Fancher, Aaron; Kulcinski, Gerald; Bonomo, Richard
2016-10-01
A newly initiated research project will be described that investigates methods for detecting shielded special nuclear materials by combining multi-dimensional neutron sources, forward/adjoint calculations modeling neutron and gamma transport, and sparse data analysis of detector signals. The key tasks for this project are: (1) developing a radiation transport capability for use in optimizing adaptive-geometry, inertial-electrostatic confinement (IEC) neutron source/detector configurations for neutron pulses distributed in space and/or phased in time; (2) creating distributed-geometry, gas-target, IEC fusion neutron sources; (3) applying sparse data and noise reduction algorithms, such as principal component analysis (PCA) and wavelet transform analysis, to enhance detection fidelity; and (4) educating graduate and undergraduate students. Funded by DHS DNDO Project 2015-DN-077-ARI095.
Amyloid PET in clinical practice: Its place in the multidimensional space of Alzheimer's disease☆
Vandenberghe, Rik; Adamczuk, Katarzyna; Dupont, Patrick; Laere, Koen Van; Chételat, Gaël
2013-01-01
Amyloid imaging is currently introduced to the market for clinical use. We will review the evidence demonstrating that the different amyloid PET ligands that are currently available are valid biomarkers for Alzheimer-related β amyloidosis. Based on recent findings from cross-sectional and longitudinal imaging studies using different modalities, we will incorporate amyloid imaging into a multidimensional model of Alzheimer's disease. Aside from the critical role in improving clinical trial design for amyloid-lowering drugs, we will also propose a tentative algorithm for when it may be useful in a memory clinic environment. Gaps in our evidence-based knowledge of the added value of amyloid imaging in a clinical context will be identified and will need to be addressed by dedicated studies of clinical utility. PMID:24179802
Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Alder, J.; van Griensven, A.; Meixner, T.
2003-12-01
Individuals applying hydrologic models need quick, easy-to-use visualization tools that permit them to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web and the increasing power of modern computers to provide an online toolbox for quick and easy model result visualization. This visualization interface allows for the interpretation and analysis of Monte-Carlo and batch model simulation results. Oftentimes a given project will generate several thousand or even hundreds of thousands of simulations. This large number of simulations creates a challenge for post-simulation analysis. IHM's goal is to try to solve this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample statistics table (e.g. sum of squares error, sum of absolute differences etc.), a top ten simulations table and graphs, graphs of an individual simulation using time step data, objective-based dotty plots, threshold-based parameter cumulative distribution function graphs (as used in the regional sensitivity analysis of Spear and Hornberger) and 2D error surface graphs of the parameter space. IHM is ideal for the simplest bucket model to the largest set of Monte-Carlo model simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility in the sense that they can be anywhere in the world using any operating system. IHM can be a time- and money-saving alternative to producing graphs by hand, conducting analyses that may not be informative, or purchasing and using expensive proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.
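Two of the Monte-Carlo diagnostics listed above, an objective-based dotty plot and threshold-based parameter CDFs in the spirit of Spear-Hornberger regional sensitivity analysis, can be sketched offline as follows with synthetic simulation output.

```python
# Dotty plot and behavioural/non-behavioural parameter CDFs on synthetic data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 5000
params = rng.uniform(0, 1, (n, 2))                       # two sampled model parameters
sse = (params[:, 0] - 0.3) ** 2 + 0.1 * rng.random(n)    # stand-in objective values

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(params[:, 0], sse, ".", ms=2)                   # dotty plot: objective vs parameter
ax1.set(xlabel="parameter 1", ylabel="SSE")

behavioural = sse < np.percentile(sse, 10)               # best 10% of simulations
for mask, label in [(behavioural, "behavioural"), (~behavioural, "non-behavioural")]:
    x = np.sort(params[mask, 0])
    ax2.plot(x, np.linspace(0, 1, x.size), label=label)  # empirical CDFs
ax2.set(xlabel="parameter 1", ylabel="cumulative frequency")
ax2.legend()
plt.tight_layout()
plt.show()
```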
Embedding of multidimensional time-dependent observations.
Barnard, J P; Aldrich, C; Gerber, M
2001-10-01
A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
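A minimal sketch of the two-step idea, delay embedding of a multivariate observation followed by an ICA rotation into linearly independent phase variables, is shown below; the signal, embedding dimension, and lag are placeholders.

```python
# Delay (Takens) embedding of a 2-channel signal, then ICA on the embedding.
import numpy as np
from sklearn.decomposition import FastICA

t = np.arange(0, 60, 0.01)
obs = np.c_[np.sin(t), np.sin(2.1 * t + 0.5)]        # stand-in 2-channel observation

def delay_embed(x, dim=3, lag=15):
    rows = len(x) - (dim - 1) * lag
    return np.hstack([x[i * lag : i * lag + rows] for i in range(dim)])

emb = delay_embed(obs)                               # shape (rows, channels * dim)
phase_vars = FastICA(n_components=4, random_state=0).fit_transform(emb)
print(emb.shape, phase_vars.shape)                   # embedding vs. independent phase variables
```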
ERIC Educational Resources Information Center
Samejima, Fumiko
This paper is the final report of a multi-year project sponsored by the Office of Naval Research (ONR) from 1987 through 1990. The main objectives of the research summarized were to: investigate the non-parametric approach to the estimation of the operating characteristics of discrete item responses; revise and strengthen the package computer…
The National Mapping of Teacher Professional Learning Project: A Multi-Dimensional Space?
ERIC Educational Resources Information Center
Doecke, Brenton; Parr, Graham
2011-01-01
This essay focuses on the "National Mapping of Teacher Professional Learning" (2008), a report that we co-authored along with a number of other researchers on the basis of extensive surveys and interviews relating to the policies and practices of teacher professional learning in Australia. The report is an update of an earlier survey…
Systematization of actinides using cluster analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kopyrin, A.A.; Terent`eva, T.N.; Khramov, N.N.
1994-11-01
A representation of the actinides in multidimensional property space is proposed for systematization of these elements using cluster analysis. Literature data for their atomic properties are used. Owing to the wide variation of published ionization potentials, medians are used to estimate them. Vertical dendrograms are used for classification on the basis of distances between the actinides in atomic-property space. The properties of actinium and lawrencium are furthest removed from the main group. Thorium and mendelevium exhibit individualized properties. A cluster based on the einsteinium-fermium pair is joined by californium.
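The clustering workflow described can be sketched as follows: standardize the atomic properties, compute inter-element distances, and draw a dendrogram. The property values below are random placeholders, not the literature data used in the paper.

```python
# Hierarchical clustering of elements in a standardized property space.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.stats import zscore

elements = ["Ac", "Th", "Pa", "U", "Np", "Pu", "Am", "Cm"]
props = np.abs(np.random.default_rng(0).normal(size=(len(elements), 4)))  # placeholder properties

Z = linkage(zscore(props, axis=0), method="ward")   # distances in standardized property space
dendrogram(Z, labels=elements, orientation="top")
plt.ylabel("distance")
plt.show()
```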
On the enhanced sampling over energy barriers in molecular dynamics simulations.
Gao, Yi Qin; Yang, Lijiang
2006-09-21
We present here calculations of free energies of multidimensional systems using an efficient sampling method. The method uses a transformed potential energy surface, which allows an efficient sampling of both low and high energy spaces and accelerates transitions over barriers. It allows efficient sampling of the configuration space over and only over the desired energy range(s). It does not require predetermined or selected reaction coordinate(s). We apply this method to study the dynamics of slow barrier crossing processes in a disaccharide and a dipeptide system.
The COSMIC-DANCE project: Unravelling the origin of the mass function
NASA Astrophysics Data System (ADS)
Bouy, H.; Bertin, E.; Sarro, L. M.; Barrado, D.; Berihuete, A.; Olivares, J.; Moraux, E.; Bouvier, J.; Tamura, M.; Cuillandre, J.-C.; Beletsky, Y.; Wright, N.; Huelamo, N.; Allen, L.; Solano, E.; Brandner, B.
2017-03-01
The COSMIC-DANCE project is an observational program aiming at understanding the origin and evolution of ultracool objects by measuring the mass function and internal dynamics of young nearby associations down to the fragmentation limit. The least massive members of young nearby associations are identified using modern statistical methods in a multi-dimensional space of optical and infrared luminosities, colors, and proper motions. The photometry and astrometry are obtained by combining ground-based and, in some cases, space-based archival observations with new observations, covering between one and two decades.
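A hedged illustration of probabilistic membership selection in such a multi-dimensional space is given below, using a two-component Gaussian mixture (cluster plus field) on synthetic colors and proper motions; this is not the project's actual pipeline.

```python
# Toy membership selection: fit a 2-component Gaussian mixture and keep stars
# with a high posterior probability of belonging to the tighter component.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
field = rng.normal(0.0, 1.0, (2000, 4))                       # field stars
members = rng.normal([1.5, -1.0, 0.8, 0.3], 0.2, (200, 4))    # compact co-moving group
X = np.vstack([field, members])            # columns: e.g. two colors + two proper motions

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
tight = np.argmin([np.trace(c) for c in gmm.covariances_])    # the tighter component
prob = gmm.predict_proba(X)[:, tight]
print((prob > 0.9).sum(), "high-probability member candidates")
```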
Multidimensional assessment of vocal changes in benign vocal fold lesions after voice therapy.
Schindler, Antonio; Mozzanica, Francesco; Maruzzi, Patrizia; Atac, Murat; De Cristofaro, Valeria; Ottaviani, Francesco
2013-06-01
To evaluate, through a multidimensional protocol, voice changes after voice therapy in patients with benign vocal fold lesions. Sixty-five consecutive patients affected by benign vocal fold lesions were enrolled. On the basis of videolaryngostroboscopy the patients were divided into 3 groups: 23 patients with Reinke's oedema, 22 patients with vocal fold cysts and 20 patients with gelatinous polyps. Each subject received 10 voice therapy sessions and was evaluated, before and after voice therapy, through a multidimensional protocol including videolaryngostroboscopy, perception, acoustics, aerodynamics and self-rating by the patient. Data were compared using the Wilcoxon signed-rank test. The Kruskal-Wallis test was used to analyse the mean variation difference between the three groups of patients. The Mann-Whitney test was used for post hoc analysis. In only 11 cases did videolaryngostroboscopy reveal an improvement of the initial pathology. However, a significant improvement was observed in perceptual, acoustic and self-assessment ratings in the 3 groups of patients. In particular, the G, R and A parameters of the GIRBAS scale, the noise-to-harmonic ratio, and the jitter and shimmer scores improved after rehabilitation. A significant improvement of all the Voice Handicap Index parameters after rehabilitation treatment was found. No significant difference among the three groups of patients was visible, except for self-assessment ratings. Voice therapy may provide a significant improvement in perceptual, acoustic and self-assessed voice quality in patients with benign glottal lesions. Utilization of voice therapy may allow some patients to avoid surgical intervention. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
"Science SQL" as a Building Block for Flexible, Standards-based Data Infrastructures
NASA Astrophysics Data System (ADS)
Baumann, Peter
2016-04-01
We have learnt to live with the pain of separating data and metadata into non-interoperable silos. For metadata, we enjoy the flexibility of databases, be they relational, graph, or some other NoSQL. By contrast, users still "drown in files", an unstructured, low-level archiving paradigm. It is time to bridge this chasm, which once was technologically induced but today can be overcome. One building block towards a common re-integrated information space is to support massive multi-dimensional spatio-temporal arrays. These "datacubes" appear as sensor, image, simulation, and statistics data in all science and engineering domains, and beyond. For example, 2-D satellite imagery, 3-D x/y/t image time series and x/y/z geophysical voxel data, and 4-D x/y/z/t climate data contribute to today's data deluge in the Earth sciences. Virtual observatories in the space sciences routinely generate Petabytes of such data. Life sciences deal with microarray data, confocal microscopy, and human brain data, which all fall into the same category. The ISO SQL/MDA (Multi-Dimensional Arrays) candidate standard extends SQL with modelling and query support for n-D arrays ("datacubes") in a flexible, domain-neutral way. This heralds a new generation of services with new quality parameters, such as flexibility, ease of access, embedding into well-known user tools, and scalability mechanisms that remain completely transparent to users. Technology like the EU rasdaman ("raster data manager") Array Database system can support all of the above examples simultaneously with one technology. This is practically proven: as of today, rasdaman is in operational use on hundreds of Terabytes of satellite image time series datacubes, with transparent query distribution across more than 1,000 nodes. Therefore, Array Databases offering SQL/MDA constitute a natural common building block for next-generation data infrastructures. As initiator and editor of the standard, we present principles, implementation facets, and application examples as a basis for further discussion. Further, we highlight recent implementation progress in parallelization, data distribution, and query optimization, showing their effects on real-life use cases.
Typlt, Marei; Englitz, Bernhard; Sonntag, Mandy; Dehmel, Susanne; Kopp-Scheinpflug, Cornelia; Ruebsamen, Rudolf
2012-01-01
Multiple parallel auditory pathways ascend from the cochlear nucleus. It is generally accepted that the origin of these pathways are distinct groups of neurons differing in their anatomical and physiological properties. In extracellular in vivo recordings these neurons are typically classified on the basis of their peri-stimulus time histogram. In the present study we reconsider the question of classification of neurons in the anteroventral cochlear nucleus (AVCN) by taking a wider range of response properties into account. The study aims at a better understanding of the AVCN's functional organization and its significance as the source of different ascending auditory pathways. The analyses were based on 223 neurons recorded in the AVCN of the Mongolian gerbil. The range of analysed parameters encompassed spontaneous activity, frequency coding, sound level coding, as well as temporal coding. In order to categorize the unit sample without any presumptions as to the relevance of certain response parameters, hierarchical cluster analysis and additional principal component analysis were employed which both allow a classification on the basis of a multitude of parameters simultaneously. Even with the presently considered wider range of parameters, high number of neurons and more advanced analytical methods, no clear boundaries emerged which would separate the neurons based on their physiology. At the current resolution of the analysis, we therefore conclude that the AVCN units more likely constitute a multi-dimensional continuum with different physiological characteristics manifested at different poles. However, more complex stimuli could be useful to uncover physiological differences in future studies. PMID:22253838
NASA Astrophysics Data System (ADS)
Chai, Qing-Zhen; Zhao, Wei-Juan; Liu, Min-Liang; Wang, Hua-Lei
2018-05-01
Static fission barriers for 95 even-even transuranium nuclei with charge number Z = 94–118 have been systematically investigated by means of pairing self-consistent Woods-Saxon-Strutinsky calculations using the potential energy surface approach in multidimensional (β 2, γ, β 4) deformation space. Taking the heavier 252Cf nucleus (with the available fission barrier from experiment) as an example, the formation of the fission barrier and the influence of macroscopic, shell and pairing correction energies on it are analyzed. The results of the present calculated β 2 values and barrier heights are compared with previous calculations and available experiments. The role of triaxiality in the region of the first saddle is discussed. It is found that the second fission barrier is also considerably affected by the triaxial deformation degree of freedom in some nuclei (e.g., the Z=112–118 isotopes). Based on the potential energy curves, general trends of the evolution of the fission barrier heights and widths as a function of the nucleon numbers are investigated. In addition, the effects of Woods-Saxon potential parameter modifications (e.g., the strength of the spin-orbit coupling and the nuclear surface diffuseness) on the fission barrier are briefly discussed. Supported by National Natural Science Foundation of China (11675148, 11505157), the Project of Youth Backbone Teachers of Colleges and Universities of Henan Province (2017GGJS008), the Foundation and Advanced Technology Research Program of Henan Province (162300410222), the Outstanding Young Talent Research Fund of Zhengzhou University (1521317002) and the Physics Research and Development Program of Zhengzhou University (32410017)
Rajamani, Deepa; Bhasin, Manoj K
2016-05-03
Pancreatic cancer is an aggressive cancer with dismal prognosis, urgently necessitating better biomarkers to improve therapeutic options and early diagnosis. Traditional approaches of biomarker detection that consider only one aspect of the biological continuum like gene expression alone are limited in their scope and lack robustness in identifying the key regulators of the disease. We have adopted a multidimensional approach involving the cross-talk between the omics spaces to identify key regulators of disease progression. Multidimensional domain-specific disease signatures were obtained using rank-based meta-analysis of individual omics profiles (mRNA, miRNA, DNA methylation) related to pancreatic ductal adenocarcinoma (PDAC). These domain-specific PDAC signatures were integrated to identify genes that were affected across multiple dimensions of omics space in PDAC (genes under multiple regulatory controls, GMCs). To further pin down the regulators of PDAC pathophysiology, a systems-level network was generated from knowledge-based interaction information applied to the above identified GMCs. Key regulators were identified from the GMC network based on network statistics and their functional importance was validated using gene set enrichment analysis and survival analysis. Rank-based meta-analysis identified 5391 genes, 109 miRNAs and 2081 methylation-sites significantly differentially expressed in PDAC (false discovery rate ≤ 0.05). Bimodal integration of meta-analysis signatures revealed 1150 and 715 genes regulated by miRNAs and methylation, respectively. Further analysis identified 189 altered genes that are commonly regulated by miRNA and methylation, hence considered GMCs. Systems-level analysis of the scale-free GMCs network identified eight potential key regulator hubs, namely E2F3, HMGA2, RASA1, IRS1, NUAK1, ACTN1, SKI and DLL1, associated with important pathways driving cancer progression. Survival analysis on individual key regulators revealed that higher expression of IRS1 and DLL1 and lower expression of HMGA2, ACTN1 and SKI were associated with better survival probabilities. It is evident from the results that our hierarchical systems-level multidimensional analysis approach has been successful in isolating the converging regulatory modules and associated key regulatory molecules that are potential biomarkers for pancreatic cancer progression.
Manycore Performance-Portability: Kokkos Multidimensional Array Library
Edwards, H. Carter; Sunderland, Daniel; Porter, Vicki; ...
2012-01-01
Large, complex scientific and engineering application codes have a significant investment in computational kernels to implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implement computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices each with its own memory space, (2) data parallel kernels and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. Optimal data access patterns can be different for different manycore devices – potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].
Application of musical timbre discrimination features to active sonar classification
NASA Astrophysics Data System (ADS)
Young, Victor W.; Hines, Paul C.; Pecknold, Sean
2005-04-01
In musical acoustics, significant effort has been devoted to uncovering the physical basis of timbre perception. Most investigations into timbre rely on multidimensional scaling (MDS), in which different musical sounds are arranged as points in multidimensional space. The Euclidean distance between points corresponds to the perceptual distance between sounds and the multidimensional axes are linked to measurable properties of the sounds. MDS has identified numerous temporal and spectral features believed to be important to timbre perception. There is reason to believe that some of these features may have wider application in the disparate field of underwater acoustics, since anecdotal evidence suggests active sonar returns from metallic objects sound different than natural clutter returns when auralized by human operators. This is particularly encouraging since attempts to develop robust automatic classifiers capable of target-clutter discrimination over a wide range of operational conditions have met with limited success. Spectral features relevant to target-clutter discrimination are believed to include click-pitch and envelope irregularity; relevant temporal features are believed to include duration, sub-band attack/decay time, and time separation pitch. Preliminary results from an investigation into the role of these timbre features in target-clutter discrimination will be presented. [Work supported by NSERC and GDC.]
A Person Fit Test for IRT Models for Polytomous Items
ERIC Educational Resources Information Center
Glas, C. A. W.; Dagohoy, Anna Villa T.
2007-01-01
A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability parameters. It is shown that the Lagrange multiplier…
Multi-dimensional free-electron laser simulation codes: a comparison study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biedron, S. G.; Chae, Y. C.; Dejus, R. J.
A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.
Baglai, Anna; Gargano, Andrea F G; Jordens, Jan; Mengerink, Ynze; Honing, Maarten; van der Wal, Sjoerd; Schoenmakers, Peter J
2017-12-29
Recent advancements in separation science have resulted in the commercialization of multidimensional separation systems that provide higher peak capacities and, hence, enable a more-detailed characterization of complex mixtures. In particular, two powerful analytical tools are increasingly used by analytical scientists, namely online comprehensive two-dimensional liquid chromatography (LC×LC, having a second-dimension separation in the liquid phase) and liquid chromatography-ion mobility-spectrometry (LC-IMS, second dimension separation in the gas phase). The goal of the current study was a general assessment of the liquid-chromatography-trapped-ion-mobility-mass spectrometry (LC-TIMS-MS) and comprehensive two-dimensional liquid chromatography-mass spectrometry (LC×LC-MS) platforms for untargeted lipid mapping in human plasma. For the first time trapped-ion-mobility spectrometry (TIMS) was employed for the separation of the major lipid classes and ion-mobility-derived collision-cross-section values were determined for a number of lipid standards. The general effects of a number of influencing parameters have been inspected and possible directions for improvements are discussed. We aimed to provide a general indication and practical guidelines for the analyst to choose an efficient multidimensional separation platform according to the particular requirements of the application. Analysis time, orthogonality, peak capacity, and an indicative measure for the resolving power are discussed as main characteristics for multidimensional separation systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Brooks, Tessa L Durham; Miller, Nathan D; Spalding, Edgar P
2010-01-01
Plant development is genetically determined but it is also plastic, a fundamental duality that can be investigated provided a large number of measurements can be made in various conditions. Plasticity of gravitropism in wild-type Arabidopsis (Arabidopsis thaliana) seedling roots was investigated using automated image acquisition and analysis. A bank of computer-controlled charge-coupled device cameras acquired images with high spatiotemporal resolution. Custom image analysis algorithms extracted time course measurements of tip angle and growth rate. Twenty-two discrete conditions defined by seedling age (2, 3, or 4 d), seed size (extra small, small, medium, or large), and growth medium composition (simple or rich) formed the condition space sampled with 1,216 trials. Computational analyses including dimension reduction by principal components analysis, classification by k-means clustering, and differentiation by wavelet convolution showed distinct response patterns within the condition space, i.e. response plasticity. For example, 2-d-old roots (regardless of seed size) displayed a response time course similar to those of roots from large seeds (regardless of age). Enriching the growth medium with nutrients suppressed response plasticity along the seed size and age axes, possibly by ameliorating a mineral deficiency, although analysis of seeds did not identify any elements with low levels on a per weight basis. Characterizing relationships between growth rate and tip swing rate as a function of condition cast gravitropism in a multidimensional response space that provides new mechanistic insights and conceptually sets the stage for mutational analysis of plasticity in general and root gravitropism in particular.
Visualizing Structure and Dynamics of Disaccharide Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, J. F.; Beckham, G. T.; Himmel, M. E.
2012-01-01
We examine the effect of several solvent models on the conformational properties and dynamics of disaccharides such as cellobiose and lactose. Significant variation in the timescale for large-scale conformational transformations is observed. Molecular dynamics simulation provides enough detail to enable insight through visualization of multidimensional data sets. We present a new way to visualize conformational space for disaccharides with Ramachandran plots.
ERIC Educational Resources Information Center
Sivan, Atara; Cohen, Arie; Chan, Dennis W.; Kwan, Yee Wan
2017-01-01
The Questionnaire on Teacher Interaction (QTI) is a teacher-student relationship measure whose underlying two-dimensional structure is represented in a circumplex model with eight sectors. Using Smallest Space Analysis (SSA), this study examined the circumplex structure of the Chinese version of the QTI among a convenience sample of 731…
ERIC Educational Resources Information Center
Carbon, Claus-Christian
2010-01-01
Participants with personal and without personal experiences with the Earth as a sphere estimated large-scale distances between six cities located on different continents. Cognitive distances were submitted to a specific multidimensional scaling algorithm in the 3D Euclidean space with the constraint that all cities had to lie on the same sphere. A…
ERIC Educational Resources Information Center
Vera, J. Fernando; Macias, Rodrigo; Heiser, Willem J.
2009-01-01
In this paper, we propose a cluster-MDS model for two-way one-mode continuous rating dissimilarity data. The model aims at partitioning the objects into classes and simultaneously representing the cluster centers in a low-dimensional space. Under the normal distribution assumption, a latent class model is developed in terms of the set of…
The Integration of Delta Prime (f) in a Multidimensional Space
NASA Technical Reports Server (NTRS)
Farassat, F.
1999-01-01
Consideration is given to the thickness noise term of the Ffowcs Williams-Hawkings equation when the time derivative is taken explicitly. An interpretation is presented of the integral I = ∫ φ(x) δ′(f) dx, where it is initially assumed that |∇f| is not equal to 1 on the surface f = 0.
The Role of Student Loan Programs in Higher Education Policy in the United States
ERIC Educational Resources Information Center
Amatya, Sachi
2009-01-01
The increasing cost of higher education, coupled with the inability of federal and state governments to sustain parallel increases in levels of funding for student financial aid, has led to significant growth of student loans. This project analyzes the multidimensional student loans space in the US. This project also compares and contrasts some of…
ERIC Educational Resources Information Center
Hyman, Ruth Bernstein
There still remains in our social institutions and individual lives a considerable splitting between feminine and masculine gender distinctions. The present study determined the dimensionality of the space of 53 admirable personality traits hypothesized to relate to femininity-masculinity and creativity, and assessed preferences of females versus…
NASA Astrophysics Data System (ADS)
Mohebbi, Akbar
2018-02-01
In this paper we propose two fast and accurate numerical methods for the solution of the multidimensional space fractional Ginzburg-Landau equation (FGLE). In the presented methods, to avoid solving a nonlinear system of algebraic equations and to increase the accuracy and efficiency of the method, we split the complex problem into simpler sub-problems using the split-step idea. For a homogeneous FGLE, we propose a method which has fourth-order accuracy in the time component and spectral accuracy in the space variable, and for the nonhomogeneous one, we introduce another scheme based on the Crank-Nicolson approach which has second-order accuracy in the time variable. Due to the use of the Fourier spectral method for the fractional Laplacian operator, the resulting schemes are fully diagonal and easy to code. Numerical results are reported in terms of accuracy, computational order and CPU time to demonstrate the accuracy and efficiency of the proposed methods and to compare the results with the analytical solutions. The results show that the present methods are accurate and require low CPU time. It is illustrated that the numerical results are in good agreement with the theoretical ones.
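The following minimal sketch illustrates the split-step Fourier-spectral idea in one space dimension, under the assumption of a fractional Ginzburg-Landau-type equation u_t = ρu − (1 + iν)(−Δ)^(β/2)u − (1 + iμ)|u|²u. The equation form, coefficients, grid, and the simple explicit treatment of the nonlinear term are illustrative choices, not the schemes proposed in the paper.

```python
import numpy as np

# 1-D split-step Fourier sketch for a fractional Ginzburg-Landau-type equation (assumed form).
N, L = 256, 20.0 * np.pi                       # grid points, domain length (illustrative)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

beta, rho, nu, mu = 1.5, 1.0, 1.0, -1.3        # fractional order and coefficients (illustrative)
dt, nsteps = 1e-3, 5000

# The fractional Laplacian is diagonal in Fourier space: -(-Delta)^(beta/2) -> -|k|**beta
lin = rho - (1.0 + 1j * nu) * np.abs(k) ** beta
expL = np.exp(dt * lin)                        # exact one-step linear propagator

rng = np.random.default_rng(0)
u = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))   # small random initial data
for _ in range(nsteps):
    u = u - dt * (1.0 + 1j * mu) * np.abs(u) ** 2 * u   # nonlinear sub-step (explicit Euler)
    u = np.fft.ifft(expL * np.fft.fft(u))               # linear sub-step, exact in Fourier space

print("final root-mean-square amplitude:", np.sqrt(np.mean(np.abs(u) ** 2)))
```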
Shape determination and control for large space structures
NASA Technical Reports Server (NTRS)
Weeks, C. J.
1981-01-01
An integral operator approach is used to derive solutions to static shape determination and control problems associated with large space structures. Problem assumptions include a linear self-adjoint system model, observations and control forces at discrete points, and performance criteria for the comparison of estimates or control forms. Results are illustrated by simulations in the one dimensional case with a flexible beam model, and in the multidimensional case with a finite model of a large space antenna. Modal expansions for terms in the solution algorithms are presented, using modes from the static or associated dynamic model. These expansions provide approximate solutions in the event that a closed-form analytical solution to the system boundary value problem is not available.
Multidimensional extended spatial evolutionary games.
Krześlak, Michał; Świerniak, Andrzej
2016-02-01
The goal of this paper is to study the classical hawk-dove model using mixed spatial evolutionary games (MSEG). In these games, played on a lattice, an additional spatial layer is introduced for dependence on more complex parameters and simulation of changes in the environment. Furthermore, diverse polymorphic equilibrium points dependent on cell reproduction, model parameters, and their simulation are discussed. Our analysis demonstrates the sensitivity properties of MSEGs and possibilities for further development. We discuss applications of MSEGs, particularly algorithms for modelling cell interactions during the development of tumours. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dissociation of performance and subjective measures of workload
NASA Technical Reports Server (NTRS)
Yeh, Yei-Yu; Wickens, Christopher D.
1988-01-01
A theory is presented to identify sources that produce dissociations between performance and subjective measures of workload. The theory states that performance is determined by (1) amount of resources invested, (2) resource efficiency, and (3) degree of competition for common resources in a multidimensional space described in the multiple-resources model. Subjective perception of workload, multidimensional in nature, increases with greater amounts of resource investment and with greater demands on working memory. Performance and subjective workload measures dissociate when greater resources are invested to improve performance of a resource-limited task; when demands on working memory are increased by time-sharing between concurrent tasks or between display elements; and when performance is sensitive to resource competition and subjective measures are more sensitive to total investment. These dissociation findings and their implications are discussed and directions for future research are suggested.
A Robust Absorbing Boundary Condition for Compressible Flows
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Jorgenson, Philip C. E.
2005-01-01
An absorbing non-reflecting boundary condition (NRBC) for practical computations in fluid dynamics and aeroacoustics is presented with theoretical proof. This paper is a continuation and improvement of a previous paper by the author. The absorbing NRBC technique is based on a first principle of non-reflection, which contains the essential physics that a plane wave solution of the Euler equations remains intact across the boundary. The technique is theoretically shown to work for a large class of finite volume approaches. When combined with the hyperbolic conservation laws, the NRBC is simple, robust and truly multi-dimensional; no additional implementation is needed except the prescribed physical boundary conditions. Several numerical examples in multi-dimensional spaces using two different finite volume schemes are illustrated to demonstrate its robustness in practical computations. Limitations and remedies of the technique are also discussed.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present the first fifth order, semi-discrete central upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.
Multidimensional fractional Schrödinger equation
NASA Astrophysics Data System (ADS)
Rodrigues, M. M.; Vieira, N.
2012-11-01
This work is intended to investigate the multi-dimensional space-time fractional Schrödinger equation of the form (^C D_{0+}^α u)(t,x) = (iħ/2m)(^C ∇^β u)(t,x), where ħ is the Planck constant divided by 2π, m is the mass and u(t,x) is the wave function of the particle. Here ^C D_{0+}^α and ^C ∇^β are operators of the Caputo fractional derivatives, with α ∈ ]0,1] and β ∈ ]1,2]. The wave function is obtained using Laplace and Fourier transform methods, and a symbolic operational form of the solutions in terms of the Mittag-Leffler functions is exhibited. Expressions for the wave function and for the quantum mechanical probability density are presented. Using the Banach fixed point theorem, the existence and uniqueness of solutions is studied for this kind of fractional differential equation.
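Because the operational solutions above are written in terms of Mittag-Leffler functions, a small helper for evaluating E_{α,β}(z) by direct series summation is sketched below; the truncation length is an illustrative choice, and plain summation is numerically reliable only for moderate |z|.

```python
import numpy as np
from scipy.special import gamma

# Hedged helper: truncated series for the Mittag-Leffler function E_{alpha,beta}(z).
def mittag_leffler(z, alpha, beta=1.0, n_terms=100):
    k = np.arange(n_terms)
    return np.sum(np.power(z, k) / gamma(alpha * k + beta))

# Sanity checks against known special cases:
print(mittag_leffler(1.0, 1.0), np.e)            # E_{1,1}(z) = exp(z)
print(mittag_leffler(-4.0, 2.0), np.cos(2.0))    # E_{2,1}(-z^2) = cos(z), here z = 2
```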
Differentially Private Synthesization of Multi-Dimensional Data using Copula Functions
Li, Haoran; Xiong, Li; Jiang, Xiaoqian
2014-01-01
Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single dimensional or low-dimensional histograms. They become problematic for high dimensional and large domain data due to increased perturbation error and computation complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall’s τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques. PMID:25405241
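A hedged sketch of the copula idea underlying this approach is given below: fit a Gaussian copula to multi-dimensional data from its one-dimensional marginals and sample synthetic records from it. The differential-privacy perturbation of the copula parameters, which is the core contribution of the paper, is deliberately omitted; the toy data and function names are illustrative.

```python
import numpy as np
from scipy import stats

def fit_gaussian_copula(data):
    """Correlation matrix of the normal scores of each column (the Gaussian copula parameter)."""
    n = data.shape[0]
    u = (stats.rankdata(data, axis=0) - 0.5) / n   # marginals -> uniform ranks
    z = stats.norm.ppf(u)                          # uniforms -> standard-normal scores
    return np.corrcoef(z, rowvar=False)

def sample_gaussian_copula(data, corr, n_samples, rng):
    """Synthetic rows: correlated normals -> uniforms -> inverse empirical marginals."""
    d = data.shape[1]
    z = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    u = stats.norm.cdf(z)
    return np.column_stack([np.quantile(data[:, j], u[:, j]) for j in range(d)])

rng = np.random.default_rng(0)
real = rng.gamma(shape=2.0, scale=1.0, size=(1000, 3))   # toy "real" data
real[:, 2] += 0.8 * real[:, 0]                           # induce some dependence
corr = fit_gaussian_copula(real)
fake = sample_gaussian_copula(real, corr, 1000, rng)
print(np.round(corr, 2))
```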
NASA Astrophysics Data System (ADS)
Lee, Lindsay; Mann, Graham; Carslaw, Ken; Toohey, Matthew; Aquila, Valentina
2016-04-01
The World Climate Research Program's SPARC initiative has a new international activity "Stratospheric Sulphur and its Role in Climate" (SSiRC) to better understand changes in stratospheric aerosol and precursor gaseous sulphur species. One component of SSiRC involves an intercomparison "ISA-MIP" of composition-climate models that simulate the stratospheric aerosol layer interactively. Within PoEMS each modelling group will run a "perturbed physics ensemble" (PPE) of interactive stratospheric aerosol (ISA) simulations of the Pinatubo eruption, varying several uncertain parameters associated with the eruption's SO2 emissions and model processes. A powerful new technique to quantify and attribute sources of uncertainty in complex global models is described by Lee et al. (2011, ACP). The analysis uses Gaussian emulation to derive a probability density function (pdf) of predicted quantities, essentially interpolating the PPE results in multi-dimensional parameter space. Once trained on the ensemble, the fast Gaussian emulator enables a Monte Carlo simulation and hence a full variance-based sensitivity analysis. The approach has already been used effectively by Carslaw et al. (2013, Nature) to quantify the uncertainty in the cloud albedo effect forcing from a 3D global aerosol-microphysics model, allowing the sensitivity of different predicted quantities to uncertainties in natural and anthropogenic emission types, and in structural parameters of the models, to be compared. Within ISA-MIP, each group will carry out a PPE of runs, with the subsequent analysis with the emulator assessing the uncertainty in the volcanic forcings predicted by each model. In this poster presentation we will give an outline of the "PoEMS" analysis, describing the uncertain parameters to be varied and the relevance to further understanding differences identified in previous international stratospheric aerosol assessments.
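The sketch below illustrates the emulation workflow in miniature, assuming a toy three-parameter "simulator", a Gaussian-process emulator from scikit-learn, and a crude binned estimate of first-order sensitivity indices; none of these choices reflects the actual ISA-MIP/PoEMS configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_model(theta):                      # toy stand-in for a full model run
    so2, inj_height, accom = theta
    return 2.0 * so2 + 0.5 * inj_height ** 2 + 0.1 * accom * so2

rng = np.random.default_rng(1)
design = rng.uniform(0.0, 1.0, size=(40, 3))     # small ensemble design (a Latin hypercube in practice)
y = np.array([expensive_model(t) for t in design])

# Train the emulator on the ensemble, then Monte Carlo sample the input space cheaply
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.3] * 3),
                              normalize_y=True).fit(design, y)
mc = rng.uniform(0.0, 1.0, size=(100_000, 3))
pred = gp.predict(mc)
print("emulated mean and 5-95% range:", pred.mean().round(3), np.percentile(pred, [5, 95]).round(3))

# Crude first-order sensitivity: variance of the conditional mean for each input
for j, name in enumerate(["parameter 1", "parameter 2", "parameter 3"]):
    bins = np.digitize(mc[:, j], np.linspace(0, 1, 21))
    cond_means = [pred[bins == b].mean() for b in range(1, 21)]
    print(name, "first-order index ~", round(np.var(cond_means) / pred.var(), 2))
```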
Sotomayor, Gonzalo; Hampel, Henrietta; Vázquez, Raúl F
2018-03-01
A non-supervised (k-means) and a supervised (k-Nearest Neighbour in combination with genetic algorithm optimisation, k-NN/GA) pattern recognition algorithms were applied for evaluating and interpreting a large complex matrix of water quality (WQ) data collected during five years (2008, 2010-2013) in the Paute river basin (southern Ecuador). 21 physical, chemical and microbiological parameters collected at 80 different WQ sampling stations were examined. At first, the k-means algorithm was carried out to identify classes of sampling stations regarding their associated WQ status by considering three internal validation indexes, i.e., Silhouette coefficient, Davies-Bouldin and Caliński-Harabasz. As a result, two WQ classes were identified, representing low (C1) and high (C2) pollution. The k-NN/GA algorithm was applied on the available data to construct a classification model with the two WQ classes, previously defined by the k-means algorithm, as the dependent variables and the 21 physical, chemical and microbiological parameters being the independent ones. This algorithm led to a significant reduction of the multidimensional space of independent variables to only nine, which are likely to explain most of the structure of the two identified WQ classes. These parameters are, namely, electric conductivity, faecal coliforms, dissolved oxygen, chlorides, total hardness, nitrate, total alkalinity, biochemical oxygen demand and turbidity. Further, the land use cover of the study basin revealed a very good agreement with the WQ spatial distribution suggested by the k-means algorithm, confirming the credibility of the main results of the used WQ data mining approach. Copyright © 2017 Elsevier Ltd. All rights reserved.
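A minimal sketch of the first stage of this workflow is shown below: standardize the parameter matrix, run k-means for several candidate numbers of classes, and compare the three internal validation indexes named above. The random matrix merely stands in for the 80-station by 21-parameter data set.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score, calinski_harabasz_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = StandardScaler().fit_transform(rng.normal(size=(80, 21)))   # stand-in for the WQ matrix

for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k,
          round(silhouette_score(X, labels), 3),          # higher is better
          round(davies_bouldin_score(X, labels), 3),      # lower is better
          round(calinski_harabasz_score(X, labels), 1))   # higher is better
```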
Multidimensional study on quality of life in children with type 1 diabetes.
Ausili, E; Tabacco, F; Focarelli, B; Padua, L; Crea, F; Caliandro, P; Pazzaglia, C; Marietti, G; Rendeli, C
2007-01-01
To study the Health Related Quality of Life (HRQoL) and metabolic assessment in 33 children affected with type 1 diabetes (18 males, 15 females; mean age 10.3 years). We used the Child Health Questionnaire-Parental Form 50 items (CHQ-PF50), measurements of metabolic control and we related them to patient management and family status. Quality of life (QoL) in diabetic children was worse than in the healthy sample. Interestingly, mean and last glycosylated hemoglobin (mean HbA1c r: -.4410 p < .01 and last HbA1c r: -.4012 p < .01), age of patients (r: -.4428; p < .009) and number of glycaemia controls (r: -.37, p < .03) were the most important parameters related to HRQoL parameters. This multidimensional study stressed that HRQoL is influenced by the metabolic assessment. Moreover, the report examined the parental perception of QoL in children with chronic diseases. A higher number of glycaemia controls/day, better metabolic control, lower age of children and earlier onset of diabetes produced better physical and psychological aspects of QoL. In comparison with adolescent patients, in children with diabetes factors such as the number of insulin injections and daily snacks, and the level of education of the mother, were not so important in influencing QoL. Unexpectedly, in this sample, life habits, family features, and anthropometric parameters did not correlate with specific domains of QoL.
Collaborative Sharing of Multidimensional Space-time Data Using HydroShare
NASA Astrophysics Data System (ADS)
Gan, T.; Tarboton, D. G.; Horsburgh, J. S.; Dash, P. K.; Idaszak, R.; Yi, H.; Blanton, B.
2015-12-01
HydroShare is a collaborative environment being developed for sharing hydrological data and models. It includes capability to upload data in many formats as resources that can be shared. The HydroShare data model for resources uses a specific format for the representation of each type of data and specifies metadata common to all resource types as well as metadata unique to specific resource types. The Network Common Data Form (NetCDF) was chosen as the format for multidimensional space-time data in HydroShare. NetCDF is widely used in hydrological and other geoscience modeling because it contains self-describing metadata and supports the creation of array-oriented datasets that may include three spatial dimensions, a time dimension and other user defined dimensions. For example, NetCDF may be used to represent precipitation or surface air temperature fields that have two dimensions in space and one dimension in time. This presentation will illustrate how NetCDF files are used in HydroShare. When a NetCDF file is loaded into HydroShare, header information is extracted using the "ncdump" utility. Python functions developed for the Django web framework on which HydroShare is based, extract science metadata present in the NetCDF file, saving the user from having to enter it. Where the file follows Climate Forecast (CF) convention and Attribute Convention for Dataset Discovery (ACDD) standards, metadata is thus automatically populated. Users also have the ability to add metadata to the resource that may not have been present in the original NetCDF file. HydroShare's metadata editing functionality then writes this science metadata back into the NetCDF file to maintain consistency between the science metadata in HydroShare and the metadata in the NetCDF file. This further helps researchers easily add metadata information following the CF and ACDD conventions. Additional data inspection and subsetting functions were developed, taking advantage of Python and command line libraries for working with NetCDF files. We describe the design and implementation of these features and illustrate how NetCDF files from a modeling application may be curated in HydroShare and thus enhance reproducibility of the associated research. We also discuss future development planned for multidimensional space-time data in HydroShare.
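The sketch below illustrates the kind of metadata round-trip described above using the netCDF4 Python library: read CF/ACDD-style global attributes from a NetCDF file and write an edited value back so that file and catalogue metadata stay consistent. The file name and attribute values are illustrative, and this is not the actual HydroShare code.

```python
from netCDF4 import Dataset

path = "precipitation_daily.nc"   # illustrative file name

with Dataset(path, "r") as nc:
    meta = {name: getattr(nc, name) for name in nc.ncattrs()}    # all global attributes
    print(meta.get("title"), meta.get("summary"), meta.get("Conventions"))
    for var in nc.variables.values():                            # per-variable metadata
        print(var.name, var.dimensions, getattr(var, "units", "n/a"))

with Dataset(path, "a") as nc:                                   # write edited metadata back
    nc.setncattr("summary", "Daily precipitation fields, updated abstract text.")
    nc.setncattr("keywords", "precipitation, hydrology")
```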
Analysis of crew functions as an aid in Space Station interior layout
NASA Technical Reports Server (NTRS)
Steinberg, A. L.; Tullis, Thomas S.; Bied, Barbra
1986-01-01
The Space Station must be designed to facilitate all of the functions that its crew will perform, both on-duty and off-duty, as efficiently and comfortably as possible. This paper examines the functions to be performed by the Space Station crew in order to make inferences about the design of an interior layout that optimizes crew productivity. Twenty-seven crew functions were defined, as well as five criteria for assessing relationships among all pairs of those functions. Hierarchical clustering and multidimensional scaling techniques were used to visually summarize the relationships. A key result was the identification of two dimensions for describing the configuration of crew functions: 'Private-Public' and 'Group-Individual'. Seven specific recommendations for Space Station interior layout were derived from the analyses.
Voigt, M; Briese, V; Pietzner, V; Kirchengast, S; Schneider, K T M; Straube, S; Jorch, G
2009-08-01
We aimed to examine the individual and combined effects of nine maternal parameters (biological, medical, and social) on rates of prematurity. Our objective was to provide obstetricians with a way of screening women for likely premature deliveries. We conducted a retrospective analysis on the data of about 2.3 million pregnancies taken from the German perinatal statistics of 1995-2000. Rates of prematurity were calculated with single and multi-dimensional analyses on the basis of nine maternal parameters (age, weight, height, number of previous live births, stillbirths, miscarriages and terminations of pregnancy, smoking status, previous premature delivery). The following combinations of parameters were investigated in particular: rates of prematurity according to the number of previous stillbirths, miscarriages, and terminations; rates of prematurity according to the number of previous live births and maternal age, height and weight. We also included daily cigarette consumption and previous premature deliveries in our analyses. The rate of prematurity (≤36 weeks of gestation) in our population was 7.0%; the rate of moderately early premature deliveries (32-36 weeks) was 5.9%, and the rate of very early premature deliveries (≤31 weeks) was 1.1%. Our multi-dimensional analyses revealed rates of prematurity (≤36 weeks) between 5.1% and 27.5% depending on the combination of parameters. We found the highest rate of prematurity of 27.5% in women with the following combination of parameters: ≥1 stillbirth, ≥2 terminations of pregnancy and ≥2 miscarriages. A rather high risk of premature delivery (>11%) was also found for elderly (≥40 years) grand multiparous women as well as small (≤155 cm) and slim women (≤45 kg). We have shown that certain combinations of maternal parameters are associated with a high risk of premature deliveries (>10%). The risk table that we present here may assist in predicting premature delivery. Georg Thieme Verlag KG Stuttgart · New York.
Shen, Lanxiao; Chen, Shan; Zhu, Xiaoyang; Han, Ce; Zheng, Xiaomin; Deng, Zhenxiang; Zhou, Yongqiang; Gong, Changfei; Xie, Congying; Jin, Xiance
2018-03-01
A multidimensional exploratory statistical method, canonical correlation analysis (CCA), was applied to evaluate the impact of complexity parameters on the plan quality and deliverability of volumetric-modulated arc therapy (VMAT) and to determine parameters in the generation of an ideal VMAT plan. Canonical correlations among complexity, quality and deliverability parameters of VMAT, as well as the contribution weights of different parameters, were investigated with 71 two-arc VMAT nasopharyngeal cancer (NPC) patients, and further verified with 28 one-arc VMAT prostate cancer patients. The average MU and MU per control point (MU/CP) for two-arc VMAT plans were 702.6 ± 55.7 and 3.9 ± 0.3 versus 504.6 ± 99.2 and 5.6 ± 1.1 for one-arc VMAT plans, respectively. The individual volume-based 3D gamma passing rates of clinical target volume (γCTV) and planning target volume (γPTV) for NPC and prostate cancer patients were 85.7% ± 9.0% vs 92.6% ± 7.8%, and 88.0% ± 7.6% vs 91.2% ± 7.7%, respectively. Plan complexity parameters of NPC patients were correlated with plan quality (P = 0.047) and individual volume-based 3D gamma indices γ(IV) (P = 0.01), in which MU/CP and segment area (SA) per control point (SA/CP) were weighted highly in correlation with γ(IV), and SA/CP, percentage of CPs with SA < 5 × 5 cm2 (%SA < 5 × 5 cm2) and PTV volume were weighted highly in correlation with plan quality with coefficients of 0.98, 0.68 and -0.99, respectively. Further verification with one-arc VMAT plans demonstrated similar results. In conclusion, MU, SA-related parameters and PTV volume were found to have strong effects on the plan quality and deliverability.
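A hedged sketch of the analysis pattern is given below, using scikit-learn's CCA to relate a block of complexity parameters to a block of quality/deliverability metrics; the random arrays stand in for the patient data and only the mechanics of extracting canonical pairs and weights are illustrated.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(7)
n = 71
complexity = rng.normal(size=(n, 5))            # stand-ins for MU, MU/CP, SA/CP, %SA<5x5cm2, PTV volume
quality = 0.6 * complexity[:, :2] + rng.normal(scale=0.8, size=(n, 2))   # stand-ins for gamma indices

cca = CCA(n_components=2).fit(complexity, quality)
U, V = cca.transform(complexity, quality)

for i in range(2):
    r = np.corrcoef(U[:, i], V[:, i])[0, 1]     # canonical correlation of pair i
    print(f"canonical pair {i}: r = {r:.2f}")
print("weights of each complexity parameter on the canonical variates:")
print(np.round(cca.x_weights_, 2))
```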
Face-space: A unifying concept in face recognition research.
Valentine, Tim; Lewis, Michael B; Hills, Peter J
2016-10-01
The concept of a multidimensional psychological space, in which faces can be represented according to their perceived properties, is fundamental to the modern theorist in face processing. Yet the idea was not clearly expressed until 1991. The background that led to the development of face-space is explained, and its continuing influence on theories of face processing is discussed. Research that has explored the properties of the face-space and sought to understand caricature, including facial adaptation paradigms, is reviewed. Face-space as a theoretical framework for understanding the effect of ethnicity and the development of face recognition is evaluated. Finally, two applications of face-space in the forensic setting are discussed. From initially being presented as a model to explain distinctiveness, inversion, and the effect of ethnicity, face-space has become a central pillar in many aspects of face processing. It is currently being developed to help us understand adaptation effects with faces. While being in principle a simple concept, face-space has shaped, and continues to shape, our understanding of face perception.
Mediterranean space-time extremes of wind wave sea states
NASA Astrophysics Data System (ADS)
Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro; Marcello Falcieri, Francesco; Bonaldo, Davide; Bergamasco, Andrea; Benetazzo, Alvise
2014-05-01
Traditionally, wind wave sea states during storms have been observed, modeled, and predicted mostly in the time domain, i.e. at a fixed point. In fact, the standard statistical models used in ocean waves analysis rely on the implicit assumption of long-crested waves. Nevertheless, waves in storms are mainly short-crested. Hence, spatio-temporal features of the wave field are crucial to accurately model the sea state characteristics and to provide reliable predictions, particularly of wave extremes. Indeed, the experimental evidence provided by novel instrumentations, e.g. WASS (Wave Acquisition Stereo System), showed that the maximum sea surface elevation gathered in time over an area, i.e. the space-time extreme, is larger than the one measured in time at a point, i.e. the time extreme. Recently, stochastic models used to estimate maxima of multidimensional Gaussian random fields have been applied to ocean waves statistics. These models are based either on Piterbarg's theorem or on Adler and Taylor's Euler Characteristic approach. Besides a probability of exceedance of a certain threshold, they can provide the expected space-time extreme of a sea state, as long as space-time wave features (i.e. some parameters of the directional variance density spectrum) are known. These models have been recently validated against WASS observations from fixed and moving platforms. In this context, our focus was modeling and predicting extremes of wind waves during storms. Thus, to intensively gather space-time extremes data over the Mediterranean region, we used directional spectra provided by the numerical wave model SWAN (Simulating WAves Nearshore). Therefore, we set up a 6x6 km2 resolution grid entailing most of the Mediterranean Sea and we forced it with COSMO-I7 high resolution (7x7 km2) hourly wind fields, within the 2007-2013 period. To obtain the space-time features, i.e. the spectral parameters, at each grid node and over the 6 simulated years, we developed a modified version of the SWAN model, the SWAN Space-Time (SWAN-ST). SWAN-ST results were post-processed to obtain the expected space-time extremes over the model domain. To this end, we applied the stochastic model of Fedele, developed starting from Adler and Taylor's approach, which we found to be more accurate and versatile with respect to Piterbarg's theorem. The results we obtained provide an alternative view of the Mediterranean extreme wave climate, which could represent the first step towards operational forecasting of space-time wave extremes, on the one hand, and the basis for a novel statistical standard wave model, on the other. These results may benefit marine designers, seafarers and other subjects operating at sea and exposed to the frequent and severe hazard represented by extreme wave conditions.
Data analytics using canonical correlation analysis and Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles
2017-07-01
A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively small number of combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte-Carlo based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
A component-based system for agricultural drought monitoring by remote sensing.
Dong, Heng; Li, Jun; Yuan, Yanbin; You, Lin; Chen, Chao
2017-01-01
In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, the drought-related software and platform development lag behind the theoretical research. The current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially the models based on multi-dimensional feature space. In view of the above problems, this paper aims at fixing this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on the Component Object Model (COM) to ensure the flexibility and extendibility of modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China's Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support for agricultural drought monitoring.
NASA Astrophysics Data System (ADS)
Leka, K. D.; Barnes, Graham; Wagner, Eric
2018-04-01
A classification infrastructure built upon Discriminant Analysis (DA) has been developed at NorthWest Research Associates for examining the statistical differences between samples of two known populations. Originally developed to examine the physical differences between flare-quiet and flare-imminent solar active regions, the infrastructure is described herein in some detail, including: parametrization of large datasets, schemes for handling "null" and "bad" data in multi-parameter analysis, application of non-parametric multi-dimensional DA, an extension through Bayes' theorem to probabilistic classification, and methods invoked for evaluating classifier success. The classifier infrastructure is applicable to a wide range of scientific questions in solar physics. We demonstrate its application to the question of distinguishing flare-imminent from flare-quiet solar active regions, updating results from the original publications that were based on different data and much smaller sample sizes. Finally, as a demonstration of "Research to Operations" efforts in the space-weather forecasting context, we present the Discriminant Analysis Flare Forecasting System (DAFFS), a near-real-time, operationally-running solar flare forecasting tool that was developed from the research-directed infrastructure.
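The sketch below illustrates, under simplifying assumptions, the combination of non-parametric discriminant analysis with Bayes' theorem described above: each population's density is estimated with a kernel density estimate and combined with class priors to give a posterior probability of membership. The two synthetic samples stand in for flare-quiet and flare-imminent parameter distributions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
quiet = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))       # flare-quiet stand-in sample
imminent = rng.normal(loc=[1.5, 1.0], scale=1.2, size=(60, 2))     # flare-imminent stand-in sample

kde_q = gaussian_kde(quiet.T)            # non-parametric density estimate per population
kde_i = gaussian_kde(imminent.T)
prior_i = len(imminent) / (len(imminent) + len(quiet))             # simple climatological prior
prior_q = 1.0 - prior_i

def flare_probability(x):
    """Posterior probability of the flare-imminent class via Bayes' theorem."""
    x = np.atleast_2d(x).T
    num = prior_i * kde_i(x)
    den = num + prior_q * kde_q(x)
    return num / den

print(flare_probability([0.0, 0.0]))     # deep in the quiet population -> low probability
print(flare_probability([2.0, 1.5]))     # near the imminent population -> higher probability
```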
Trust estimation of the semantic web using semantic web clustering
NASA Astrophysics Data System (ADS)
Shirgahi, Hossein; Mohsenzadeh, Mehran; Haj Seyyed Javadi, Hamid
2017-05-01
The development of the semantic web and social networks is undeniable in today's Internet world. The widespread nature of the semantic web makes assessing trust in this field very challenging. In recent years, extensive research has been done to estimate the trust of the semantic web. Since trust of the semantic web is a multidimensional problem, in this paper we used parameters of social network authority, the value of page-link authority and semantic authority to assess the trust. Due to the large space of the semantic network, we restricted the problem scope to clusters of semantic subnetworks, obtained the trust of each cluster's elements locally, and calculated the trust of outside resources according to their local trusts and the trust of clusters in each other. According to the experimental results, the proposed method achieves an F-score of more than 79%, which is on average about 11.9% higher than the Eigen, Tidal and centralised trust methods. The mean error of the proposed method is 12.936, which is on average 9.75% less than that of the Eigen and Tidal trust methods.
Selective Data Acquisition in NMR. The Quantification of Anti-phase Scalar Couplings
NASA Astrophysics Data System (ADS)
Hodgkinson, P.; Holmes, K. J.; Hore, P. J.
Almost all time-domain NMR experiments employ "linear sampling," in which the NMR response is digitized at equally spaced times, with uniform signal averaging. Here, the possibilities of nonlinear sampling are explored using anti-phase doublets in the indirectly detected dimensions of multidimensional COSY-type experiments as an example. The Cramér-Rao lower bounds are used to evaluate and optimize experiments in which the sampling points, or the extent of signal averaging at each point, or both, are varied. The optimal nonlinear sampling for the estimation of the coupling constant J, by model fitting, turns out to involve just a few key time points, for example, at the first node (t = 1/J) of the sin(πJt) modulation. Such sparse sampling patterns can be used to derive more practical strategies, in which the sampling or the signal averaging is distributed around the most significant time points. The improvements in the quantification of NMR parameters can be quite substantial, especially when, as is often the case for indirectly detected dimensions, the total number of samples is limited by the time available.
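A small numerical illustration of the Cramér-Rao argument is sketched below for a single-parameter case, assuming a signal model s(t) = A sin(πJt) exp(−t/T2) with known A and T2 and white noise of standard deviation σ; the parameter values are invented, and the comparison simply contrasts uniform sampling with samples clustered near the first node t = 1/J.

```python
import numpy as np

# For a single unknown J (A, T2 assumed known), the Cramer-Rao lower bound is
#   var(J_hat) >= 1 / sum_k (ds/dJ(t_k))^2 / sigma^2.
A, J, T2, sigma = 1.0, 7.0, 0.4, 0.05            # illustrative values (J in Hz, T2 in s)

def dsdJ(t):
    return A * np.pi * t * np.cos(np.pi * J * t) * np.exp(-t / T2)

def crlb(times):
    fisher = np.sum(dsdJ(times) ** 2) / sigma ** 2
    return 1.0 / fisher

n = 32
uniform = np.linspace(0.0, 2.0 / J, n, endpoint=False)     # conventional linear sampling
clustered = 1.0 / J + 0.01 * np.linspace(-1, 1, n)         # samples near the first node t = 1/J

print("CRLB, uniform sampling  :", crlb(uniform))
print("CRLB, clustered sampling:", crlb(clustered))
```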
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ki Ha; Becker, Alex; Framgos, William
1999-06-01
Non-invasive, high-resolution imaging of the shallow subsurface is needed for delineation of buried waste, detection of unexploded ordnance, verification and monitoring of containment structures, and other environmental applications. Electromagnetic measurements at frequencies between 1 and 100 MHz are important for such applications, because the induction number of many targets is small and the ability to determine the dielectric permittivity in addition to electrical conductivity of the subsurface is possible. Earlier workers were successful in developing systems for detecting anomalous areas, but no quantifiable information was accurately determined. For high-resolution imaging, accurate measurements are necessary so the field data can be mapped into the space of the subsurface parameters. We are developing a non-invasive method for accurately imaging the electrical conductivity and dielectric permittivity of the shallow subsurface using the plane wave impedance approach. Electric and magnetic sensors are being tested in a known area against theoretical predictions, thereby insuring that the data collected with the high-frequency impedance (HFI) system will support high-resolution, multi-dimensional imaging techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ki Ha; Becker, Alex; Tseng, Hung-Wen
2002-11-20
Non-invasive, high-resolution imaging of the shallow subsurface is needed for delineation of buried waste, detection of unexploded ordnance, verification and monitoring of containment structures, and other environmental applications. Electromagnetic (EM) measurements at frequencies between 1 and 100 MHz are important for such applications, because the induction number of many targets is small and the ability to determine the dielectric permittivity in addition to electrical conductivity of the subsurface is possible. Earlier workers were successful in developing systems for detecting anomalous areas, but no quantifiable information was accurately determined. For high-resolution imaging, accurate measurements are necessary so the field data can be mapped into the space of the subsurface parameters. We are developing a non-invasive method for accurately mapping the electrical conductivity and dielectric permittivity of the shallow subsurface using the EM impedance approach (Frangos, 2001; Lee and Becker, 2001; Song et al., 2002). Electric and magnetic sensors are being tested in a known area against theoretical predictions, thereby insuring that the data collected with the high-frequency impedance (HFI) system will support high-resolution, multi-dimensional imaging techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ki Ha; Becker, Alex
2000-06-01
Non-invasive, high-resolution imaging of the shallow subsurface is needed for delineation of buried waste, detection of unexploded ordnance, verification and monitoring of containment structures, and other environmental applications. Electromagnetic measurements at frequencies between 1 and 100 MHz are important for such applications, because the induction number of many targets is small and the ability to determine the dielectric permittivity in addition to electrical conductivity of the subsurface is possible. Earlier workers were successful in developing systems for detecting anomalous areas, but no quantifiable information was accurately determined. For high-resolution imaging, accurate measurements are necessary so the field data can be mapped into the space of the subsurface parameters. We are developing a non-invasive method for accurately imaging the electrical conductivity and dielectric permittivity of the shallow subsurface using the plane wave impedance approach (Song et al., 1997). Electric and magnetic sensors are being tested in a known area against theoretical predictions, thereby insuring that the data collected with the high-frequency impedance (HFI) system will support high-resolution, multi-dimensional imaging techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ki Ha; Becker, Alex; Tseng, Hung-Wen
2001-06-10
Non-invasive, high-resolution imaging of the shallow subsurface is needed for delineation of buried waste, detection of unexploded ordnance, verification and monitoring of containment structures, and other environmental applications. Electromagnetic (EM) measurements at frequencies between 1 and 100 MHz are important for such applications, because the induction number of many targets is small and the ability to determine the dielectric permittivity in addition to electrical conductivity of the subsurface is possible. Earlier workers were successful in developing systems for detecting anomalous areas, but no quantifiable information was accurately determined. For high-resolution imaging, accurate measurements are necessary so the field data can be mapped into the space of the subsurface parameters. We are developing a non-invasive method for accurately mapping the electrical conductivity and dielectric permittivity of the shallow subsurface using the EM impedance approach (Frangos, 2001; Lee and Becker, 2001). Electric and magnetic sensors are being tested in a known area against theoretical predictions, thereby insuring that the data collected with the high-frequency impedance (HFI) system will support high-resolution, multi-dimensional imaging techniques.
Genetic Algorithm-Based Optimization to Match Asteroid Energy Deposition Curves
NASA Technical Reports Server (NTRS)
Tarano, Ana; Mathias, Donovan; Wheeler, Lorien; Close, Sigrid
2018-01-01
An asteroid entering Earth's atmosphere deposits energy along its path due to thermal ablation and dissipative forces that can be measured by ground-based and spaceborne instruments. Inference of pre-entry asteroid properties and characterization of the atmospheric breakup is facilitated by using an analytic fragment-cloud model (FCM) in conjunction with a Genetic Algorithm (GA). This optimization technique is used to inversely solve for the asteroid's entry properties, such as diameter, density, strength, velocity, entry angle, and strength scaling, from simulations using FCM. The fitness evaluation of these parameters involves minimizing the error between the physics-based calculated energy deposition and the observed meteor energy deposition, to ascertain the best match. This steady-state GA provided sets of solutions agreeing with the literature, such as for the meteors over Chelyabinsk, Russia in 2013 and Tagish Lake, Canada in 2000, which were used as case studies in order to validate the optimization routine. The assisted exploration and exploitation of this multi-dimensional search space enables inference and uncertainty analysis that can inform studies of near-Earth asteroids and consequently improve risk assessment.
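The sketch below shows the shape of the inverse problem in miniature: a small steady-state genetic algorithm searches entry parameters so that a forward model's energy-deposition curve matches an "observed" one. The forward model is a toy Gaussian profile, not the fragment-cloud model, and all parameter ranges and GA settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
alt = np.linspace(20.0, 60.0, 200)                        # altitude grid, km

def forward_model(params):                                # toy stand-in for the FCM forward model
    peak_alt, width, amplitude = params
    return amplitude * np.exp(-0.5 * ((alt - peak_alt) / width) ** 2)

truth = np.array([29.5, 4.0, 90.0])                       # synthetic "observed" event
observed = forward_model(truth) + rng.normal(scale=1.0, size=alt.size)

bounds = np.array([[22.0, 50.0], [1.0, 10.0], [10.0, 200.0]])

def fitness(params):
    return -np.mean((forward_model(params) - observed) ** 2)   # maximize = minimize misfit

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 3))
for _ in range(1500):                                     # steady-state GA loop
    scores = np.array([fitness(p) for p in pop])
    worst = np.argmin(scores)
    a, b = pop[rng.choice(60, 2, replace=False)]          # two random parents
    child = np.where(rng.random(3) < 0.5, a, b)           # uniform crossover
    child += rng.normal(scale=0.05 * (bounds[:, 1] - bounds[:, 0]))   # mutation
    child = np.clip(child, bounds[:, 0], bounds[:, 1])
    if fitness(child) > scores[worst]:
        pop[worst] = child                                # replace the worst individual

best = pop[np.argmax([fitness(p) for p in pop])]
print("best-fit parameters:", np.round(best, 2), " true:", truth)
```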
Ultrasonic attenuation and phase velocity of high-density polyethylene pipe material.
Egerton, J S; Lowe, M J S; Huthwaite, P; Halai, H V
2017-03-01
Knowledge of acoustic properties is crucial for ultrasonic or sonic imaging and signal detection in nondestructive evaluation (NDE), medical imaging, and seismology. Accurately and reliably obtaining these is particularly challenging for the NDE of high-density polyethylene (HDPE), such as is used in many water or gas pipes, because the properties vary greatly with frequency, temperature, direction and spatial location. Therefore the work reported here was undertaken in order to establish a basis for such a multiparameter description. The approach is general but the study specifically addresses HDPE and includes measured data values. A regression method that uses a neural network algorithm was devised and is applicable to any such multiparameter acoustic-properties dataset. This algorithm includes constraints to respect the Kramers-Kronig causality relationship between the speed and attenuation of waves in a viscoelastic medium. These constrained acoustic properties are fully described in a multidimensional parameter space, varying with frequency, depth, temperature, and direction. The resulting uncertainties in the dependence of the acoustic properties on the above variables are better than 4% and 2%, respectively, for attenuation and phase velocity, and therefore can prevent major defect-imaging errors.
Uniqueness of thermodynamic projector and kinetic basis of molecular individualism
NASA Astrophysics Data System (ADS)
Gorban, Alexander N.; Karlin, Iliya V.
2004-05-01
Three results are presented: First, we solve the problem of persistence of dissipation for reduction of kinetic models. Kinetic equations with thermodynamic Lyapunov functions are studied. Uniqueness of the thermodynamic projector is proven: There exists only one projector which transforms any vector field equipped with the given Lyapunov function into a vector field with the same Lyapunov function for a given ansatz manifold which is not tangent to the Lyapunov function levels. Second, we use the thermodynamic projector for developing the short memory approximation and coarse-graining for general nonlinear dynamic systems. We prove that in this approximation the entropy production increases (the theorem about entropy overproduction). As an example, we apply the thermodynamic projector to derive the equations of reduced kinetics for the Fokker-Planck equation. A new class of closures is developed, the kinetic multipeak polyhedra. Distributions of this type are expected in kinetic models with multidimensional instability as universally as the Gaussian distribution appears for stable systems. The number of possible relatively stable states of a nonequilibrium system grows as 2^m, and the number of macroscopic parameters is of order mn, where n is the dimension of configuration space and m is the number of independent unstable directions in this space. The elaborated class of closures and equations is intended to describe the effects of “molecular individualism”.
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Englander, Arnold C.
2014-01-01
Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
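A compact sketch of MBH with long-tailed hops is given below, using a Cauchy perturbation around the incumbent solution and a standard local optimizer; the Rosenbrock test function stands in for a trajectory-optimization cost and all settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize, rosen

def mbh(x0, n_hops=200, scale=0.5, seed=0):
    """Monotonic basin hopping: perturb the incumbent, locally optimize, keep only improvements."""
    rng = np.random.default_rng(seed)
    best = minimize(rosen, x0).x
    best_f = rosen(best)
    for _ in range(n_hops):
        hop = scale * rng.standard_cauchy(size=best.size)   # long-tailed (Cauchy) perturbation
        res = minimize(rosen, best + hop)                    # local optimization from the hopped point
        if res.fun < best_f:                                 # monotonic acceptance rule
            best, best_f = res.x, res.fun
    return best, best_f

x, f = mbh(np.full(6, 3.0))
print("best point:", np.round(x, 3), " best value:", f)
```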
DD-HDS: A method for visualization and exploration of high-dimensional data.
Lespinats, Sylvain; Verleysen, Michel; Giron, Alain; Fertil, Bernard
2007-09-01
Mapping high-dimensional data in a low-dimensional space, for example, for visualization, is a problem of increasingly major concern in data analysis. This paper presents data-driven high-dimensional scaling (DD-HDS), a nonlinear mapping method that follows the line of multidimensional scaling (MDS) approach, based on the preservation of distances between pairs of data. It improves the performance of existing competitors with respect to the representation of high-dimensional data, in two ways. It introduces (1) a specific weighting of distances between data taking into account the concentration of measure phenomenon and (2) a symmetric handling of short distances in the original and output spaces, avoiding false neighbor representations while still allowing some necessary tears in the original distribution. More precisely, the weighting is set according to the effective distribution of distances in the data set, with the exception of a single user-defined parameter setting the tradeoff between local neighborhood preservation and global mapping. The optimization of the stress criterion designed for the mapping is realized by "force-directed placement" (FDP). The mappings of low- and high-dimensional data sets are presented as illustrations of the features and advantages of the proposed algorithm. The weighting function specific to high-dimensional data and the symmetric handling of short distances can be easily incorporated in most distance preservation-based nonlinear dimensionality reduction methods.
Uncertainty in modeled upper ocean heat content change
NASA Astrophysics Data System (ADS)
Tokmakian, Robin; Challenor, Peter
2014-02-01
This paper examines the uncertainty in the change in the heat content in the ocean component of a general circulation model. We describe the design and implementation of our statistical methodology. Using an ensemble of model runs and an emulator, we produce an estimate of the full probability distribution function (PDF) for the change in upper ocean heat in an Atmosphere/Ocean General Circulation Model, the Community Climate System Model v. 3, across a multi-dimensional input space. We show how the emulator of the GCM's heat content change and hence, the PDF, can be validated and how implausible outcomes from the emulator can be identified when compared to observational estimates of the metric. In addition, the paper describes how the emulator outcomes and related uncertainty information might inform estimates of the same metric from a multi-model Coupled Model Intercomparison Project phase 3 ensemble. We illustrate how to (1) construct an ensemble based on experiment design methods, (2) construct and evaluate an emulator for a particular metric of a complex model, (3) validate the emulator using observational estimates and explore the input space with respect to implausible outcomes and (4) contribute to the understanding of uncertainties within a multi-model ensemble. Finally, we estimate the most likely value for heat content change and its uncertainty for the model, with respect to both observations and the uncertainty in the value for the input parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacvarov, D.C.
1981-01-01
A new method for probabilistic risk assessment of transmission line insulation flashovers caused by lightning strokes is presented. The approach of applying the finite element method to probabilistic risk assessment is demonstrated to be very powerful, for two reasons. First, the finite element method is inherently suitable for analysis of three-dimensional spaces where the parameters, such as trivariate probability densities of the lightning currents, are non-uniformly distributed. Second, the finite element method permits non-uniform discretization of the three-dimensional probability spaces, yielding high accuracy in critical regions, such as the area of low-probability events, while at the same time maintaining coarse discretization in the non-critical areas to keep the number of grid points and the size of the problem at a manageably low level. The finite element probabilistic risk assessment method presented here is based on a new multidimensional search algorithm. It utilizes an efficient iterative technique for finite element interpolation of the transmission line insulation flashover criteria computed with an electromagnetic transients program. Compared to other available methods, the new finite element probabilistic risk assessment method is significantly more accurate and approximately two orders of magnitude more computationally efficient. The method is especially suited for accurate assessment of rare, very low probability events.
Computational Aeroacoustics by the Space-time CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.
2001-01-01
In recent years, a new numerical methodology for conservation laws, the Space-Time Conservation Element and Solution Element (CE/SE) Method, was developed by Dr. Chang of NASA Glenn Research Center and collaborators. By nature, the new method may be categorized as a finite volume method, where the conservation element (CE) is equivalent to a finite control volume (or cell) and the solution element (SE) can be understood as the cell interface. However, due to its rigorous treatment of the fluxes and geometry, it differs from existing schemes. The CE/SE scheme features: (1) space and time treated on the same footing, with the integral equations of the conservation laws solved to second-order accuracy, (2) high resolution, low dispersion and low dissipation, (3) a novel, truly multi-dimensional, simple but effective non-reflecting boundary condition, (4) effortless implementation, with no numerical fix or parameter choice needed, and (5) robustness sufficient to cover a wide spectrum of compressible flows, from weak linear acoustic waves to strong, discontinuous waves (shocks), making it appropriate for linear and nonlinear aeroacoustics. Currently, the CE/SE scheme has been developed to such a stage that a 3-D unstructured CE/SE Navier-Stokes solver is already available. However, in the present paper, as a general introduction to the CE/SE method, only the 2-D unstructured Euler CE/SE solver is chosen as a prototype and is sketched in Section 2. Applications of the CE/SE scheme to linear aeroacoustics, nonlinear aeroacoustics, and airframe noise are then presented in Sections 3, 4, and 5, respectively, to demonstrate its robustness and capability.
Assessment of urban green space structures and their quality from a multidimensional perspective.
Daniels, Benjamin; Zaunbrecher, Barbara S; Paas, Bastian; Ottermanns, Richard; Ziefle, Martina; Roß-Nickoll, Martina
2018-02-15
Facing the growing number of people living in cities and, at the same time, the need for compact and sustainable urban development to mitigate urban sprawl, it becomes increasingly important that green spaces in compact cities are designed to meet the various needs within an urban environment. Urban green spaces have a multitude of functions: maintaining ecological processes and resulting services, e.g. providing habitat for animals and plants, providing a beneficial city microclimate, as well as recreational space for citizens. Regarding these requirements, currently existing assessment procedures for green spaces have some major shortcomings, which are discussed in this paper. It is argued why a more detailed spatial level as well as a distinction between natural and artificial varieties of structural elements is justified and needed, and how the assessment of urban green spaces benefits from the multidimensional perspective that is applied. By analyzing a selection of structural elements from an ecological, microclimatic and social perspective, indicator values are derived and a new, holistic metric is proposed. The results of the integrated analysis led to two major findings: first, that for some elements the evaluation differs to a great extent between the different perspectives (disciplines), and second, that natural and artificial varieties are, in most cases, evaluated considerably differently from each other. The differences between the perspectives call for an integrative planning policy which acknowledges the varying contribution of a structural element to different purposes (ecological, microclimatic, social) as well as a discussion about the prioritization of those purposes. The differences in the evaluation of natural vs. artificial elements verify the assumption that indicators which consider only generic elements fail to account for those refinements and are thus less suitable for planning and assessment purposes. Implications, challenges and scenarios for the application of such a metric are finally discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
The cancellous bone multiscale morphology-elasticity relationship.
Agić, Ante; Nikolić, Vasilije; Mijović, Budimir
2006-06-01
The effective property relations of cancellous bone are analysed across two scales: the properties of a representative volume element on the microscale and a statistical measure of trabecular trajectory orientation on the mesoscale. Anisotropy of the microstructure is described by a fabric tensor measure, with the trajectory orientation tensor serving as the bridging-scale connection. The scattered measured data (elastic modulus, trajectory orientation, apparent density) from compression tests are fitted by a stochastic interpolation procedure. The engineering constants of the elasticity tensor are estimated by a least-squares fit in multidimensional space using the Nelder-Mead simplex. The multiaxial failure surface in strain space is constructed and interpolated by a modified super-ellipsoid.
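A minimal sketch of the kind of least-squares fit by Nelder-Mead simplex mentioned above. The model callable, the covariate structure, and the tolerance settings are hypothetical placeholders, not the authors' actual formulation.

    import numpy as np
    from scipy.optimize import minimize

    def fit_constants(model, covariates, measured_modulus, x0):
        # model(params, covariate) -> predicted elastic modulus for one
        # specimen; each covariate bundles e.g. apparent density and
        # trajectory orientation; x0 is the initial guess for the
        # engineering constants.
        def residual_ss(params):
            predicted = np.array([model(params, c) for c in covariates])
            return np.sum((predicted - measured_modulus) ** 2)
        result = minimize(residual_ss, x0, method="Nelder-Mead",
                          options={"xatol": 1e-6, "fatol": 1e-6})
        return result.x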
The method of trend analysis of parameters time series of gas-turbine engine state
NASA Astrophysics Data System (ADS)
Hvozdeva, I.; Myrhorod, V.; Derenh, Y.
2017-10-01
This research substantiates an approach to interval estimation of the trend component of a time series. The well-known methods of spectral and trend analysis are used for multidimensional data arrays. Interval estimation of the trend component is proposed for time series whose autocorrelation matrix possesses a prevailing eigenvalue. The properties of the time series autocorrelation matrix are identified.
ERIC Educational Resources Information Center
Haberman, Shelby J.
2013-01-01
A general program for item-response analysis is described that uses the stabilized Newton-Raphson algorithm. This program is written to be compliant with Fortran 2003 standards and is sufficiently general to handle independent variables, multidimensional ability parameters, and matrix sampling. The ability variables may be either polytomous or…
Applications of wavelet-based compression to multidimensional Earth science data
NASA Technical Reports Server (NTRS)
Bradley, Jonathan N.; Brislawn, Christopher M.
1993-01-01
A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
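A toy illustration of coupling a 2-D DWT with quantization of the detail subbands. Scalar k-means codebooks stand in for the paper's optimized per-subband vector quantizers, and the wavelet, level, and codebook-size choices are arbitrary assumptions.

    import numpy as np
    import pywt
    from sklearn.cluster import KMeans

    def wavelet_vq(image, wavelet="db4", levels=3, codebook_size=64):
        # Decompose, quantize each detail subband with its own codebook,
        # then reconstruct to inspect the distortion introduced.
        coeffs = pywt.wavedec2(image, wavelet, level=levels)
        quantized = [coeffs[0]]                    # keep approximation band exact
        for detail_bands in coeffs[1:]:
            q_bands = []
            for band in detail_bands:
                vals = band.reshape(-1, 1)
                km = KMeans(n_clusters=min(codebook_size, len(vals)),
                            n_init=3, random_state=0).fit(vals)
                q_bands.append(km.cluster_centers_[km.labels_].reshape(band.shape))
            quantized.append(tuple(q_bands))
        recon = pywt.waverec2(quantized, wavelet)
        return recon[:image.shape[0], :image.shape[1]]

    def snr_db(original, reconstructed):
        # Signal-to-noise ratio of the reconstruction, in decibels.
        noise = original - reconstructed
        return 10.0 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))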
NASA Astrophysics Data System (ADS)
VandeVondele, Joost; Rothlisberger, Ursula
2000-09-01
We present a method for calculating multidimensional free energy surfaces within the limited time scale of a first-principles molecular dynamics scheme. The sampling efficiency is enhanced using selected terms of a classical force field as a bias potential. This simple procedure yields a very substantial increase in sampling accuracy while retaining the high quality of the underlying ab initio potential surface and can thus be used for a parameter free calculation of free energy surfaces. The success of the method is demonstrated by the applications to two gas phase molecules, ethane and peroxynitrous acid, as test case systems. A statistical analysis of the results shows that the entire free energy landscape is well converged within a 40 ps simulation at 500 K, even for a system with barriers as high as 15 kcal/mol.
Proffitt, D R; Kaiser, M K; Whelan, S M
1990-07-01
In five experiments, assessments were made of people's understandings about the dynamics of wheels. It was found that undergraduates make highly erroneous dynamical judgments about the motions of this commonplace event, both in explicit problem-solving contexts and when viewing ongoing events. These problems were also presented to bicycle racers and high-school physics teachers; both groups were found to exhibit misunderstandings similar to those of naive undergraduates. Findings were related to our account of dynamical event complexity. The essence of this account is that people encounter difficulties when evaluating the dynamics of any mechanical system that has more than one dynamically relevant object parameter. A rotating wheel is multidimensional in this respect: in addition to the motion of its center of mass, its mass distribution is also of dynamical relevance. People do not spontaneously form the essential multidimensional quantities required to adequately evaluate wheel dynamics.
A method of using cluster analysis to study statistical dependence in multivariate data
NASA Technical Reports Server (NTRS)
Borucki, W. J.; Card, D. H.; Lyle, G. C.
1975-01-01
A technique is presented that uses both cluster analysis and a Monte Carlo significance test of clusters to discover associations between variables in multidimensional data. The method is applied to an example of a noisy function in three-dimensional space, to a sample from a mixture of three bivariate normal distributions, and to the well-known Fisher's Iris data.
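The following sketch conveys the flavour of combining clustering with a Monte Carlo significance test: the within-cluster dispersion of the data is compared with that of surrogate data whose columns are independently shuffled, destroying any association between variables. The choice of k-means, inertia as the statistic, and column shuffling as the null model are simplifying assumptions, not the authors' exact procedure.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_significance(X, k=3, n_mc=200, seed=0):
        rng = np.random.default_rng(seed)
        observed = KMeans(n_clusters=k, n_init=10,
                          random_state=seed).fit(X).inertia_
        null = np.empty(n_mc)
        for i in range(n_mc):
            # shuffle each variable independently to break associations
            Xs = np.column_stack([rng.permutation(col) for col in X.T])
            null[i] = KMeans(n_clusters=k, n_init=10,
                             random_state=seed).fit(Xs).inertia_
        # fraction of surrogate sets clustering at least as tightly
        p_value = np.mean(null <= observed)
        return observed, p_value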
Constraining the redshifts of TeV BL Lac objects
NASA Astrophysics Data System (ADS)
Qin, Longhua; Wang, Jiancheng; Yan, Dahai; Yang, Chuyuan; Yuan, Zunli; Zhou, Ming
2018-01-01
We present a model-dependent method to estimate the redshifts of three TeV BL Lac objects (BL Lacs) by fitting their (quasi-)simultaneous multi-waveband spectral energy distributions (SEDs) with a one-zone leptonic synchrotron self-Compton model. Considering the impact of electron energy distributions (EEDs) on the results, we use three types of EEDs to fit the SEDs: a power-law EED with exponential cut-off (PLC), a log-parabola (PLLP) EED and a broken power-law (BPL) EED. We also use a parameter α to describe the uncertainties of the extragalactic background light models, as in Abdo et al. We then use a Markov chain Monte Carlo method to explore the multi-dimensional parameter space and obtain the uncertainties of the model parameters based on the observational data. We apply our method to obtain the redshifts of three TeV BL Lac objects at the marginalized 68 per cent confidence level, and find that the PLC EED does not fit the SEDs. For 3C66A, the redshift is 0.14-0.31 and 0.16-0.32 in the BPL and PLLP EEDs. For PKS1424+240, the redshift is 0.55-0.68 and 0.55-0.67 in the BPL and PLLP EEDs. For PG1553+113, the redshift is 0.22-0.48 and 0.22-0.39 in the BPL and PLLP EEDs. We also estimate the redshift of PKS1424+240 in the high stage to be 0.46-0.67 in the PLLP EED, roughly consistent with that in the low stage.
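A bare-bones sketch of exploring a multi-dimensional parameter space with an affine-invariant MCMC ensemble sampler (here the emcee package). The model_sed callable, the flat-prior bounds, and the Gaussian likelihood are placeholder assumptions; the actual SED model, priors, and likelihood used in the study are more detailed.

    import numpy as np
    import emcee

    def log_prob(theta, freqs, flux_obs, flux_err, model_sed, bounds):
        # Flat priors inside the bounds, Gaussian likelihood on the SED points.
        if np.any(theta < bounds[:, 0]) or np.any(theta > bounds[:, 1]):
            return -np.inf
        resid = (model_sed(theta, freqs) - flux_obs) / flux_err
        return -0.5 * np.sum(resid ** 2)

    def sample_posterior(freqs, flux_obs, flux_err, model_sed, bounds,
                         n_steps=5000, seed=0):
        # theta bundles the model parameters (EED shape, magnetic field,
        # Doppler factor, redshift, EBL scaling alpha, ...).
        rng = np.random.default_rng(seed)
        ndim = bounds.shape[0]
        nwalkers = 4 * ndim
        p0 = bounds[:, 0] + (bounds[:, 1] - bounds[:, 0]) * rng.random((nwalkers, ndim))
        sampler = emcee.EnsembleSampler(
            nwalkers, ndim, log_prob,
            args=(freqs, flux_obs, flux_err, model_sed, bounds))
        sampler.run_mcmc(p0, n_steps)
        return sampler.get_chain(discard=n_steps // 2, flat=True)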
Peeters, Elisabeth; De Beer, Thomas; Vervaet, Chris; Remon, Jean-Paul
2015-04-01
Tableting is a complex process due to the large number of process parameters that can be varied. Knowledge and understanding of the influence of these parameters on the final product quality is of great importance for the industry, allowing economic efficiency and parametric release. The aim of this study was to investigate the influence of paddle speeds and fill depth at different tableting speeds on the weight and weight variability of tablets. Two excipients possessing different flow behavior, microcrystalline cellulose (MCC) and dibasic calcium phosphate dihydrate (DCP), were selected as model powders. Tablets were manufactured on a high-speed rotary tablet press using design of experiments (DoE). During each experiment, the volume of powder in the forced feeder was also measured. Analysis of the DoE revealed that paddle speeds are of minor importance for tablet weight but significantly affect the volume of powder inside the feeder in the case of powders with excellent flowability (DCP). The opposite effect of paddle speed was observed for fairly flowing powders (MCC). Tableting speed played a role in weight and weight variability, whereas changing fill depth exclusively influenced tablet weight. The DoE approach allowed predicting the optimum combination of process parameters leading to minimum tablet weight variability. Monte Carlo simulations allowed assessing the probability of exceeding the acceptable response limits if factor settings were varied around their optimum. This multi-dimensional combination and interaction of input variables leading to response criteria with acceptable probability reflected the design space.
Determining fundamental properties of matter created in ultrarelativistic heavy-ion collisions
NASA Astrophysics Data System (ADS)
Novak, J.; Novak, K.; Pratt, S.; Vredevoogd, J.; Coleman-Smith, C. E.; Wolpert, R. L.
2014-03-01
Posterior distributions for physical parameters describing relativistic heavy-ion collisions, such as the viscosity of the quark-gluon plasma, are extracted through a comparison of hydrodynamic-based transport models to experimental results from 100A GeV + 100A GeV Au+Au collisions at the Relativistic Heavy Ion Collider. By simultaneously varying six parameters and by evaluating several classes of observables, we are able to explore the complex intertwined dependencies of observables on model parameters. The methods provide a full multidimensional posterior distribution for the model output, including a range of acceptable values for each parameter, and reveal correlations between them. The breadth of observables and the number of parameters considered here go beyond previous studies in this field. The statistical tools, which are based upon Gaussian process emulators, are tested in detail and should be extendable to larger data sets and a higher number of parameters.
A theoretical framework for the associations between identity and psychopathology.
Klimstra, Theo A; Denissen, Jaap J A
2017-11-01
Identity research largely emerged from clinical observations. Decades of empirical work advanced the field in refining existing approaches and adding new approaches. Furthermore, the existence of linkages of identity with psychopathology is now well established. Unfortunately, both the directionality of effects between identity aspects and psychopathology symptoms and the mechanisms underlying these associations are unclear. In the present paper, we present a new framework to inspire hypothesis-driven empirical research to overcome this limitation. The framework has a basic resemblance to theoretical models for the study of personality and psychopathology, so we provide examples of how these might apply to the study of identity. Next, we explain that unique features of identity, mostly related to the content of one's identity, may come into play in individuals suffering from psychopathology. These include the pros and cons of identifying with one's diagnostic label. Finally, inspired by Hermans' dialogical self theory and principles derived from Piaget's, Swann's and Kelly's work, we delineate a framework with identity at the core of an individual multidimensional space. In this space, psychopathology symptoms have a known distance (representing relevance) to one's identity, and individual multidimensional spaces are connected to those of other individuals in one's social network. We discuss methodological (quantitative and qualitative, idiographic and nomothetic) and statistical procedures (multilevel models and network models) to test the framework. The resulting evidence can boost the field of identity research by demonstrating its high practical relevance for the emergence and conservation of psychopathology. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Phase-space reaction network on a multisaddle energy landscape: HCN isomerization.
Li, Chun-Biu; Matsunaga, Yasuhiro; Toda, Mikito; Komatsuzaki, Tamiki
2005-11-08
By using the HCN/CNH isomerization reaction as an illustrative vehicle of chemical reactions on multisaddle energy landscapes, we give explicit visualizations of molecular motions associated with a straight-through reaction tube in the phase space inside which all reactive trajectories pass from one basin to another, while eliminating recrossing trajectories in the configuration space. This visualization provides us with a chemical intuition of how chemical species "walk along" the reaction-rate slope in the multidimensional phase space compared with the intrinsic reaction path in the configuration space. The distinct nonergodic features in the two different HCN and CNH wells can be easily demonstrated by a Poincaré surface of section in those potential minima, which predicts a priori the pattern of trajectories residing in the potential well. We elucidate the global phase-space structure which gives rise to the non-Markovian dynamics, or the dynamical correlation, of sequential multisaddle chemical reactions. The phase-space structure relevant to the controllability of the product state in chemical reactions is also discussed.
Interpersonal distance modeling during fighting activities.
Dietrich, Gilles; Bredin, Jonathan; Kerlirzin, Yves
2010-10-01
The aim of this article is to elaborate a general framework for modeling dual opposition activities or, more generally, dual interaction. The main hypothesis is that opposition behavior can be measured directly from a global variable, and that the relative distance between the two subjects can be this parameter. Moreover, this parameter should be considered a multidimensional parameter depending not only on the dynamics of the subjects but also on their "internal" parameters, such as sociological and/or emotional states. A standard and simple mechanical formalization is used to model this multifactorial distance. To illustrate such a general modeling methodology, the model was compared with actual data from an opposition activity, Japanese fencing (kendo). This model captures not only coupled coordination but, more generally, interaction in two-subject activities.
Fast multi-dimensional NMR by minimal sampling
NASA Astrophysics Data System (ADS)
Kupče, Ēriks; Freeman, Ray
2008-03-01
A new scheme is proposed for very fast acquisition of three-dimensional NMR spectra based on minimal sampling, instead of the customary step-wise exploration of all of evolution space. The method relies on prior experiments to determine accurate values for the evolving frequencies and intensities from the two-dimensional 'first planes' recorded by setting t1 = 0 or t2 = 0. With this prior knowledge, the entire three-dimensional spectrum can be reconstructed by an additional measurement of the response at a single location (t1∗,t2∗) where t1∗ and t2∗ are fixed values of the evolution times. A key feature is the ability to resolve problems of overlap in the acquisition dimension. Applied to a small protein, agitoxin, the three-dimensional HNCO spectrum is obtained 35 times faster than systematic Cartesian sampling of the evolution domain. The extension to multi-dimensional spectroscopy is outlined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedetti, R. L.; Lords, L. V.; Kiser, D. M.
1978-02-01
The SCORE-EVET code was developed to study multidimensional transient fluid flow in nuclear reactor fuel rod arrays. The conservation equations used were derived by volume averaging the transient compressible three-dimensional local continuum equations in Cartesian coordinates. No assumptions associated with subchannel flow have been incorporated into the derivation of the conservation equations. In addition to the three-dimensional fluid flow equations, the SCORE-EVET code contains: (a) a one-dimensional steady state solution scheme to initialize the flow field, (b) steady state and transient fuel rod conduction models, and (c) comprehensive correlation packages to describe fluid-to-fuel rod interfacial energy and momentum exchange. Velocity and pressure boundary conditions can be specified as a function of time and space to model reactor transient conditions such as a hypothesized loss-of-coolant accident (LOCA) or flow blockage.
New generic indexing technology
NASA Technical Reports Server (NTRS)
Freeston, Michael
1996-01-01
There has been no fundamental change in the dynamic indexing methods supporting database systems since the invention of the B-tree twenty-five years ago. And yet the whole classical approach to dynamic database indexing has long since become inappropriate and increasingly inadequate. We are moving rapidly from the conventional one-dimensional world of fixed-structure text and numbers to a multi-dimensional world of variable structures, objects and images, in space and time. But, even before leaving the confines of conventional database indexing, the situation is highly unsatisfactory. In fact, our research has led us to question the basic assumptions of conventional database indexing. We have spent the past ten years studying the properties of multi-dimensional indexing methods, and in this paper we draw the strands of a number of developments together - some quite old, some very new, to show how we now have the basis for a new generic indexing technology for the next generation of database systems.
Zhang, Congqiang; Seow, Vui Yin; Chen, Xixian; Too, Heng-Phon
2018-05-11
Optimization of metabolic pathways consisting of a large number of genes is challenging. Multivariate modular methods (MMMs) are currently available solutions, in which reduced regulatory complexity is achieved by grouping multiple genes into modules. However, these methods work well for balancing the inter-module but not the intra-module activities. In addition, application of MMMs to the 15-step heterologous route of astaxanthin biosynthesis has met with limited success. Here, we expand the solution space of MMMs and develop a multidimensional heuristic process (MHP). MHP can simultaneously balance different modules by varying promoter strength and coordinate intra-module activities by using ribosome binding sites (RBSs) and enzyme variants. Consequently, MHP increases enantiopure 3S,3'S-astaxanthin production to 184 mg l⁻¹ day⁻¹ or 320 mg l⁻¹. Similarly, MHP improves the yields of nerolidol and linalool. MHP may be useful for optimizing other complex biochemical pathways.
Nie, Feilin; Kunciw, Dominique L.; Wilcke, David; Stokes, Jamie E.; Galloway, Warren R. J. D.; Bartlett, Sean; Sore, Hannah F.
2016-01-01
Synthetic macrocycles are an attractive area in drug discovery. However, their use has been hindered by a lack of versatile platforms for the generation of structurally (and thus shape) diverse macrocycle libraries. Herein, we describe a new concept in library synthesis, termed multidimensional diversity‐oriented synthesis, and its application towards macrocycles. This enabled the step‐efficient generation of a library of 45 novel, structurally diverse, and highly‐functionalized macrocycles based around a broad range of scaffolds and incorporating a wide variety of biologically relevant structural motifs. The synthesis strategy exploited the diverse reactivity of aza‐ylides and imines, and featured eight different macrocyclization methods, two of which were novel. Computational analyses reveal a broad coverage of molecular shape space by the library and provide insight into how the various diversity‐generating steps of the synthesis strategy impact on molecular shape. PMID:27484830
Multidimensional Processing and Visual Rendering of Complex 3D Biomedical Images
NASA Technical Reports Server (NTRS)
Sams, Clarence F.
2016-01-01
The proposed technology uses advanced image analysis techniques to maximize the resolution and utility of medical imaging methods being used during spaceflight. We utilize COTS technology for medical imaging, but our applications require higher resolution assessment of the medical images than is routinely applied with nominal system software. By leveraging advanced data reduction and multidimensional imaging techniques utilized in analysis of Planetary Sciences and Cell Biology imaging, it is possible to significantly increase the information extracted from the onboard biomedical imaging systems. Year 1 focused on application of these techniques to the ocular images collected on ground test subjects and ISS crewmembers. Focus was on the choroidal vasculature and the structure of the optic disc. Methods allowed for increased resolution and quantitation of structural changes enabling detailed assessment of progression over time. These techniques enhance the monitoring and evaluation of crew vision issues during space flight.
NASA Technical Reports Server (NTRS)
Armstrong, Jeffrey B.; Simon, Donald L.
2012-01-01
Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
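The scheduling-plus-filtering structure described above can be sketched as follows. The single scheduling variable, the table layout, and the deviation-form update are illustrative assumptions; the actual model uses multidimensional interpolation over the engine operating point and a reduced-order state.

    import numpy as np

    class PiecewiseLinearKF:
        # Pre-computed trim vectors, state-space matrices and steady-state
        # Kalman gains are stored per operating point (first array axis)
        # and interpolated at run time from a single scheduling variable.
        def __init__(self, op_pts, trim_x, trim_u, trim_y, A, B, C, K):
            self.op = np.asarray(op_pts)          # sorted 1-D schedule
            self.tables = [np.asarray(t) for t in (trim_x, trim_u, trim_y, A, B, C, K)]

        def _interp(self, p):
            # shared 1-D interpolation of every scheduled table at point p
            i = int(np.clip(np.searchsorted(self.op, p) - 1, 0, len(self.op) - 2))
            w = (p - self.op[i]) / (self.op[i + 1] - self.op[i])
            return [(1 - w) * t[i] + w * t[i + 1] for t in self.tables]

        def step(self, x_hat, u, y, p):
            tx, tu, ty, A, B, C, K = self._interp(p)
            dx, du = x_hat - tx, u - tu
            dx_pred = A @ dx + B @ du             # predict in deviation form
            y_pred = ty + C @ dx_pred
            dx_corr = dx_pred + K @ (y - y_pred)  # correct with scheduled gain
            return tx + dx_corr, y_pred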
Multidimensional, mapping-based complex wavelet transforms.
Fernandes, Felix C A; van Spaendonck, Rutger L C; Burrus, C Sidney
2005-01-01
Although the discrete wavelet transform (DWT) is a powerful tool for signal and image processing, it has three serious disadvantages: shift sensitivity, poor directionality, and lack of phase information. To overcome these disadvantages, we introduce multidimensional, mapping-based, complex wavelet transforms that consist of a mapping onto a complex function space followed by a DWT of the complex mapping. Unlike other popular transforms that also mitigate DWT shortcomings, the decoupled implementation of our transforms has two important advantages. First, the controllable redundancy of the mapping stage offers a balance between degree of shift sensitivity and transform redundancy. This allows us to create a directional, nonredundant, complex wavelet transform with potential benefits for image coding systems. To the best of our knowledge, no other complex wavelet transform is simultaneously directional and nonredundant. The second advantage of our approach is the flexibility to use any DWT in the transform implementation. As an example, we exploit this flexibility to create the complex double-density DWT: a shift-insensitive, directional, complex wavelet transform with a low redundancy of (3^M - 1)/(2^M - 1) in M dimensions. No other transform achieves all these properties at a lower redundancy, to the best of our knowledge. By exploiting the advantages of our multidimensional, mapping-based complex wavelet transforms in seismic signal-processing applications, we have demonstrated state-of-the-art results.
Modular Spectral Inference Framework Applied to Young Stars and Brown Dwarfs
NASA Technical Reports Server (NTRS)
Gully-Santiago, Michael A.; Marley, Mark S.
2017-01-01
In practice, synthetic spectral models are imperfect, causing inaccurate estimates of stellar parameters. Using forward modeling and statistical inference, we derive accurate stellar parameters for a given observed spectrum by emulating a grid of precomputed synthetic spectra, allowing uncertainties to be tracked. The approach is applied to brown dwarfs using synthetic spectral model grids (Marley et al. 1996 and 2014) that span a massive multi-dimensional parameter space, and to IGRINS spectra, with the goal of improving atmospheric models ahead of JWST. When applied to young stars (~10 Myr) with large starspots, the spot properties can be measured spectroscopically, especially in the near-IR with IGRINS.
Adaptive Detection and Parameter Estimation for Multidimensional Signal Models
1989-04-19
Compressed Semi-Discrete Central-Upwind Schemes for Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Kurganov, Alexander; Levy, Doron; Petrova, Guergana
2003-01-01
We introduce a new family of Godunov-type semi-discrete central schemes for multidimensional Hamilton-Jacobi equations. These schemes are a less dissipative generalization of the central-upwind schemes that have recently been proposed in a series of works. We provide the details of the new family of methods in one, two, and three space dimensions, and then verify their expected low-dissipative property in a variety of examples.
Multivariate analysis of light scattering spectra of liquid dairy products
NASA Astrophysics Data System (ADS)
Khodasevich, M. A.
2010-05-01
Visible light scattering spectra from the surface layer of samples of commercial liquid dairy products are recorded with a colorimeter. The principal component method is used to analyze these spectra. Vectors representing the samples of dairy products in a multidimensional space of spectral counts are projected onto a three-dimensional subspace of principal components. The magnitudes of these projections are found to depend on the type of dairy product.
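A compact sketch of the projection described above, using principal component analysis from scikit-learn; the array layout (one spectrum per row) is an assumption for illustration.

    import numpy as np
    from sklearn.decomposition import PCA

    def project_spectra(spectra):
        # spectra: (n_samples x n_wavelengths) array, one scattering
        # spectrum per dairy-product sample (hypothetical layout).
        pca = PCA(n_components=3)
        scores = pca.fit_transform(spectra)         # 3-D coordinates per sample
        explained = pca.explained_variance_ratio_   # variance retained per PC
        return scores, explained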
A software package for the data-independent management of multidimensional data
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Gough, Michel L.
1987-01-01
The Common Data Format (CDF), a structure which provides true data independence for applications software and has been developed at the National Space Science Data Center, is discussed. The background to the CDF is reviewed, and the CDF is described. The conceptual organization of the CDF is discussed, and a sample CDF structure is shown and described. The implementation of CDF, its status, and its applications are examined.
Implicit face prototype learning from geometric information.
Or, Charles C-F; Wilson, Hugh R
2013-04-19
There is evidence that humans implicitly learn an average or prototype of previously studied faces, as the unseen face prototype is falsely recognized as having been learned (Solso & McCarthy, 1981). Here we investigated the extent and nature of face prototype formation where observers' memory was tested after they studied synthetic faces defined purely in geometric terms in a multidimensional face space. We found a strong prototype effect: The basic results showed that the unseen prototype averaged from the studied faces was falsely identified as learned at a rate of 86.3%, whereas individual studied faces were identified correctly 66.3% of the time and the distractors were incorrectly identified as having been learned only 32.4% of the time. This prototype learning lasted at least 1 week. Face prototype learning occurred even when the studied faces were further from the unseen prototype than the median variation in the population. Prototype memory formation was evident in addition to memory formation of studied face exemplars as demonstrated in our models. Additional studies showed that the prototype effect can be generalized across viewpoints, and head shape and internal features separately contribute to prototype formation. Thus, implicit face prototype extraction in a multidimensional space is a very general aspect of geometric face learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
Convergence in the Bilingual Lexicon: A Pre-registered Replication of Previous Studies.
White, Anne; Malt, Barbara C; Storms, Gert
2016-01-01
Naming patterns of bilinguals have been found to converge and form a new intermediate language system from elements of both of the bilinguals' languages. This converged naming pattern differs from the monolingual naming patterns of both of a bilingual's languages. We conducted a pre-registered replication study of experiments addressing the question of whether there is convergence between a bilingual's two lexicons. The replication used an enlarged set of stimuli of common household containers, providing generalizability and more reliable representations of the semantic domain. Both an analysis at the group level and one at the individual level of the correlations between naming patterns reject the two-pattern hypothesis, which posits that bilinguals use two monolingual-like naming patterns, one for each of their two languages. However, the results of the original study and the replication are consistent with the one-pattern hypothesis, which posits that bilinguals converge the naming patterns of their two languages and form a compromise. Since this convergence is only partial, the naming pattern in bilinguals corresponds to a moderate version of the one-pattern hypothesis. These findings are further confirmed by a representation of the semantic domain in a multidimensional space and the finding of shorter distances between bilingual category centers than between monolingual category centers in this multidimensional space, both in the original and in the replication study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allu, Srikanth; Velamur Asokan, Badri; Shelton, William A
A generalized three-dimensional computational model, based on a unified formulation of the electrode-electrolyte-electrode system of an electric double-layer supercapacitor, has been developed. The model accounts for charge transport across the solid-liquid system. The formulation, based on a volume-averaging process, is a widely used concept for multiphase flow equations ([28] [36]) and is analogous to the porous media theory typically employed for electrochemical systems [22] [39] [12]. This formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for the explicit interfacial boundary conditions previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing any upscaled parameters from lower length-scale simulations and experiments. Because of the irregular geometric configurations, including the porous electrode, the charge transport and subsequent performance characteristics of the supercapacitor can be easily captured in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and into where multidimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrate how this new formulation can be applied to non-planar supercapacitors.
NASA Astrophysics Data System (ADS)
Ratib, Osman; Rosset, Antoine; Dahlbom, Magnus; Czernin, Johannes
2005-04-01
Display and interpretation of multidimensional data obtained from the combination of 3D data acquired from different modalities (such as PET-CT) require complex software tools allowing the user to navigate and modify the different image parameters. With faster scanners it is now possible to acquire dynamic images of a beating heart or the transit of a contrast agent, adding a fifth dimension to the data. We developed DICOM-compliant software for real-time navigation in very large sets of five-dimensional data based on an intuitive multidimensional jog-wheel widely used in the video-editing industry. The software, provided under open source licensing, allows interactive, single-handed navigation through 3D images while adjusting blending of image modalities, image contrast and intensity, and the rate of cine display of dynamic images. In this study we focused our effort on the user interface and means for interactively navigating these large data sets while easily and rapidly changing multiple parameters such as image position, contrast, intensity, blending of colors, magnification, etc. Conventional mouse-driven user interfaces requiring the user to manipulate cursors and sliders on the screen are too cumbersome and slow. We evaluated several hardware devices and identified a category of multipurpose jog-wheel device used in the video-editing industry that is particularly suitable for rapidly navigating in five dimensions while adjusting several display parameters interactively. The application of this tool will be demonstrated in cardiac PET-CT imaging and functional cardiac MRI studies.
Pranal, Thibaut; Pereira, Bruno; Berthelin, Pauline; Roszyk, Laurence; Godet, Thomas; Chabanne, Russell; Eisenmann, Nathanael; Lautrette, Alexandre; Belville, Corinne; Blondonnet, Raiko; Cayot, Sophie; Gillart, Thierry; Skrzypczak, Yvan; Souweine, Bertrand; Bouvier, Damien; Blanchon, Loic; Sapin, Vincent; Constantin, Jean-Michel; Jabaudon, Matthieu
2018-01-01
Although soluble forms of the receptor for advanced glycation end products (RAGE) have been recently proposed as biomarkers in multiple acute or chronic diseases, few studies evaluated the influence of usual clinical and biological parameters, or of patient characteristics and comorbidities, on circulating levels of soluble RAGE in the intensive care unit (ICU) setting. To determine, among clinical and biological parameters that are usually recorded upon ICU admission, which variables, if any, could be associated with plasma levels of soluble RAGE. Data for this ancillary study were prospectively obtained from adult patients with at least one ARDS risk factor upon ICU admission enrolled in a large multicenter observational study. At ICU admission, plasma levels of total soluble RAGE (sRAGE) and endogenous secretory (es)RAGE were measured by duplicate ELISA and baseline patient characteristics, comorbidities, and usual clinical and biological indices were recorded. After univariate analyses, significant variables were used in multivariate, multidimensional analyses. 294 patients were included in this ancillary study, among whom 62% were admitted for medical reasons, including septic shock (11%), coma (11%), and pneumonia (6%). Although some variables were associated with plasma levels of RAGE soluble forms in univariate analysis, multidimensional analyses showed no significant association between admission parameters and baseline plasma sRAGE or esRAGE. We found no obvious association between circulating levels of soluble RAGE and clinical and biological indices that are usually recorded upon ICU admission. This trial is registered with NCT02070536.
Calibration of the Test of Relational Reasoning.
Dumas, Denis; Alexander, Patricia A
2016-10-01
Relational reasoning, or the ability to discern meaningful patterns within a stream of information, is a critical cognitive ability associated with academic and professional success. Importantly, relational reasoning has been described as taking multiple forms, depending on the type of higher order relations being drawn between and among concepts. However, the reliable and valid measurement of such a multidimensional construct of relational reasoning has been elusive. The Test of Relational Reasoning (TORR) was designed to tap 4 forms of relational reasoning (i.e., analogy, anomaly, antinomy, and antithesis). In this investigation, the TORR was calibrated and scored using multidimensional item response theory in a large, representative undergraduate sample. The bifactor model was identified as the best-fitting model, and used to estimate item parameters and construct reliability. To improve the usefulness of the TORR to educators, scaled scores were also calculated and presented. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi
We investigate cross-correlations between typical Japanese stocks collected through the Yahoo! Japan website (http://finance.yahoo.co.jp/). By making use of multi-dimensional scaling (MDS) for the cross-correlation matrices, we draw two-dimensional scatter plots in which each point corresponds to one stock. To cluster these data plots, we fit the data set with a mixture of Gaussian densities. By minimizing the so-called Akaike Information Criterion (AIC) with respect to the parameters of the mixture, we attempt to specify the best possible mixture of Gaussians. It might naturally be assumed that all the two-dimensional data points of stocks shrink into a single small region when some economic crisis takes place. The justification of this assumption is numerically checked for the empirical Japanese stock data, for instance, around 11 March 2011.
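A short sketch of the embedding-plus-clustering pipeline described above, using scikit-learn. Treating 1 - correlation as the dissimilarity and scanning mixture sizes by AIC are illustrative assumptions about details the abstract leaves open.

    import numpy as np
    from sklearn.manifold import MDS
    from sklearn.mixture import GaussianMixture

    def cluster_stocks(corr, max_components=8, seed=0):
        # Embed stocks in 2-D from the cross-correlation matrix and pick
        # the mixture size with the smallest AIC.
        dissimilarity = 1.0 - np.asarray(corr)
        xy = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=seed).fit_transform(dissimilarity)
        fits = [GaussianMixture(n_components=k, random_state=seed).fit(xy)
                for k in range(1, max_components + 1)]
        best = min(fits, key=lambda gm: gm.aic(xy))
        return xy, best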
Airborne multidimensional integrated remote sensing system
NASA Astrophysics Data System (ADS)
Xu, Weiming; Wang, Jianyu; Shu, Rong; He, Zhiping; Ma, Yanhua
2006-12-01
In this paper, we present an airborne multidimensional integrated remote sensing system that consists of an imaging spectrometer, a three-line scanner, a laser ranger, a position & orientation subsystem and a PAV30 stabilizer. The imaging spectrometer is composed of two sets of identical push-broom hyperspectral imagers, each with a field of view of 22°, which together provide a field of view of 42°. The spectral range of the imaging spectrometer is from 420 nm to 900 nm, and its spectral resolution is 5 nm. The three-line scanner is composed of two panchromatic CCDs and an RGB CCD with a 20° stereo angle and 10 cm GSD (Ground Sample Distance) at 1000 m flying height. The laser ranger provides height data for three points every four scanning lines of the spectral imager, and those three points are calibrated to match the corresponding pixels of the spectral imager. The position & orientation subsystem is a POS/AV 510, the airborne exterior-orientation measurement product of the Canadian Applanix Corporation; its post-processing attitude accuracy is 0.005° when combined with base-station data. The airborne multidimensional integrated remote sensing system was implemented successfully, performed its first flight experiment in April 2005, and obtained satisfying data.
Vigelius, Matthias; Meyer, Bernd
2012-01-01
For many biological applications, a macroscopic (deterministic) treatment of reaction-drift-diffusion systems is insufficient. Instead, one has to properly handle the stochastic nature of the problem and generate true sample paths of the underlying probability distribution. Unfortunately, stochastic algorithms are computationally expensive and, in most cases, the large number of participating particles renders the relevant parameter regimes inaccessible. In an attempt to address this problem we present a genuinely stochastic, multi-dimensional algorithm that solves the inhomogeneous, non-linear drift-diffusion problem on a mesoscopic level. Our method improves on existing implementations in being multi-dimensional and in handling inhomogeneous drift and diffusion. The algorithm is well suited for an implementation on data-parallel hardware architectures such as general-purpose graphics processing units (GPUs). We integrate the method into an operator-splitting approach that decouples chemical reactions from the spatial evolution. We demonstrate the validity and applicability of our algorithm with a comprehensive suite of standard test problems that also serve to quantify the numerical accuracy of the method. We provide a freely available, fully functional GPU implementation. Integration into Inchman, a user-friendly web service that allows researchers to perform parallel simulations of reaction-drift-diffusion systems on GPU clusters, is underway. PMID:22506001
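The operator-splitting idea (decoupling chemical reactions from spatial evolution) can be illustrated with a deterministic, one-dimensional stand-in; the stochastic, multi-dimensional, GPU-parallel method described above is considerably more involved, and the grid, coefficients, and logistic reaction below are arbitrary assumptions.

    import numpy as np

    def split_step(u, dx, dt, D, v, react):
        # One operator-splitting step on a periodic 1-D grid: an explicit
        # drift-diffusion update followed by a local reaction update.
        lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
        grad = (u - np.roll(u, 1)) / dx          # upwind difference for v > 0
        u = u + dt * (D * lap - v * grad)        # transport sub-step
        return u + dt * react(u)                 # reaction sub-step

    # usage: a Gaussian pulse with logistic production
    u = np.exp(-np.linspace(-5.0, 5.0, 200) ** 2)
    for _ in range(100):
        u = split_step(u, dx=0.05, dt=1e-4, D=1.0, v=0.5,
                       react=lambda c: c * (1.0 - c))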
Investigating Pharmacological Similarity by Charting Chemical Space.
Buonfiglio, Rosa; Engkvist, Ola; Várkonyi, Péter; Henz, Astrid; Vikeved, Elisabet; Backlund, Anders; Kogej, Thierry
2015-11-23
In this study, biologically relevant areas of the chemical space were analyzed using ChemGPS-NP. This application enables comparing groups of ligands within a multidimensional space based on principal components derived from physicochemical descriptors. Also, 3D visualization of the ChemGPS-NP global map can be used to conveniently evaluate bioactive compound similarity and visually distinguish between different types or groups of compounds. To further establish ChemGPS-NP as a method to accurately represent the chemical space, a comparison with structure-based fingerprints has been performed. Interesting complementarities between the two descriptions of molecules were observed. It has been shown that the accuracy of describing molecules with physicochemical descriptors, as in ChemGPS-NP, is similar to the accuracy of structural fingerprints in retrieving bioactive molecules. Lastly, the pharmacological similarity of structurally diverse compounds has been investigated in ChemGPS-NP space. These results further strengthen the case for using ChemGPS-NP as a tool to explore and visualize chemical space.
Face-infringement space: the frame of reference of the ventral intraparietal area.
McCollum, Gin; Klam, François; Graf, Werner
2012-07-01
Experimental studies have shown that responses of ventral intraparietal area (VIP) neurons specialize in head movements and the environment near the head. VIP neurons respond to visual, auditory, and tactile stimuli, smooth pursuit eye movements, and passive and active movements of the head. This study demonstrates mathematical structure on a higher organizational level created within VIP by the integration of a complete set of variables covering face-infringement. Rather than positing dynamics in an a priori defined coordinate system such as those of physical space, we assemble neuronal receptive fields to find out what space of variables VIP neurons together cover. Section 1 presents a view of neurons as multidimensional mathematical objects. Each VIP neuron occupies or is responsive to a region in a sensorimotor phase space, thus unifying variables relevant to the disparate sensory modalities and movements. Convergence on one neuron joins variables functionally, as space and time are joined in relativistic physics to form a unified spacetime. The space of position and motion together forms a neuronal phase space, bridging neurophysiology and the physics of face-infringement. After a brief review of the experimental literature, the neuronal phase space natural to VIP is sequentially characterized, based on experimental data. Responses of neurons indicate variables that may serve as axes of neural reference frames, and neuronal responses have been so used in this study. The space of sensory and movement variables covered by VIP receptive fields joins visual and auditory space to body-bound sensory modalities: somatosensation and the inertial senses. This joining of allocentric and egocentric modalities is in keeping with the known relationship of the parietal lobe to the sense of self in space and to hemineglect, in both humans and monkeys. Following this inductive step, variables are formalized in terms of the mathematics of graph theory to deduce which combinations are complete as a multidimensional neural structure that provides the organism with a complete set of options regarding objects impacting the face, such as acceptance, pursuit, and avoidance. We consider four basic variable types: position and motion of the face and of an external object. Formalizing the four types of variables allows us to generalize to any sensory system and to determine the necessary and sufficient conditions for a neural center (for example, a cortical region) to provide a face-infringement space. We demonstrate that VIP includes at least one such face-infringement space.
Das, Swagatam; Biswas, Subhodip; Panigrahi, Bijaya K; Kundu, Souvik; Basu, Debabrota
2014-10-01
This paper presents a novel search metaheuristic inspired from the physical interpretation of the optic flow of information in honeybees about the spatial surroundings that help them orient themselves and navigate through search space while foraging. The interpreted behavior combined with the minimal foraging is simulated by the artificial bee colony algorithm to develop a robust search technique that exhibits elevated performance in multidimensional objective space. Through detailed experimental study and rigorous analysis, we highlight the statistical superiority enjoyed by our algorithm over a wide variety of functions as compared to some highly competitive state-of-the-art methods.
Monks, K; Molnár, I; Rieger, H-J; Bogáti, B; Szabó, E
2012-04-06
Robust HPLC separations lead to fewer analysis failures and better method transfer as well as providing an assurance of quality. This work presents the systematic development of an optimal, robust, fast UHPLC method for the simultaneous assay of two APIs of an eye drop sample and their impurities, in accordance with Quality by Design principles. Chromatography software is employed to effectively generate design spaces (Method Operable Design Regions), which are subsequently employed to determine the final method conditions and to evaluate robustness prior to validation. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Stökl, A.
2008-11-01
Context: In spite of all the advances in multi-dimensional hydrodynamics, investigations of stellar evolution and stellar pulsations still depend on one-dimensional computations. This paper devises an alternative to the mixing-length theory or turbulence models usually adopted in modelling convective transport in such studies. Aims: The present work attempts to develop a time-dependent description of convection, which reflects the essential physics of convection and that is only moderately dependent on numerical parameters and far less time consuming than existing multi-dimensional hydrodynamics computations. Methods: Assuming that the most extensive convective patterns generate the majority of convective transport, the convective velocity field is described using two parallel, radial columns to represent up- and downstream flows. Horizontal exchange, in the form of fluid flow and radiation, over their connecting interface couples the two columns and allows a simple circulating motion. The main parameters of this convective description have straightforward geometrical meanings, namely the diameter of the columns (corresponding to the size of the convective cells) and the ratio of the cross-section between up- and downdrafts. For this geometrical setup, the time-dependent solution of the equations of radiation hydrodynamics is computed from an implicit scheme that has the advantage of being unaffected by the Courant-Friedrichs-Lewy time-step limit. This implementation is part of the TAPIR-Code (short for The adaptive, implicit RHD-Code). Results: To demonstrate the approach, results for convection zones in Cepheids are presented. The convective energy transport and convective velocities agree with expectations for Cepheids and the scheme reproduces both the kinetic energy flux and convective overshoot. A study of the parameter influence shows that the type of solution derived for these stars is in fact fairly robust with respect to the constitutive numerical parameters.
A Framework for Robust Multivariable Optimization of Integrated Circuits in Space Applications
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application Specific Integrated Circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem which must be solved early in the development cycle of a system due to the time required for testing and qualification severely limiting opportunities to modify and iterate. Manual design techniques which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and to analyze the results in a way which facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as framework of software modules, templates and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation. Templates provide a starting point for both while toolbox functions minimize the code required. Once a test bench has been coded to optimize a particular circuit, it is also used to verify the final design. The combination of test bench and cost function can then serve as a template for similar circuits or be re-used to migrate the design to different processes by re-running it with the new process specific device models. The system has been used in the design of time to digital converters for laser ranging and time-of-flight mass spectrometry to optimize analog, mixed signal and digital circuits such as charge sensitive amplifiers, comparators, delay elements, radiation tolerant dual interlocked (DICE) flip-flops and two of three voter gates.
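The automated parameter-variation and cost-function workflow described above can be illustrated with a small stand-in sketch. The Python below is not the MSAG framework; the parameter names, corner definitions, and the run_testbench and cost functions are hypothetical placeholders showing how a worst-case-over-corners sweep could be organized. In a real flow, run_testbench would invoke the circuit simulator and post-process waveforms into metrics.

```python
import itertools

# Hypothetical design parameters and corner cases; names and values are illustrative only.
design_params = {
    "bias_current_uA": [5, 10, 20],
    "device_width_um": [1.0, 2.0, 4.0],
}
corners = {
    "process": ["slow", "typical", "fast"],
    "temp_C": [-55, 27, 125],
}

def run_testbench(params, corner):
    """Stand-in for a circuit simulation; returns performance metrics."""
    gain = params["device_width_um"] * params["bias_current_uA"]
    gain *= {"slow": 0.8, "typical": 1.0, "fast": 1.2}[corner["process"]]
    power = params["bias_current_uA"] * 1.8e-6          # 1.8 V supply assumed
    return {"gain": gain, "power_W": power}

def cost(metrics, gain_goal=20.0, power_weight=1e5):
    """Penalize missed goals, then minimize weighted power."""
    penalty = max(0.0, gain_goal - metrics["gain"]) * 100.0
    return penalty + power_weight * metrics["power_W"]

results = []
for values in itertools.product(*design_params.values()):
    params = dict(zip(design_params, values))
    # The worst case over all corners decides the score of a design point.
    worst = max(
        cost(run_testbench(params, dict(zip(corners, c))))
        for c in itertools.product(*corners.values())
    )
    results.append((worst, params))

best_cost, best_params = min(results, key=lambda r: r[0])
print(best_params, best_cost)
```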
NASA Astrophysics Data System (ADS)
Fidelman, Uri
The hemispheric paradigm verifies Kant's suggestion that time and space are our subjective modes of perceiving experience. Time and space are two modes of organizing the sensory input by the left- and right-hemispheric neural mechanisms, respectively. The neural structures of the left- and right-hemispheric mechanisms force our consciousness to perceive time as one-dimensional and propagating from the past towards the future, and space as a simultaneously perceived multidimensional structure. The introduction of temporal propagation from the future towards the past by Feynman and other physicists caused the transfer of the concept of time from the left hemisphere (which cannot perceive this change of the temporal direction) to the right one. This transfer requires and allows for the introduction of additional temporal axes in order to solve paradoxes in physics.
Spatially organized «vertical city» as a synthesis of tall buildings and airships
NASA Astrophysics Data System (ADS)
Gagulina, Olga; Matovnikov, Sergei
2018-03-01
The paper explores the compact city concept based on the «spatial» urban development principles and describes the prerequisites and possible methods to move from «horizontal» planning to «vertical» urban environments. It highlights the close connection between urban space, high-rise city landscape and conveyance options and sets out the ideas for upgrading the existing architectural and urban planning principles. It also conceptualizes the use of airships to create additional spatial connections between urban structure elements and high-rise buildings. Functional changes are considered in creating both urban environment and internal space of tall buildings, and the environmental aspects of the new spatial model are brought to light. The paper delineates the prospects for making a truly «spatial» multidimensional city space.
An Application of a Multidimensional Extension of the Two-Parameter Logistic Latent Trait Model.
1983-08-01
theory, models, technical issues, and applications. Review of Educational Research, 1978, 48, 467-510. Marco, G. L. Item characteristic curve...solutions to three intractable testing problems. Journal of Educational Measurement, 1977, 14, 139-160. McKinley, R. L. and Reckase, M. D. A successful...application of latent trait theory to tailored achievement testing (Research Report 80-1). Columbia: University of Missouri, Department of Educational
Towards a rational theory for CFD global stability
NASA Technical Reports Server (NTRS)
Baker, A. J.; Iannelli, G. S.
1989-01-01
The fundamental notion of the consistent stability of semidiscrete analogues of evolution PDEs is explored. Lyapunov's direct method is used to develop CFD semidiscrete algorithms which yield the TVD constraint as a special case. A general formula for supplying dissipation parameters for arbitrary multidimensional conservation law systems is proposed. The reliability of the method is demonstrated by the results of two numerical tests for representative Euler shocked flows.
Attosecond twin-pulse control by generalized kinetic heterodyne mixing.
Raith, Philipp; Ott, Christian; Pfeifer, Thomas
2011-01-15
Attosecond double-pulse (twin-pulse) production in high-order harmonic generation is manipulated by a combination of two-color and carrier-envelope phase-control methods. As we show in numerical simulations, both relative amplitude and phase of the double pulse can be independently set by making use of multidimensional parameter control. Two technical implementation routes are discussed: kinetic heterodyning using second-harmonic generation and split-spectrum phase-step control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shabbir, A., E-mail: aqsa.shabbir@ugent.be; Noterdaeme, J. M.; Max-Planck-Institut für Plasmaphysik, Garching D-85748
2014-11-15
Information visualization aimed at facilitating human perception is an important tool for the interpretation of experiments on the basis of complex multidimensional data characterizing the operational space of fusion devices. This work describes a method for visualizing the operational space on a two-dimensional map and applies it to the discrimination of type I and type III edge-localized modes (ELMs) from a series of carbon-wall ELMy discharges at JET. The approach accounts for stochastic uncertainties that play an important role in fusion data sets, by modeling measurements with probability distributions in a metric space. The method is aimed at contributing to physical understanding of ELMs as well as their control. Furthermore, it is a general method that can be applied to the modeling of various other plasma phenomena as well.
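A minimal sketch of the core idea, assuming each measurement is summarized as a one-dimensional Gaussian: pairwise distances between the measurement distributions (here a closed-form Hellinger distance) feed a metric multidimensional scaling step to produce the two-dimensional map. The discharge data, labels, and choice of distance are illustrative, not those used at JET.

```python
import numpy as np
from sklearn.manifold import MDS

def hellinger_gauss(m1, s1, m2, s2):
    """Closed-form Hellinger distance between two 1-D Gaussians."""
    h2 = 1.0 - np.sqrt(2.0 * s1 * s2 / (s1**2 + s2**2)) * \
         np.exp(-((m1 - m2) ** 2) / (4.0 * (s1**2 + s2**2)))
    return np.sqrt(max(h2, 0.0))

# Hypothetical measurements: mean and uncertainty per discharge (one signal shown).
rng = np.random.default_rng(1)
means = np.concatenate([rng.normal(0.0, 0.3, 30), rng.normal(2.0, 0.3, 30)])
sigmas = rng.uniform(0.2, 0.5, means.size)
labels = np.array([0] * 30 + [1] * 30)   # e.g. type III vs type I ELMs

# Pairwise distance matrix between the measurement distributions.
n = means.size
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = hellinger_gauss(means[i], sigmas[i], means[j], sigmas[j])

# Metric MDS projects the operational space onto a 2-D map for visualization.
xy = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
print(xy.shape, labels.shape)
```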
Wirth, Sylvia; Baraduc, Pierre; Planté, Aurélie; Pinède, Serge; Duhamel, Jean-René
2017-01-01
To elucidate how gaze informs the construction of mental space during wayfinding in visual species like primates, we jointly examined navigation behavior, visual exploration, and hippocampal activity as macaque monkeys searched a virtual reality maze for a reward. Cells sensitive to place also responded to one or more variables like head direction, point of gaze, or task context. Many cells fired at the sight (and in anticipation) of a single landmark in a viewpoint- or task-dependent manner, simultaneously encoding the animal’s logical situation within a set of actions leading to the goal. Overall, hippocampal activity was best fit by a fine-grained state space comprising current position, view, and action contexts. Our findings indicate that counterparts of rodent place cells in primates embody multidimensional, task-situated knowledge pertaining to the target of gaze, therein supporting self-awareness in the construction of space. PMID:28241007
Categorical dimensions of human odor descriptor space revealed by non-negative matrix factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chennubhotla, Chakra; Castro, Jason
2013-01-01
In contrast to most other sensory modalities, the basic perceptual dimensions of olfaction remain unclear. Here, we use non-negative matrix factorization (NMF) - a dimensionality reduction technique - to uncover structure in a panel of odor profiles, with each odor defined as a point in multi-dimensional descriptor space. The properties of NMF are favorable for the analysis of such lexical and perceptual data, and lead to a high-dimensional account of odor space. We further provide evidence that odor dimensions apply categorically. That is, odor space is not occupied homogeneously, but rather in a discrete and intrinsically clustered manner. We discuss the potential implications of these results for the neural coding of odors, as well as for developing classifiers on larger datasets that may be useful for predicting perceptual qualities from chemical structures.
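A brief sketch of the factorization step, assuming an odor-by-descriptor matrix of non-negative applicability ratings; the matrix here is random and the number of components is an arbitrary choice, not the value reported in the study.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical non-negative odor-by-descriptor matrix (rows: odors, cols: descriptors).
rng = np.random.default_rng(0)
X = rng.random((144, 146))          # e.g. applicability ratings in [0, 1]

# Factorize X ~ W @ H; rows of H are candidate perceptual dimensions,
# rows of W give each odor's loading on those dimensions.
model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)
H = model.components_

# A crude "categorical" check: how dominant is each odor's strongest dimension?
dominance = W.max(axis=1) / (W.sum(axis=1) + 1e-12)
print(W.shape, H.shape, float(dominance.mean()))
```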
Beyond the continuum: a multi-dimensional phase space for neutral-niche community assembly.
Latombe, Guillaume; Hui, Cang; McGeoch, Melodie A
2015-12-22
Neutral and niche processes are generally considered to interact in natural communities along a continuum, exhibiting community patterns bounded by pure neutral and pure niche processes. The continuum concept uses niche separation, an attribute of the community, to test the hypothesis that communities are bounded by pure niche or pure neutral conditions. It does not accommodate interactions via feedback between processes and the environment. By contrast, we introduce the Community Assembly Phase Space (CAPS), a multi-dimensional space that uses community processes (such as dispersal and niche selection) to define the limiting neutral and niche conditions and to test the continuum hypothesis. We compare the outputs of modelled communities in a heterogeneous landscape, assembled by pure neutral, pure niche and composite processes. Differences in patterns under different combinations of processes in CAPS reveal hidden complexity in neutral-niche community dynamics. The neutral-niche continuum only holds for strong dispersal limitation and niche separation. For weaker dispersal limitation and niche separation, neutral and niche processes amplify each other via feedback with the environment. This generates patterns that lie well beyond those predicted by a continuum. Inferences drawn from patterns about community assembly processes can therefore be misguided when based on the continuum perspective. CAPS also demonstrates the complementary information value of different patterns for inferring community processes and captures the complexity of community assembly. It provides a general tool for studying the processes structuring communities and can be applied to address a range of questions in community and metacommunity ecology. © 2015 The Author(s).
Beyond the continuum: a multi-dimensional phase space for neutral–niche community assembly
Latombe, Guillaume; McGeoch, Melodie A.
2015-01-01
Neutral and niche processes are generally considered to interact in natural communities along a continuum, exhibiting community patterns bounded by pure neutral and pure niche processes. The continuum concept uses niche separation, an attribute of the community, to test the hypothesis that communities are bounded by pure niche or pure neutral conditions. It does not accommodate interactions via feedback between processes and the environment. By contrast, we introduce the Community Assembly Phase Space (CAPS), a multi-dimensional space that uses community processes (such as dispersal and niche selection) to define the limiting neutral and niche conditions and to test the continuum hypothesis. We compare the outputs of modelled communities in a heterogeneous landscape, assembled by pure neutral, pure niche and composite processes. Differences in patterns under different combinations of processes in CAPS reveal hidden complexity in neutral–niche community dynamics. The neutral–niche continuum only holds for strong dispersal limitation and niche separation. For weaker dispersal limitation and niche separation, neutral and niche processes amplify each other via feedback with the environment. This generates patterns that lie well beyond those predicted by a continuum. Inferences drawn from patterns about community assembly processes can therefore be misguided when based on the continuum perspective. CAPS also demonstrates the complementary information value of different patterns for inferring community processes and captures the complexity of community assembly. It provides a general tool for studying the processes structuring communities and can be applied to address a range of questions in community and metacommunity ecology. PMID:26702047
The use of minimal spanning trees in particle physics
Rainbolt, J. Lovelace; Schmitt, M.
2017-02-14
Minimal spanning trees (MSTs) have been used in cosmology and astronomy to distinguish distributions of points in a multi-dimensional space. They are essentially unknown in particle physics, however. We briefly define MSTs and illustrate their properties through a series of examples. We show how they might be applied to study a typical event sample from a collider experiment and conclude that MSTs may prove useful in distinguishing different classes of events.
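A hedged sketch of how MST edge-length statistics can separate clustered from uniform point sets in a multi-dimensional space; the point sets and summary statistics below are illustrative only, not an analysis of collider events.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_edge_lengths(points):
    """Edge lengths of the Euclidean minimal spanning tree of a point set."""
    dist = squareform(pdist(points))                  # dense pairwise distances
    tree = minimum_spanning_tree(dist)                # sparse (n x n) MST
    return tree.data                                  # the n-1 edge weights

rng = np.random.default_rng(0)
clustered = rng.normal(0.0, 0.1, size=(200, 3)) + rng.integers(0, 3, size=(200, 1))
uniform = rng.uniform(0.0, 3.0, size=(200, 3))

# Summary statistics of the MST edge-length distribution can distinguish
# samples that occupy the multi-dimensional space differently.
for name, pts in [("clustered", clustered), ("uniform", uniform)]:
    e = mst_edge_lengths(pts)
    print(name, float(e.mean()), float(e.std()))
```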
The use of minimal spanning trees in particle physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rainbolt, J. Lovelace; Schmitt, M.
Minimal spanning trees (MSTs) have been used in cosmology and astronomy to distinguish distributions of points in a multi-dimensional space. They are essentially unknown in particle physics, however. We briefly define MSTs and illustrate their properties through a series of examples. We show how they might be applied to study a typical event sample from a collider experiment and conclude that MSTs may prove useful in distinguishing different classes of events.
NASA Technical Reports Server (NTRS)
Biegel, Bryan A. (Technical Monitor); Sandstrom, Timothy A.; Henze, Chris; Levit, Creon
2003-01-01
This paper presents the hyperwall, a visualization cluster that uses coordinated visualizations for interactive exploration of multidimensional data and simulations. The system strongly leverages the human eye-brain system with a generous 7x7 array of flat panel LCD screens powered by a Beowulf cluster. With each screen backed by a workstation-class PC, graphics- and compute-intensive applications can be applied to a broad range of data. Navigational tools are presented that allow for investigation of high-dimensional spaces.
Experimental demonstration of a flexible time-domain quantum channel.
Xing, Xingxing; Feizpour, Amir; Hayat, Alex; Steinberg, Aephraim M
2014-10-20
We present an experimental realization of a flexible quantum channel where the Hilbert space dimensionality can be controlled electronically. Using electro-optical modulators (EOM) and narrow-band optical filters, quantum information is encoded and decoded in the temporal degrees of freedom of photons from a long-coherence-time single-photon source. Our results demonstrate the feasibility of a generic scheme for encoding and transmitting multidimensional quantum information over the existing fiber-optical telecommunications infrastructure.
Fully adaptive propagation of the quantum-classical Liouville equation
NASA Astrophysics Data System (ADS)
Horenko, Illia; Weiser, Martin; Schmidt, Burkhard; Schütte, Christof
2004-05-01
In mixed quantum-classical molecular dynamics, few but important degrees of freedom of a dynamical system are modeled quantum-mechanically while the remaining ones are treated within the classical approximation. Rothe methods established in the theory of partial differential equations are used to control both temporal and spatial discretization errors on grounds of a global tolerance criterion. The TRAIL (trapezoidal rule for adaptive integration of Liouville dynamics) scheme [I. Horenko and M. Weiser, J. Comput. Chem. 24, 1921 (2003)] has been extended to account for nonadiabatic effects in molecular dynamics described by the quantum-classical Liouville equation. In the context of particle methods, the quality of the spatial approximation of the phase-space distributions is maximized while the numerical condition of the least-squares problem for the parameters of particles is minimized. The resulting dynamical scheme is based on a simultaneous propagation of moving particles (Gaussian and Dirac deltalike trajectories) in phase space employing a fully adaptive strategy to upgrade Dirac to Gaussian particles and, vice versa, downgrading Gaussians to Dirac-type trajectories. This allows for the combination of Monte-Carlo-based strategies for the sampling of densities and coherences in multidimensional problems with deterministic treatment of nonadiabatic effects. Numerical examples demonstrate the application of the method to spin-boson systems in different dimensionality. Nonadiabatic effects occurring at conical intersections are treated in the diabatic representation. By decreasing the global tolerance, the numerical solutions obtained from the TRAIL scheme are shown to converge towards the exact results.
Fully adaptive propagation of the quantum-classical Liouville equation.
Horenko, Illia; Weiser, Martin; Schmidt, Burkhard; Schütte, Christof
2004-05-15
In mixed quantum-classical molecular dynamics, few but important degrees of freedom of a dynamical system are modeled quantum-mechanically while the remaining ones are treated within the classical approximation. Rothe methods established in the theory of partial differential equations are used to control both temporal and spatial discretization errors on grounds of a global tolerance criterion. The TRAIL (trapezoidal rule for adaptive integration of Liouville dynamics) scheme [I. Horenko and M. Weiser, J. Comput. Chem. 24, 1921 (2003)] has been extended to account for nonadiabatic effects in molecular dynamics described by the quantum-classical Liouville equation. In the context of particle methods, the quality of the spatial approximation of the phase-space distributions is maximized while the numerical condition of the least-squares problem for the parameters of particles is minimized. The resulting dynamical scheme is based on a simultaneous propagation of moving particles (Gaussian and Dirac deltalike trajectories) in phase space employing a fully adaptive strategy to upgrade Dirac to Gaussian particles and, vice versa, downgrading Gaussians to Dirac-type trajectories. This allows for the combination of Monte-Carlo-based strategies for the sampling of densities and coherences in multidimensional problems with deterministic treatment of nonadiabatic effects. Numerical examples demonstrate the application of the method to spin-boson systems in different dimensionality. Nonadiabatic effects occurring at conical intersections are treated in the diabatic representation. By decreasing the global tolerance, the numerical solutions obtained from the TRAIL scheme are shown to converge towards the exact results.
Enhanced momentum feedback from clustered supernovae
NASA Astrophysics Data System (ADS)
Gentry, Eric S.; Krumholz, Mark R.; Dekel, Avishai; Madau, Piero
2017-02-01
Young stars typically form in star clusters, so the supernovae (SNe) they produce are clustered in space and time. This clustering of SNe may alter the momentum per SN deposited in the interstellar medium (ISM) by affecting the local ISM density, which in turn affects the cooling rate. We study the effect of multiple SNe using idealized 1D hydrodynamic simulations which explore a large parameter space of the number of SNe, and the background gas density and metallicity. The results are provided as a table and an analytic fitting formula. We find that for clusters with up to ˜100 SNe, the asymptotic momentum scales superlinearly with the number of SNe, resulting in a momentum per SN which can be an order of magnitude larger than for a single SN, with a maximum efficiency for clusters with 10-100 SNe. We argue that additional physical processes not included in our simulations - self-gravity, breakout from a galactic disc, and galactic shear - can slightly reduce the momentum enhancement from clustering, but the average momentum per SN still remains a factor of 4 larger than the isolated SN value when averaged over a realistic cluster mass function for a star-forming galaxy. We conclude with a discussion of the possible role of mixing between hot and cold gas, induced by multidimensional instabilities or pre-existing density variations, as a limiting factor in the build-up of momentum by clustered SNe, and suggest future numerical experiments to explore these effects.
[The emotional characteristics of the sounding word].
Videneeva, N M; Khludova, O O; Vartanov, A V
2000-01-01
The four-dimensional spherical emotional space has been obtained by multi-dimensional scaling of subjective differences between the emotional expressions in sound samples (the words "Yes" and "No" pronounced in different emotional conditions). Euclidean space axes are interpreted as the following neural mechanisms. The first two dimensions are related to the estimation of the sign of the emotional condition: dimension 1--pleasant/unpleasant, useful or not; dimension 2--the extent of information certainty. The third and the fourth axes are associated with the incentive. Dimension 3 encodes an active (anger) or passive (fear) defensive reaction, and dimension 4 corresponds to achievement. Three angles of the four-dimensional hypersphere: the one between axes 1 and 2, the second between axes 3 and 4, and the third between these two planes, determine subjectively experienced emotion characteristics such as those described by Wundt: emotion modality (pleasure-unpleasure), excitation-quietness-suppression, and tension-relaxation, respectively. Thus, the first and the second angles regulate the modality of ten basic emotions: five emotions determined by a situation and five emotions determined by personal activity. In the case of another system of angular parameters (three angles between axes 4 and 1, 3 and 2, and the angle between the respective planes), another system of emotion classification, which is usually described in studies of facial expressions (Shlosberg's and Izmaĭlov's circular systems) and semantics (Osgood), can be realized: emotion modality or sign (regulating 6 basic emotions), emotion activity or brightness (excitation-rest), and emotion saturation (strength of emotion expression).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powalka, Mathieu; Lançon, Ariane; Duc, Pierre-Alain
Large samples of globular clusters (GC) with precise multi-wavelength photometry are becoming increasingly available and can be used to constrain the formation history of galaxies. We present the results of an analysis of Milky Way (MW) and Virgo core GCs based on 5 optical-near-infrared colors and 10 synthetic stellar population models. For the MW GCs, the models tend to agree on photometric ages and metallicities, with values similar to those obtained with previous studies. When used with Virgo core GCs, for which photometry is provided by the Next Generation Virgo cluster Survey (NGVS), the same models generically return younger ages. This is a consequence of the systematic differences observed between the locus occupied by Virgo core GCs and models in panchromatic color space. Only extreme fine-tuning of the adjustable parameters available to us can make the majority of the best-fit ages old. Although we cannot exclude that the formation history of the Virgo core may lead to more conspicuous populations of relatively young GCs than in other environments, we emphasize that the intrinsic properties of the Virgo GCs are likely to differ systematically from those assumed in the models. Thus, the large wavelength coverage and photometric quality of modern GC samples, such as those used here, is not by itself sufficient to better constrain the GC formation histories. Models matching the environment-dependent characteristics of GCs in multi-dimensional color space are needed to improve the situation.
Linear Power Spectra in Cold+Hot Dark Matter Models: Analytical Approximations and Applications
NASA Astrophysics Data System (ADS)
Ma, Chung-Pei
1996-11-01
This paper presents simple analytic approximations to the linear power spectra, linear growth rates, and rms mass fluctuations for both components in a family of cold + hot dark matter (CDM + HDM) models that are of current cosmological interest. The formulas are valid for a wide range of wavenumbers, neutrino fractions, redshifts, and Hubble constants: k ≤ 10 h Mpc^-1, 0.05 ≤ Ωv ≤ 0.3, 0 ≤ z ≤ 15, and 0.5 ≤ h ≤ 0.8. A new, redshift-dependent shape parameter, Γv = a^(1/2) Ωv h^2, is introduced to simplify the multidimensional parameter space and to characterize the effect of massive neutrinos on the power spectrum. The physical origin of Γv lies in the neutrino free-streaming process, and the analytic approximations can be simplified to depend only on this variable and Ωv. Linear calculations with these power spectra as input are performed to compare the predictions of Ωv ≤ 0.3 models with observational constraints from the reconstructed linear power spectrum and cluster abundance. The usual assumption of an exact scale-invariant primordial power spectrum is relaxed to allow a spectral index of 0.8 ≤ n ≤ 1. It is found that a slight tilt of n = 0.9 (no tensor mode) or n = 0.95 (with tensor mode) in Ωv = 0.1-0.2 CDM + HDM models gives a power spectrum similar to that of an open CDM model with a shape parameter Γ = 0.25, providing good agreement with the power spectrum reconstructed by Peacock & Dodds and the observed cluster abundance at low redshifts. Late galaxy formation at high redshifts, however, will be a more severe problem in tilted models.
NASA Astrophysics Data System (ADS)
Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter
2017-02-01
It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation when determining the underlying states of nature of the materials or parts being tested. Despite and sometimes due to the richness of data, significant challenges arise in the interpretation, manifested as ambiguities and inconsistencies caused by various uncertain factors in the physical properties (inputs), the environment, measurement device properties, human error, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real-time applications. In this work, we discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements, which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of the posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We illustrate the implementation of the HBGM approach with ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
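The overall workflow (priors over physical properties, an offline forward model, and a classifier applied to new data in real time) can be sketched as below. This is not the authors' hierarchical Bayesian network; the impedance values, forward model, and naive-Bayes classifier are stand-in assumptions used only to show the shape of the pipeline. A full HBGM would replace the naive-Bayes step with inference over a graphical model whose priors are the stated Gaussian mixtures.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Prior over physical states, here two hypothetical bonding conditions behind casing.
def sample_state(n, label):
    mu = {0: 2.0, 1: 6.0}[label]          # hypothetical acoustic impedance (MRayl)
    return rng.normal(mu, 0.8, size=n)

def forward_model(impedance):
    """Stand-in forward model mapping state to two measurement channels."""
    atten = 0.5 * impedance + rng.normal(0, 0.3, impedance.shape)
    flexural = np.exp(-0.2 * impedance) + rng.normal(0, 0.05, impedance.shape)
    return np.column_stack([atten, flexural])

# Offline training set generated from the priors via the forward model.
X = np.vstack([forward_model(sample_state(500, k)) for k in (0, 1)])
y = np.repeat([0, 1], 500)
clf = GaussianNB().fit(X, y)

# Real-time use: classify a new observation and report posterior probabilities.
new_obs = forward_model(np.array([5.5]))
print(clf.predict(new_obs), clf.predict_proba(new_obs))
```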
De Martino, Federico; Gentile, Francesco; Esposito, Fabrizio; Balsi, Marco; Di Salle, Francesco; Goebel, Rainer; Formisano, Elia
2007-01-01
We present a general method for the classification of independent components (ICs) extracted from functional MRI (fMRI) data sets. The method consists of two steps. In the first step, each fMRI-IC is associated with an IC-fingerprint, i.e., a representation of the component in a multidimensional space of parameters. These parameters are post hoc estimates of global properties of the ICs and are largely independent of a specific experimental design and stimulus timing. In the second step a machine learning algorithm automatically separates the IC-fingerprints into six general classes after preliminary training performed on a small subset of expert-labeled components. We illustrate this approach in a multisubject fMRI study employing visual structure-from-motion stimuli encoding faces and control random shapes. We show that: (1) IC-fingerprints are a valuable tool for the inspection, characterization and selection of fMRI-ICs and (2) automatic classifications of fMRI-ICs in new subjects present a high correspondence with those obtained by expert visual inspection of the components. Importantly, our classification procedure highlights several neurophysiologically interesting processes. The most intriguing of which is reflected, with high intra- and inter-subject reproducibility, in one IC exhibiting a transiently task-related activation in the 'face' region of the primary sensorimotor cortex. This suggests that in addition to or as part of the mirror system, somatotopic regions of the sensorimotor cortex are involved in disambiguating the perception of a moving body part. Finally, we show that the same classification algorithm can be successfully applied, without re-training, to fMRI collected using acquisition parameters, stimulation modality and timing considerably different from those used for training.
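A compact sketch of the second step, training a classifier on a small expert-labeled subset of IC-fingerprints and applying it to the rest; the fingerprint features, class count, and support-vector classifier are illustrative assumptions rather than the exact setup of the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical IC-fingerprints: one row per independent component, columns are
# post hoc global properties (e.g. clustering, skewness, spectral power, ...).
n_components, n_features = 300, 11
X = rng.normal(size=(n_components, n_features))
y = rng.integers(0, 6, size=n_components)     # six general classes (expert labels)

# Train on a small expert-labeled subset, then classify the remaining components.
n_labeled = 60
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:n_labeled], y[:n_labeled])
predicted = clf.predict(X[n_labeled:])
print(predicted[:10])
```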
Learning surface molecular structures via machine vision
NASA Astrophysics Data System (ADS)
Ziatdinov, Maxim; Maksov, Artem; Kalinin, Sergei V.
2017-08-01
Recent advances in high resolution scanning transmission electron and scanning probe microscopies have allowed researchers to perform measurements of material structural parameters and functional properties in real space with picometre precision. In many technologically relevant atomic and/or molecular systems, however, the information of interest is distributed spatially in a non-uniform manner and may have a complex multi-dimensional nature. One of the critical issues, therefore, lies in being able to accurately identify (`read out') all the individual building blocks in different atomic/molecular architectures, as well as more complex patterns that these blocks may form, on a scale of hundreds and thousands of individual atomic/molecular units. Here we employ machine vision to read and recognize complex molecular assemblies on surfaces. Specifically, we combine a Markov random field model with convolutional neural networks to classify structural and rotational states of all individual building blocks in a molecular assembly on a metallic surface visualized in high-resolution scanning tunneling microscopy measurements. We show how the obtained full decoding of the system allows us to directly construct a pair density function—a centerpiece in the analysis of the disorder-property relationship paradigm—as well as to analyze spatial correlations between multiple order parameters at the nanoscale, and to elucidate a reaction pathway involving molecular conformation changes. The method represents a significant shift in our way of analyzing atomic and/or molecular resolved microscopic images and can be applied to a variety of other microscopic measurements of structural, electronic, and magnetic orders in different condensed matter systems.
The psychomechanics of simulated sound sources: Material properties of impacted bars
NASA Astrophysics Data System (ADS)
McAdams, Stephen; Chaigne, Antoine; Roussarie, Vincent
2004-03-01
Sound can convey information about the materials composing an object that are often not directly available to the visual system. Material and geometric properties of synthesized impacted bars with a tube resonator were varied, their perceptual structure was inferred from multidimensional scaling of dissimilarity judgments, and the psychophysical relations between the two were quantified. Constant cross-section bars varying in mass density and viscoelastic damping coefficient were synthesized with a physical model in experiment 1. A two-dimensional perceptual space resulted, and the dimensions were correlated with the mechanical parameters after applying a power-law transformation. Variable cross-section bars varying in length and viscoelastic damping coefficient were synthesized in experiment 2 with two sets of lengths creating high- and low-pitched bars. In the low-pitched bars, there was a coupling between the bar and the resonator that modified the decay characteristics. Perceptual dimensions again corresponded to the mechanical parameters. A set of potential temporal, spectral, and spectrotemporal correlates of the auditory representation were derived from the signal. The dimensions related to mass density and bar length were correlated with the frequency of the lowest partial and are related to pitch perception. The correlate most likely to represent the viscoelastic damping coefficient across all three stimulus sets is a linear combination of a decay constant derived from the temporal envelope and the spectral center of gravity derived from a cochlear representation of the signal. These results attest to the perceptual salience of energy-loss phenomena in sound source behavior.
Statistical Estimation of Orbital Debris Populations with a Spectrum of Object Size
NASA Technical Reports Server (NTRS)
Xu, Y. -l; Horstman, M.; Krisko, P. H.; Liou, J. -C; Matney, M.; Stansbery, E. G.; Stokely, C. L.; Whitlock, D.
2008-01-01
Orbital debris is a real concern for the safe operations of satellites. In general, the hazard of debris impact is a function of the size and spatial distributions of the debris populations. To describe and characterize the debris environment as reliably as possible, the current NASA Orbital Debris Engineering Model (ORDEM2000) is being upgraded to a new version based on new and better quality data. The data-driven ORDEM model covers a wide range of object sizes from 10 microns to greater than 1 meter. This paper reviews the statistical process for the estimation of the debris populations in the new ORDEM upgrade, and discusses the representation of large-size (greater than or equal to 1 m and greater than or equal to 10 cm) populations by SSN catalog objects and the validation of the statistical approach. Also, it presents results for the populations with sizes of greater than or equal to 3.3 cm, greater than or equal to 1 cm, greater than or equal to 100 micrometers, and greater than or equal to 10 micrometers. The orbital debris populations used in the new version of ORDEM are inferred from data based upon appropriate reference (or benchmark) populations instead of the binning of the multi-dimensional orbital-element space. This paper describes all of the major steps used in the population-inference procedure for each size-range. Detailed discussions on data analysis, parameter definition, the correlation between parameters and data, and uncertainty assessment are included.
GPS Technologies as a Tool to Detect the Pre-Earthquake Signals Associated with Strong Earthquakes
NASA Astrophysics Data System (ADS)
Pulinets, S. A.; Krankowski, A.; Hernandez-Pajares, M.; Liu, J. Y. G.; Hattori, K.; Davidenko, D.; Ouzounov, D.
2015-12-01
The existence of ionospheric anomalies before earthquakes is now widely accepted. These phenomena have started to be considered by the GPS community as a means to mitigate GPS signal degradation over territories of earthquake preparation. The question is still open whether they could be useful for seismology and for short-term earthquake forecasting. More than a decade of intensive studies has proved that ionospheric anomalies registered before earthquakes are initiated by processes in the boundary layer of the atmosphere over the earthquake preparation zone and are induced in the ionosphere by electromagnetic coupling through the Global Electric Circuit. A multiparameter approach based on the Lithosphere-Atmosphere-Ionosphere Coupling model demonstrated that earthquake forecast is possible only if we consider the final stage of earthquake preparation in a multidimensional space where every dimension is one of many precursors in an ensemble, and they are synergistically connected. We demonstrate approaches developed in different countries (Russia, Taiwan, Japan, Spain, and Poland, within the framework of the ISSI and ESA projects) to identify the ionospheric precursors. They are also useful to determine all three parameters necessary for an earthquake forecast: the impending earthquake epicenter position, expectation time, and magnitude. These parameters are calculated using different technologies of GPS signal processing: time series, correlation, spectral analysis, ionospheric tomography, wave propagation, etc. The results obtained by the different teams demonstrate a high level of statistical significance and physical justification, which gives us reason to suggest these methodologies for practical validation.
Big Data Analytics for Prostate Radiotherapy
Coates, James; Souhami, Luis; El Naqa, Issam
2016-01-01
Radiation therapy is a first-line treatment option for localized prostate cancer, and radiation-induced normal tissue damage is often the main limiting factor for modern radiotherapy regimens. Conversely, under-dosing of target volumes in an attempt to spare adjacent healthy tissues limits the likelihood of achieving local, long-term control. Thus, the ability to generate personalized data-driven risk profiles for radiotherapy outcomes would provide valuable prognostic information to help guide both clinicians and patients alike. Big data applied to radiation oncology promises to deliver a better understanding of outcomes by harvesting and integrating heterogeneous data types, including patient-specific clinical parameters, treatment-related dose–volume metrics, and biological risk factors. When taken together, such variables make up the basis for a multi-dimensional space (the “RadoncSpace”) in which the presented modeling techniques search in order to identify significant predictors. Herein, we review outcome modeling and big data-mining techniques for both tumor control and radiotherapy-induced normal tissue effects. We apply many of the presented modeling approaches to a cohort of hypofractionated prostate cancer patients, taking into account different data types and a large heterogeneous mix of physical and biological parameters. Cross-validation techniques are also reviewed for the refinement of the proposed framework architecture and checking individual model performance. We conclude by considering advanced modeling techniques that borrow concepts from big data analytics, such as machine learning and artificial intelligence, before discussing the potential future impact of systems radiobiology approaches. PMID:27379211
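A minimal sketch of the cross-validated outcome-modeling step on synthetic data; the covariates, outcome model, and penalized logistic regression are illustrative stand-ins, not the cohort or models analyzed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical patient cohort: dosimetric, clinical and biological covariates
# (columns) versus a binary toxicity outcome.
X = rng.normal(size=(120, 8))
logit = -1.0 + 1.2 * X[:, 0] + 0.8 * X[:, 3]          # illustrative "true" signal
y = (rng.random(120) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Penalized logistic regression scored by cross-validated AUC guards against
# overfitting when the predictor space is large relative to the cohort size.
model = LogisticRegression(penalty="l2", C=0.5, max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(auc.mean(), auc.std())
```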
Hörmander multipliers on two-dimensional dyadic Hardy spaces
NASA Astrophysics Data System (ADS)
Daly, J.; Fridli, S.
2008-12-01
In this paper we are interested in conditions on the coefficients of a two-dimensional Walsh multiplier operator that imply the operator is bounded on certain of the Hardy type spaces Hp, 0
Jiang, Canping; Flansburg, Lisa; Ghose, Sanchayita; Jorjorian, Paul; Shukla, Abhinav A
2010-12-15
The concept of design space has been taking root under the quality by design paradigm as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. This paper outlines the development of a design space for a hydrophobic interaction chromatography (HIC) process step. The design space included the impact of raw material lot-to-lot variability and variations in the feed stream from cell culture. A failure modes and effects analysis was employed as the basis for the process characterization exercise. During mapping of the process design space, the multi-dimensional combination of operational variables were studied to quantify the impact on process performance in terms of yield and product quality. Variability in resin hydrophobicity was found to have a significant influence on step yield and high-molecular weight aggregate clearance through the HIC step. A robust operating window was identified for this process step that enabled a higher step yield while ensuring acceptable product quality. © 2010 Wiley Periodicals, Inc.
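The design-space mapping can be sketched as a grid evaluation over operational variables with resin lot variability folded in as a worst-case condition; the variable ranges, the step_response model, and the acceptance criteria below are hypothetical placeholders, not the characterized HIC step.

```python
import itertools
import numpy as np

# Hypothetical operational variables for the chromatography step (ranges illustrative).
salt_conc_M = np.linspace(0.6, 1.2, 7)       # load salt concentration (M)
load_g_per_L = np.linspace(20, 60, 9)        # resin loading (g protein / L resin)
hydrophobicity = [0.9, 1.0, 1.1]             # relative resin-lot hydrophobicity

def step_response(salt, load, hphob):
    """Stand-in response model returning (step yield %, HMW aggregate %)."""
    yield_pct = 95 - 0.3 * (load - 30) - 15 * (1.1 - salt) * hphob
    hmw_pct = 0.8 + 0.02 * load * hphob - 0.5 * salt
    return yield_pct, max(hmw_pct, 0.0)

# Map the multidimensional space and keep the operating points that meet the
# acceptance criteria for *every* resin lot, i.e. the robust operating window.
window = []
for salt, load in itertools.product(salt_conc_M, load_g_per_L):
    responses = [step_response(salt, load, h) for h in hydrophobicity]
    if all(yl >= 80.0 and hm <= 1.5 for yl, hm in responses):
        window.append((round(float(salt), 2), round(float(load), 1)))

print(len(window), window[:3])
```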
Chemical Space: Big Data Challenge for Molecular Diversity.
Awale, Mahendra; Visini, Ricardo; Probst, Daniel; Arús-Pous, Josep; Reymond, Jean-Louis
2017-10-25
Chemical space describes all possible molecules as well as multi-dimensional conceptual spaces representing the structural diversity of these molecules. Part of this chemical space is available in public databases ranging from thousands to billions of compounds. Exploiting these databases for drug discovery represents a typical big data problem limited by computational power, data storage and data access capacity. Here we review recent developments of our laboratory, including progress in the chemical universe databases (GDB) and the fragment subset FDB-17, tools for ligand-based virtual screening by nearest neighbor searches, such as our multi-fingerprint browser for the ZINC database to select purchasable screening compounds, and their application to discover potent and selective inhibitors for calcium channel TRPV6 and Aurora A kinase, the polypharmacology browser (PPB) for predicting off-target effects, and finally interactive 3D-chemical space visualization using our online tools WebDrugCS and WebMolCS. All resources described in this paper are available for public use at www.gdb.unibe.ch.
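Nearest-neighbour searching over molecular fingerprints, the operation underlying such browsers, can be sketched with RDKit (an assumption here; the paper's own tools are separate web applications). The tiny SMILES list stands in for a database of millions to billions of compounds.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# A tiny stand-in "database"; real searches run against millions of SMILES.
database = ["CCO", "CCN", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "CCOC(=O)C"]
query = "CCOCC"

def fingerprint(smiles):
    """Morgan (ECFP4-like) bit fingerprint for a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

q_fp = fingerprint(query)
hits = sorted(
    ((DataStructs.TanimotoSimilarity(q_fp, fingerprint(s)), s) for s in database),
    reverse=True,
)
for sim, smi in hits[:3]:
    print(f"{sim:.2f}  {smi}")
```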
Laboratory tools and e-learning elements in training of acousto-optics
NASA Astrophysics Data System (ADS)
Barócsi, Attila; Lenk, Sándor; Ujhelyi, Ferenc; Majoros, Tamás; Maák, Pál
2015-10-01
Due to the acousto-optic (AO) effect, the refractive index of an optical interaction medium is perturbed by an acoustic wave induced in the medium, building up a phase grating that diffracts the incident light beam if the condition of constructive interference is satisfied. All parameters of the grating, such as magnitude, period, or phase, can be controlled, which allows the construction of useful devices (modulators, switches, one- or multi-dimensional deflectors, spectrum analyzers, tunable filters, frequency shifters, etc.). Research and training in acousto-optics have a long tradition at our department. In this presentation, we introduce the related laboratory exercises fitted into an e-learning framework. The BSc-level exercise utilizes a laser source and an AO cell to demonstrate the effect and principal AO functions, explaining signal-processing terms such as amplitude or frequency modulation, modulation depth and Fourier transformation, and culminates in building a free-space sound transmission and demodulation system. The setup for MSc level utilizes an AO filter with mono- and polychromatic light sources to teach spectral analysis and synthesis. Smartphones can be used to generate signal inputs or outputs for both setups as well as to help students' preparation and reporting.
Computer-aided biochemical programming of synthetic microreactors as diagnostic devices.
Courbet, Alexis; Amar, Patrick; Fages, François; Renard, Eric; Molina, Franck
2018-04-26
Biological systems have evolved efficient sensing and decision-making mechanisms to maximize fitness in changing molecular environments. Synthetic biologists have exploited these capabilities to engineer control over information and energy processing in living cells. While engineered organisms pose important technological and ethical challenges, de novo assembly of non-living biomolecular devices could offer promising avenues toward various real-world applications. However, assembling biochemical parts into functional information processing systems has remained challenging due to extensive multidimensional parameter spaces that must be sampled comprehensively in order to identify robust, specification-compliant molecular implementations. We introduce a systematic methodology based on automated computational design and microfluidics enabling the programming of synthetic cell-like microreactors embedding biochemical logic circuits, or protosensors, to perform accurate biosensing and biocomputing operations in vitro according to temporal logic specifications. We show that proof-of-concept protosensors integrating diagnostic algorithms detect specific patterns of biomarkers in human clinical samples. Protosensors may enable novel approaches to medicine and represent a step toward autonomous micromachines capable of precise interfacing of human physiology or other complex biological environments, ecosystems, or industrial bioprocesses. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.
One-dimensional cuts through multidimensional potential-energy surfaces by tunable x rays
NASA Astrophysics Data System (ADS)
Eckert, Sebastian; da Cruz, Vinícius Vaz; Gel'mukhanov, Faris; Ertan, Emelie; Ignatova, Nina; Polyutov, Sergey; Couto, Rafael C.; Fondell, Mattis; Dantz, Marcus; Kennedy, Brian; Schmitt, Thorsten; Pietzsch, Annette; Odelius, Michael; Föhlisch, Alexander
2018-05-01
The concept of the potential-energy surface (PES) and directional reaction coordinates is the backbone of our description of chemical reaction mechanisms. Although the eigenenergies of the nuclear Hamiltonian uniquely link a PES to its spectrum, this information is in general experimentally inaccessible in large polyatomic systems. This is due to (near) degenerate rovibrational levels across the parameter space of all degrees of freedom, which effectively forms a pseudospectrum given by the centers of gravity of groups of close-lying vibrational levels. We show here that resonant inelastic x-ray scattering (RIXS) constitutes an ideal probe for revealing one-dimensional cuts through the ground-state PES of molecular systems, even far away from the equilibrium geometry, where the independent-mode picture is broken. We strictly link the center of gravity of close-lying vibrational peaks in RIXS to a pseudospectrum which is shown to coincide with the eigenvalues of an effective one-dimensional Hamiltonian along the propagation coordinate of the core-excited wave packet. This concept, combined with directional and site selectivity of the core-excited states, allows us to experimentally extract cuts through the ground-state PES along three complementary directions for the showcase H2O molecule.
Optical image encryption method based on incoherent imaging and polarized light encoding
NASA Astrophysics Data System (ADS)
Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C.
2018-05-01
We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with incoherent imaging. Incoherent imaging is the core component of this proposal, in which the incoherent point-spread function (PSF) of the imaging system serves as the main key to encode the input intensity distribution through a convolution operation. An array of retarders and polarizers is placed on the input plane of the imaging structure to encrypt the polarization state of light based on Mueller polarization calculus. The proposal makes full use of the randomness of the polarization parameters and the incoherent PSF, so that a multidimensional key space is generated to deal with illegal attacks. Mueller polarization calculus and incoherent illumination of the imaging structure ensure that only intensity information is manipulated. Another key advantage is that complicated processing and recording related to a complex-valued signal are avoided. The encoded information is just an intensity distribution, which is advantageous for data storage and transmission because the information expansion accompanying conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
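The encoding core, an intensity convolution with a random PSF that acts as the key, can be sketched numerically as below (the polarization-encoding layer is omitted); the test image, PSF, and regularized inverse filter are illustrative assumptions, and circular convolution is used so the decoding step is exact in this toy setting.

```python
import numpy as np

rng = np.random.default_rng(42)

# Plaintext image (intensity only) and a random incoherent PSF acting as the key.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0                         # simple test pattern
psf = rng.random((64, 64))
psf /= psf.sum()                                # normalize the point-spread function

# Encoding: incoherent imaging acts as an intensity convolution with the PSF
# (modeled here as a circular convolution via the FFT).
H = np.fft.fft2(psf)
cipher = np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Decoding sketch: regularized inverse filtering with knowledge of the PSF key
# (a stand-in for the digital or optoelectronic decryption the paper mentions).
eps = 1e-6
recovered = np.real(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(H) / (np.abs(H) ** 2 + eps)))

print(float(np.corrcoef(img.ravel(), recovered.ravel())[0, 1]))
```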
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ki Ha; Becker, Alex; Tseng, Hung-Wen
2004-06-16
Non-invasive, high-resolution imaging of the shallow subsurface is needed for delineation of buried waste, detection of unexploded ordnance, verification and monitoring of containment structures, and other environmental applications. Electromagnetic (EM) measurements at frequencies between 0.1 and 100 MHz are important for such applications, because the induction number of many targets is small and the ability to determine the dielectric permittivity in addition to electrical conductivity of the subsurface is possible. Earlier workers were successful in developing systems for detecting anomalous areas, but no quantifiable information was accurately determined. For high-resolution imaging, accurate measurements are necessary so the field data can be mapped into the space of the subsurface parameters. We are developing a non-invasive method for accurately mapping the electrical conductivity and dielectric permittivity of the shallow subsurface using the EM impedance approach (Frangos, 2001; Lee and Becker, 2001; Song et al., 2002; Tseng et al., 2003). Electric and magnetic sensors are being tested and calibrated on sea water and in a known area against theoretical predictions, thereby ensuring that the data collected with the high-frequency impedance (HFI) system will support high-resolution, multi-dimensional imaging techniques.
NASA Astrophysics Data System (ADS)
Raithel, Carolyn A.; Özel, Feryal; Psaltis, Dimitrios
2017-08-01
One of the key goals of observing neutron stars is to infer the equation of state (EoS) of the cold, ultradense matter in their interiors. Here, we present a Bayesian statistical method of inferring the pressures at five fixed densities, from a sample of mock neutron star masses and radii. We show that while five polytropic segments are needed for maximum flexibility in the absence of any prior knowledge of the EoS, regularizers are also necessary to ensure that simple underlying EoS are not over-parameterized. For ideal data with small measurement uncertainties, we show that the pressure at roughly twice the nuclear saturation density, ρ_sat, can be inferred to within 0.3 dex for many realizations of potential sources of uncertainties. The pressures of more complicated EoS with significant phase transitions can also be inferred to within ˜30%. We also find that marginalizing the multi-dimensional parameter space of pressure to infer a mass-radius relation can lead to biases of nearly 1 km in radius, toward larger radii. Using the full, five-dimensional posterior likelihoods avoids this bias.
Auditory salience using natural soundscapes.
Huang, Nicholas; Elhilali, Mounya
2017-03-01
Salience describes the phenomenon by which an object stands out from a scene. While its underlying processes are extensively studied in vision, mechanisms of auditory salience remain largely unknown. Previous studies have used well-controlled auditory scenes to shed light on some of the acoustic attributes that drive the salience of sound events. Unfortunately, the use of constrained stimuli in addition to a lack of well-established benchmarks of salience judgments hampers the development of comprehensive theories of sensory-driven auditory attention. The present study explores auditory salience in a set of dynamic natural scenes. A behavioral measure of salience is collected by having human volunteers listen to two concurrent scenes and indicate continuously which one attracts their attention. By using natural scenes, the study takes a data-driven rather than experimenter-driven approach to exploring the parameters of auditory salience. The findings indicate that the space of auditory salience is multidimensional (spanning loudness, pitch, spectral shape, as well as other acoustic attributes), nonlinear and highly context-dependent. Importantly, the results indicate that contextual information about the entire scene over both short and long scales needs to be considered in order to properly account for perceptual judgments of salience.
Spatial averaging for small molecule diffusion in condensed phase environments
NASA Astrophysics Data System (ADS)
Plattner, Nuria; Doll, J. D.; Meuwly, Markus
2010-07-01
Spatial averaging is a new approach for sampling rare-event problems. The approach modifies the importance function which improves the sampling efficiency while keeping a defined relation to the original statistical distribution. In this work, spatial averaging is applied to multidimensional systems for typical problems arising in physical chemistry. They include (I) a CO molecule diffusing on an amorphous ice surface, (II) a hydrogen molecule probing favorable positions in amorphous ice, and (III) CO migration in myoglobin. The systems encompass a wide range of energy barriers and for all of them spatial averaging is found to outperform conventional Metropolis Monte Carlo. It is also found that optimal simulation parameters are surprisingly similar for the different systems studied, in particular, the radius of the point cloud over which the potential energy function is averaged. For H2 diffusing in amorphous ice it is found that facile migration is possible which is in agreement with previous suggestions from experiment. The free energy barriers involved are typically lower than 1 kcal/mol. Spatial averaging simulations for CO in myoglobin are able to locate all currently characterized metastable states. Overall, it is found that spatial averaging considerably improves the sampling of configurational space.
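A minimal sketch of the idea, assuming a toy one-dimensional rugged potential: the Metropolis acceptance uses an energy averaged over a point cloud of fixed radius around each proposal, which smooths barriers and typically speeds up exploration. The potential, cloud radius, and temperature are illustrative; the published method defines the modified importance function more carefully so that the relation to the original distribution is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)

def potential(x):
    """Toy rugged 1-D potential with many local minima."""
    return 0.05 * x**2 + np.sin(5.0 * x)

def averaged_potential(x, radius=0.4, n_cloud=16):
    """Average the potential over a point cloud of given radius around x."""
    cloud = x + rng.uniform(-radius, radius, size=n_cloud)
    return potential(cloud).mean()

def metropolis(energy_fn, x0=0.0, beta=5.0, steps=20000, step_size=0.3):
    """Plain Metropolis sampler using the supplied (possibly averaged) energy."""
    x, e = x0, energy_fn(x0)
    samples = np.empty(steps)
    for i in range(steps):
        x_new = x + rng.normal(0.0, step_size)
        e_new = energy_fn(x_new)
        if e_new < e or rng.random() < np.exp(-beta * (e_new - e)):
            x, e = x_new, e_new
        samples[i] = x
    return samples

plain = metropolis(potential)
smoothed = metropolis(averaged_potential)   # smoother surface, easier barrier crossing
print(plain.std(), smoothed.std())          # the averaged run typically explores more widely
```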
Bento-Torres, Natáli Valim Oliver; Rodrigues, Anderson Raiol; Côrtes, Maria Izabel Tentes; Bonci, Daniela Maria de Oliveira; Ventura, Dora Fix
2016-01-01
We have used the Farnsworth-Munsell 100-hue (FM 100) test and Mollon-Reffin (MR) test to evaluate the colour vision of 93 subjects, 30.4 ± 9.7 years old, who had red-green congenital colour vision deficiencies. All subjects lived in Belém (State of Pará, Brazil) and were selected by the State of Pará Traffic Department. Selection criteria comprised the absence of visual dysfunctions other than Daltonism and no history of systemic diseases that could impair the visual system performance. Results from the colour vision deficient subjects were compared with those from 127 normal trichromats, 29.3 ± 10.3 years old. For the MR test, measurements were taken around five points of the CIE 1976 colour space, along 20 directions irradiating from each point, in order to determine, with high resolution, the corresponding colour discrimination ellipses (MacAdam ellipses). Three parameters were used to compare results obtained from different subjects: the diameter of a circle with the same area as the ellipse, the ratio between the ellipse’s long and short axes, and the angle of the ellipse’s long axis. For the FM 100 test, the parameters were: the logarithm of the total number of mistakes and the positions of mistakes in the FM diagram. Data were also simultaneously analysed in two or three dimensions as well as by using multidimensional cluster analysis. For the MR test, Mollon-Reffin Ellipse #3 (u’ = 0.225, v’ = 0.415) discriminated more efficiently than the other four ellipses between protans and deutans, since it provided a larger angular difference in the colour space between protan and deutan confusion lines. The MR test was more sensitive than the FM 100 test. It separated individuals by dysfunctional groups with greater precision, provided a more sophisticated quantitative analysis, and its use is appropriate for a more refined evaluation of different phenotypes of red-green colour vision deficiencies. PMID:27101124
Bento-Torres, Natáli Valim Oliver; Rodrigues, Anderson Raiol; Côrtes, Maria Izabel Tentes; Bonci, Daniela Maria de Oliveira; Ventura, Dora Fix; Silveira, Luiz Carlos de Lima
2016-01-01
We have used the Farnsworth-Munsell 100-hue (FM 100) test and Mollon-Reffin (MR) test to evaluate the colour vision of 93 subjects, 30.4 ± 9.7 years old, who had red-green congenital colour vision deficiencies. All subjects lived in Belém (State of Pará, Brazil) and were selected by the State of Pará Traffic Department. Selection criteria comprised the absence of visual dysfunctions other than Daltonism and no history of systemic diseases that could impair the visual system performance. Results from colour vision deficient subjects were compared with those from 127 normal trichromats, 29.3 ± 10.3 years old. For the MR test, measurements were taken around five points of the CIE 1976 colour space, along 20 directions irradiating from each point, in order to determine with high resolution the corresponding colour discrimination ellipses (MacAdam ellipses). Three parameters were used to compare results obtained from different subjects: diameter of the circle with the same ellipse area, ratio between the ellipse's long and short axes, and ellipse long axis angle. For the FM 100 test, the parameters were: logarithm of the total number of mistakes and positions of mistakes in the FM diagram. Data were also simultaneously analysed in two or three dimensions as well as by using multidimensional cluster analysis. For the MR test, Mollon-Reffin Ellipse #3 (u' = 0.225, v' = 0.415) discriminated more efficiently than the other four ellipses between protans and deutans since it provided a larger angular difference in the colour space between protan and deutan confusion lines. The MR test was more sensitive than the FM 100 test. It separated individuals by dysfunctional groups with greater precision, provided a more sophisticated quantitative analysis, and its use is appropriate for a more refined evaluation of different phenotypes of red-green colour vision deficiencies.
NASA Astrophysics Data System (ADS)
Anchordoqui, Luis A.; Barger, Vernon; Weiler, Thomas J.
2018-03-01
We argue that if ultrahigh-energy (E ≳ 10^10 GeV) cosmic rays are heavy nuclei (as indicated by existing data), then the pointing of cosmic rays to their nearest extragalactic sources is expected for 10^10.6 ≲ E/GeV ≲ 10^11. This is because for a nucleus of charge Ze and baryon number A, the bending of the cosmic ray decreases as Z/E with rising energy, so that pointing to nearby sources becomes possible in this particular energy range. In addition, the maximum energy of acceleration capability of the sources grows linearly in Z, while the energy loss per distance traveled decreases with increasing A. Each of these two points tends to favor heavy nuclei at the highest energies. The traditional bi-dimensional analyses, which simultaneously reproduce Auger data on the spectrum and nuclear composition, may not be capable of incorporating the relative importance of all these phenomena. In this paper we propose a multi-dimensional reconstruction of the individual emission spectra (in E, direction, and cross-correlation with nearby putative sources) to study the hypothesis that primaries are heavy nuclei subject to GZK photo-disintegration, and to determine the nature of the extragalactic sources. More specifically, we propose to combine information on nuclear composition and arrival direction to associate a potential clustering of events with a 3-dimensional position in the sky. Actually, both the source distance and maximum emission energy can be obtained through a multi-parameter likelihood analysis to accommodate the observed nuclear composition of each individual event in the cluster. We show that one can track the level of GZK interactions on a statistical basis by comparing the maximum energy at the source of each cluster. We also show that nucleus-emitting sources exhibit a cepa stratis (onion-like) structure on Earth which could be peeled off by future space missions, such as POEMMA. Finally, we demonstrate that metal-rich starburst galaxies are highly plausible candidate sources, and we use them as an explicit example of our proposed multi-dimensional analysis.
Multidimensional poverty, household environment and short-term morbidity in India.
Dehury, Bidyadhar; Mohanty, Sanjay K
2017-01-01
Using the unit data from the second round of the Indian Human Development Survey (IHDS-II), 2011-2012, which covered 42,152 households, this paper examines the association between multidimensional poverty, household environmental deprivation and short-term morbidities (fever, cough and diarrhoea) in India. Poverty is measured in a multidimensional framework that includes the dimensions of education, health and income, while household environmental deprivation is defined as lack of access to improved sanitation, drinking water and cooking fuel. A composite index combining multidimensional poverty and household environmental deprivation has been computed, and households are classified as follows: multidimensional poor and living in a poor household environment, multidimensional non-poor and living in a poor household environment, multidimensional poor and living in a good household environment, and multidimensional non-poor and living in a good household environment. Results suggest that about 23% of the population belonging to multidimensional poor households and living in a poor household environment had experienced short-term morbidities in a reference period of 30 days, compared to 20% of the population belonging to multidimensional non-poor households and living in a poor household environment, 19% of the population belonging to multidimensional poor households and living in a good household environment and 15% of the population belonging to multidimensional non-poor households and living in a good household environment. Controlling for socioeconomic covariates, the odds of short-term morbidity were 1.47 [CI 1.40-1.53] among the multidimensional poor and living in a poor household environment, 1.28 [CI 1.21-1.37] among the multidimensional non-poor and living in a poor household environment and 1.21 [CI 1.64-1.28] among the multidimensional poor and living in a good household environment, compared to the multidimensional non-poor and living in a good household environment. Results are robust across states and hold true for each of the three morbidities: fever, cough and diarrhoea. This establishes that, along with poverty, household environmental conditions have a significant bearing on short-term morbidities in India. Public investment in sanitation, drinking water and cooking fuel can reduce morbidity and improve the health of the population.
Clustering by reordering of similarity and Laplacian matrices: Application to galaxy clusters
NASA Astrophysics Data System (ADS)
Mahmoud, E.; Shoukry, A.; Takey, A.
2018-04-01
Similarity metrics, kernels and similarity-based algorithms have gained much attention due to their increasing applications in information retrieval, data mining, pattern recognition and machine learning. Similarity graphs are often adopted as the underlying representation of similarity matrices and are at the origin of known clustering algorithms such as spectral clustering. Similarity matrices offer the advantage of working in object-object (two-dimensional) space, where visualization of cluster similarities is available, instead of object-features (multi-dimensional) space. In this paper, sparse ɛ-similarity graphs are constructed and decomposed into strong components using appropriate methods such as the Dulmage-Mendelsohn permutation (DMperm) and/or Reverse Cuthill-McKee (RCM) algorithms. The obtained strong components correspond to groups (clusters) in the input (feature) space. The parameter ɛi is estimated locally, at each data point i, from a corresponding narrow range of the number of nearest neighbors. Although more advanced clustering techniques are available, our method has the advantages of simplicity, lower computational complexity and direct visualization of cluster similarities in a two-dimensional space. Also, no prior information about the number of clusters is needed. We conducted our experiments on two- and three-dimensional synthetic datasets of small and large size, as well as on a real astronomical dataset. The results are verified graphically and analyzed using gap statistics over a range of neighbors to verify the robustness of the algorithm and the stability of the results. Combining the proposed algorithm with gap statistics provides a promising tool for solving clustering problems. An astronomical application is conducted to confirm the existence of 45 galaxy clusters around the X-ray positions of galaxy clusters in the redshift range [0.1..0.8]. We re-estimate the photometric redshifts of the identified galaxy clusters and obtain acceptable values compared to published spectroscopic redshifts, with a 0.029 standard deviation of their differences.
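As a rough sketch of the pipeline described above (not the authors' implementation), the code below builds a sparse ɛ-similarity graph with a locally estimated ɛ_i taken from the k-nearest-neighbour distance, reorders it with Reverse Cuthill-McKee to expose block structure, and reads the clusters off the connected components; the dataset, k, and the local-ɛ rule are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components, reverse_cuthill_mckee

# Synthetic 2-D data with three groups (stand-in for the paper's datasets).
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=1)

# Estimate a local epsilon_i for each point from its k nearest neighbours
# (a simple stand-in for the paper's per-point estimation rule).
k = 7
dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
eps = dists[:, -1]                       # distance to the k-th neighbour

# Sparse epsilon-similarity graph: connect i and j if they fall within
# each other's local radius.
pair_d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
adj = (pair_d <= np.minimum(eps[:, None], eps[None, :])) & ~np.eye(len(X), dtype=bool)
graph = csr_matrix(adj.astype(int))

# RCM reordering exposes the block structure of the similarity matrix;
# connected components of the graph give the clusters.
perm = reverse_cuthill_mckee(graph, symmetric_mode=True)
reordered = graph[perm, :][:, perm]
n_clusters, labels = connected_components(graph, directed=False)
print("clusters found:", n_clusters, "| nonzeros in reordered matrix:", reordered.nnz)
```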
Statistical Downscaling in Multi-dimensional Wave Climate Forecast
NASA Astrophysics Data System (ADS)
Camus, P.; Méndez, F. J.; Medina, R.; Losada, I. J.; Cofiño, A. S.; Gutiérrez, J. M.
2009-04-01
Wave climate at a particular site is defined by the statistical distribution of sea state parameters, such as significant wave height, mean wave period, mean wave direction, wind velocity, wind direction and storm surge. Nowadays, long-term time series of these parameters are available from reanalysis databases obtained by numerical models. The Self-Organizing Map (SOM) technique is applied to characterize multi-dimensional wave climate, obtaining the relevant "wave types" spanning the historical variability. This technique summarizes the multidimensional wave climate in terms of a set of clusters projected onto a low-dimensional lattice with a spatial organization, providing Probability Density Functions (PDFs) on the lattice. On the other hand, wind and storm surge depend on the instantaneous local large-scale sea level pressure (SLP) fields, while waves depend on the recent history of these fields (say, 1 to 5 days). Thus, these variables are associated with large-scale atmospheric circulation patterns. In this work, a nearest-neighbors analog method is used to predict monthly multi-dimensional wave climate. This method establishes relationships between the large-scale atmospheric circulation patterns from numerical models (SLP fields as predictors) and local wave databases of observations (monthly wave climate SOM PDFs as predictands) to set up statistical models. A wave reanalysis database, developed by Puertos del Estado (Ministerio de Fomento), is considered as the historical time series of local variables. The simultaneous SLP fields calculated by the NCEP atmospheric reanalysis are used as predictors. Several applications with different sizes of the sea level pressure grid and different temporal resolutions are compared to obtain the optimal statistical model that best represents the monthly wave climate at a particular site. In this work we examine the potential skill of this downscaling approach considering perfect-model conditions, but we will also analyze the suitability of this methodology for seasonal forecasting and for long-term climate change scenario projections of wave climate.
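The nearest-neighbours analog step can be sketched as follows; the synthetic predictor/predictand arrays, the PCA compression of the SLP fields, and the choice of k analogs are assumptions for illustration, not the configuration used with the Puertos del Estado and NCEP data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic stand-ins: monthly SLP fields (predictors) and monthly wave-climate
# descriptors (predictands), e.g. SOM-cluster probabilities (names are assumptions).
n_months, n_gridpoints, n_wave_types = 240, 500, 25
slp = rng.standard_normal((n_months, n_gridpoints))
wave_pdf = rng.dirichlet(np.ones(n_wave_types), size=n_months)

# Compress the large-scale SLP fields before the analog search.
slp_pcs = PCA(n_components=10).fit_transform(slp)

# Nearest-neighbour analog method: for a new month, average the wave-climate
# PDFs of the k most similar historical SLP situations.
knn = NearestNeighbors(n_neighbors=5).fit(slp_pcs[:-12])
_, idx = knn.kneighbors(slp_pcs[-12:])           # "forecast" the last year
predicted_pdf = wave_pdf[:-12][idx].mean(axis=1)
print(predicted_pdf.shape)                        # (12, 25)
```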
Simulations of Flame Acceleration and Deflagration-to-Detonation Transitions in Methane-Air Systems
2010-03-17
are neglected. 3. Model parameter calibration The one-step Arrhenius kinetics used in this model cannot exactly reproduce all properties of laminar...with obstacles are compared to previously reported experimental data. The results obtained using the simple reaction model qualitatively, and in...have taken in developing a multidimensional numerical model to study explosions in large-scale systems containing mixtures of natural gas and air
Black-hole production at LHC: Special features, problems, and expectations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savina, M. V., E-mail: savina@cern.ch
2011-03-15
A brief survey of the present-day status of the problem of multidimensional-black-hole production at accelerators according to models featuring large extra dimensions is given. The respective production cross section and the Hawking temperature and decay rate are estimated versus model parameters. Possible flaws and assumptions whose accurate inclusion can reduce significantly the probability of black-hole production at accelerators in relation to earlier optimistic estimates are also discussed.
Multidimensional vocal assessment after laser treatment for recurrent respiratory papillomatosis.
Kono, Takeyuki; Yabe, Haruna; Uno, Kosuke; Saito, Koichiro; Ogawa, Kaoru
2017-03-01
Recurrent respiratory papillomatosis (RRP) is a benign epithelial tumor that exhibits a high frequency of recurrence. This study assesses the vocal function after laser treatment for RRP, particularly in relation to the frequency of surgery. Retrospective study. Thirty RRP patients who underwent laser surgery that controlled the tumor were included. Preoperative and postoperative Grade, Roughness, Breathiness, Asthenia, and Strain Scale, videostroboscopic findings, aerodynamic and acoustic parameters, and self-assessment questionnaires were measured and compared with an age- and sex-matched control group. Subsequently, to evaluate the association between postoperative voice quality and the number of surgeries, the patients were divided into three groups (group 1: single surgery, group 2: 2-5 surgeries, group 3: >6 surgeries), and comparative multidimensional vocal assessments were performed. The mean number of surgeries was 3.4 (range, 1-8). Although all patients exhibited poorer vocal function than the control group preoperatively, they showed improvement in postoperative subjective and objective parameters. However, four patients who underwent one surgery with relatively aggressive ablation exhibited vocal cord scarring and deteriorated objective parameters. All remaining patients showed voice quality that was on par with the control group. Subgroup analysis proved no association between post-therapeutic voice quality and the patient characteristics, including preoperative staging and the number of surgical treatments performed. RRP patients can achieve a close to normal voice with high satisfaction even after recurrent surgical treatment when ablation of a subepithelial lesion using sufficient laser energy is adequate. Level of Evidence: 3b. Laryngoscope, 127:679-684, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
Pranal, Thibaut; Pereira, Bruno; Berthelin, Pauline; Roszyk, Laurence; Chabanne, Russell; Eisenmann, Nathanael; Lautrette, Alexandre; Belville, Corinne; Blondonnet, Raiko; Gillart, Thierry; Skrzypczak, Yvan; Souweine, Bertrand; Bouvier, Damien; Constantin, Jean-Michel
2018-01-01
Rationale: Although soluble forms of the receptor for advanced glycation end products (RAGE) have been recently proposed as biomarkers in multiple acute or chronic diseases, few studies have evaluated the influence of usual clinical and biological parameters, or of patient characteristics and comorbidities, on circulating levels of soluble RAGE in the intensive care unit (ICU) setting. Objectives: To determine, among clinical and biological parameters that are usually recorded upon ICU admission, which variables, if any, could be associated with plasma levels of soluble RAGE. Methods: Data for this ancillary study were prospectively obtained from adult patients with at least one ARDS risk factor upon ICU admission enrolled in a large multicenter observational study. At ICU admission, plasma levels of total soluble RAGE (sRAGE) and endogenous secretory (es)RAGE were measured by duplicate ELISA, and baseline patient characteristics, comorbidities, and usual clinical and biological indices were recorded. After univariate analyses, significant variables were used in multivariate, multidimensional analyses. Measurements and Main Results: 294 patients were included in this ancillary study, among whom 62% were admitted for medical reasons, including septic shock (11%), coma (11%), and pneumonia (6%). Although some variables were associated with plasma levels of RAGE soluble forms in univariate analysis, multidimensional analyses showed no significant association between admission parameters and baseline plasma sRAGE or esRAGE. Conclusions: We found no obvious association between circulating levels of soluble RAGE and clinical and biological indices that are usually recorded upon ICU admission. This trial is registered with NCT02070536. PMID:29861796
Descriptive Characteristics of Surface Water Quality in Hong Kong by a Self-Organising Map
An, Yan; Zou, Zhihong; Li, Ranran
2016-01-01
In this study, principal component analysis (PCA) and a self-organising map (SOM) were used to analyse a complex dataset obtained from the river water monitoring stations in the Tolo Harbor and Channel Water Control Zone (Hong Kong), covering the period of 2009–2011. PCA was initially applied to identify the principal components (PCs) among the nonlinear and complex surface water quality parameters. SOM followed PCA, and was implemented to analyze the complex relationships and behaviors of the parameters. The results reveal that PCA reduced the multidimensional parameters to four significant PCs which are combinations of the original ones. The positive and inverse relationships of the parameters were shown explicitly by pattern analysis in the component planes. It was found that PCA and SOM are efficient tools to capture and analyze the behavior of multivariable, complex, and nonlinear related surface water quality data. PMID:26761018
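A compact sketch of the PCA-then-SOM workflow is given below; the data are synthetic stand-ins and the SOM is a deliberately small hand-rolled implementation (grid size, learning-rate and neighbourhood schedules are arbitrary choices), not the software used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in for the monitoring data: samples x water-quality parameters.
X = StandardScaler().fit_transform(rng.standard_normal((500, 12)))

# Step 1: PCA to identify the dominant components of the parameter set.
pcs = PCA(n_components=4).fit_transform(X)

# Step 2: a tiny self-organising map trained on the PC scores.
grid_h, grid_w = 6, 6
weights = rng.standard_normal((grid_h, grid_w, pcs.shape[1]))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)           # decaying learning rate
    sigma = 3.0 * (1 - epoch / 20) + 0.5  # decaying neighbourhood radius
    for x in pcs[rng.permutation(len(pcs))]:
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)      # best-matching unit
        h = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma**2))
        weights += lr * h[..., None] * (x - weights)       # neighbourhood update

print("trained SOM codebook shape:", weights.shape)
```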
Improving the Accessibility and Use of NASA Earth Science Data
NASA Technical Reports Server (NTRS)
Tisdale, Matthew; Tisdale, Brian
2015-01-01
Many of the NASA Langley Atmospheric Science Data Center (ASDC) Distributed Active Archive Center (DAAC) multidimensional tropospheric and atmospheric chemistry data products are stored in HDF4, HDF5 or NetCDF format, which traditionally have been difficult to analyze and visualize with geospatial tools. With the rising demand from the diverse end-user communities for geospatial tools to handle multidimensional products, several applications, such as ArcGIS, have refined their software. Many geospatial applications now have new functionalities that enable the end user to store, serve, and perform analysis on each individual variable, its time dimension, and its vertical dimension; to use NetCDF, GRIB, and HDF raster data formats across applications directly; and to publish output within REST image services or WMS for time- and space-enabled web application development. During this webinar, participants will learn how to leverage geospatial applications such as ArcGIS, OPeNDAP and ncWMS in the production of Earth science information, and in increasing data accessibility and usability.
Multidimensional scaling of D15 caps: color-vision defects among tobacco smokers?
Bimler, David; Kirkland, John
2004-01-01
Tobacco smoke contains a range of toxins including carbon monoxide and cyanide. With specialized cells and high metabolic demands, the optic nerve and retina are vulnerable to toxic exposure. We examined the possible effects of smoking on color vision: specifically, whether smokers perceive a different pattern of suprathreshold color dissimilarities from nonsmokers. It is already known that smokers differ in threshold color discrimination, with elevated scores on the Roth 28-Hue Desaturated panel test. Groups of smokers and nonsmokers, matched for sex and age, followed a triadic procedure to compare dissimilarities among 32 pigmented stimuli (the caps of the saturated and desaturated versions of the D15 panel test). Multidimensional scaling was applied to quantify individual variations in the salience of the axes of color space. Despite the briefness, simplicity, and "low-tech" nature of the procedure, subtle but statistically significant differences did emerge: on average the smoking group were significantly less sensitive to red-green differences. This is consistent with some form of injury to the optic nerve.
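For readers unfamiliar with the technique, the following sketch recovers a two-dimensional configuration from a dissimilarity matrix with metric multidimensional scaling; the dissimilarities are synthesized here purely for illustration rather than derived from triadic colour judgements.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Stand-in dissimilarity matrix among 32 colour caps (purely illustrative construction).
n_caps = 32
true_pos = rng.standard_normal((n_caps, 2))           # hypothetical 2-D colour space
diss = np.linalg.norm(true_pos[:, None] - true_pos[None, :], axis=-1)
diss += rng.normal(scale=0.05, size=diss.shape)       # judgement noise
diss = (diss + diss.T) / 2
np.fill_diagonal(diss, 0.0)

# Metric MDS recovers a spatial configuration from the dissimilarities;
# per-subject weights on the recovered axes could then be compared between groups.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
config = mds.fit_transform(diss)
print("configuration shape:", config.shape, "| stress:", round(mds.stress_, 3))
```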
Carbon, Claus-Christian
2010-07-01
Participants with personal and without personal experiences with the Earth as a sphere estimated large-scale distances between six cities located on different continents. Cognitive distances were submitted to a specific multidimensional scaling algorithm in the 3D Euclidean space with the constraint that all cities had to lie on the same sphere. A simulation was run that calculated respective 3D configurations of the city positions for a wide range of radii of the proposed sphere. People who had personally experienced the Earth as a sphere, at least once in their lifetime, showed a clear optimal solution of the multidimensional scaling (MDS) routine with a mean radius deviating only 8% from the actual radius of the Earth. In contrast, the calculated configurations for people without any personal experience with the Earth as a sphere were compatible with a cognitive concept of a flat Earth. 2010 Elsevier B.V. All rights reserved.
Michel, Pierre; Baumstarck, Karine; Ghattas, Badih; Pelletier, Jean; Loundou, Anderson; Boucekine, Mohamed; Auquier, Pascal; Boyer, Laurent
2016-04-01
The aim was to develop a multidimensional computerized adaptive short-form questionnaire, the MusiQoL-MCAT, from a fixed-length QoL questionnaire for multiple sclerosis. A total of 1992 patients were enrolled in this international cross-sectional study. The development of the MusiQoL-MCAT was based on the assessment of between-items MIRT model fit followed by real-data simulations. The MCAT algorithm was based on Bayesian maximum a posteriori estimation of latent traits and Kullback-Leibler information item selection. We examined several simulations based on a fixed number of items. Accuracy was assessed using correlations (r) between initial IRT scores and MCAT scores. Precision was assessed using the standard error measurement (SEM) and the root mean square error (RMSE). The multidimensional graded response model was used to estimate item parameters and IRT scores. Among the MCAT simulations, the 16-item version of the MusiQoL-MCAT was selected because the accuracy and precision became stable with 16 items with satisfactory levels (r ≥ 0.9, SEM ≤ 0.55, and RMSE ≤ 0.3). External validity of the MusiQoL-MCAT was satisfactory. The MusiQoL-MCAT presents satisfactory properties and can individually tailor QoL assessment to each patient, making it less burdensome to patients and better adapted for use in clinical practice.
Aur, Dorian; Vila-Rodriguez, Fidel
2017-01-01
Complexity measures for time series have been used in many applications to quantify the regularity of one-dimensional time series; however, many dynamical systems are spatially distributed multidimensional systems. We introduced Dynamic Cross-Entropy (DCE), a novel multidimensional complexity measure that quantifies the degree of regularity of EEG signals in selected frequency bands. Time series generated by discrete logistic equations with varying control parameter r are used to test DCE measures. Sliding-window DCE analyses are able to reveal specific period-doubling bifurcations that lead to chaos. A similar behavior can be observed in seizures triggered by electroconvulsive therapy (ECT). Sample entropy data show the level of signal complexity in different phases of the ictal ECT. The transition to irregular activity is preceded by the occurrence of cyclic regular behavior. A significant increase of DCE values in successive order from high frequencies in the gamma band to low frequencies in the delta band reveals several phase transitions into less ordered states, possibly chaos, in the human brain. To our knowledge there are no reliable techniques able to reveal the transition to chaos in the case of multidimensional time series. In addition, DCE based on sample entropy appears to be robust to EEG artifacts compared to DCE based on Shannon entropy. The applied technique may offer new approaches to better understand nonlinear brain activity. Copyright © 2016 Elsevier B.V. All rights reserved.
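Since the DCE measure builds on sample entropy, a minimal sample-entropy implementation is sketched below (band-pass filtering and the cross-entropy extension are omitted); the tolerance r and embedding dimension m follow common defaults and are assumptions here.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D series: -ln(A/B), where B counts template matches
    of length m and A matches of length m+1 (tolerance r, Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        # count pairs within tolerance, excluding self-matches on the diagonal
        return (np.sum(d <= r) - len(templates)) / 2

    B = count_matches(m)
    A = count_matches(m + 1)
    return np.inf if A == 0 or B == 0 else -np.log(A / B)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))
irregular = rng.standard_normal(1000)
print(sample_entropy(regular), sample_entropy(irregular))  # low vs. high complexity
```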
Multidimensional chromatography in food analysis.
Herrero, Miguel; Ibáñez, Elena; Cifuentes, Alejandro; Bernal, Jose
2009-10-23
In this work, the main developments and applications of multidimensional chromatographic techniques in food analysis are reviewed. Different aspects related to the existing couplings involving chromatographic techniques are examined. These couplings include multidimensional GC, multidimensional LC, multidimensional SFC as well as all their possible combinations. Main advantages and drawbacks of each coupling are critically discussed and their key applications in food analysis described.
Shift-Variant Multidimensional Systems.
1985-05-29
i = 0, 1, ..., N-1 in (3.1), one will get 0(), i = 0, 1, ..., N-1, which is nonnegative due to the Perron-Frobenius theorem [24]. That is, the A nonnegativity ...and the current input. The state-space model was extended in order to model 2-D discrete LSV systems with support on a causality cone. Subsequently...formulated as a special system of linear equations with nonnegative coefficients whose solution is required to satisfy constraints like nonnegativity in
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
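As a pointer to what a single-step explicit scheme with matched order in space and time looks like, here is a second-order Lax-Wendroff update for linear advection on a periodic domain; it is only a low-order illustration of the class, not one of the high-order or spectral-like algorithms presented in the report.

```python
import numpy as np

# Lax-Wendroff for u_t + a u_x = 0: a single-step explicit scheme with matched
# (second) order in space and time.
nx, a, cfl = 200, 1.0, 0.8
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / a
u0 = np.exp(-200 * (x - 0.5) ** 2)        # smooth initial pulse
u = u0.copy()

n_periods = 5
steps = int(round(n_periods / (a * dt)))   # domain length is 1, so period = 1/a
for _ in range(steps):
    up = np.roll(u, -1)                    # u_{i+1} (periodic domain)
    um = np.roll(u, 1)                     # u_{i-1}
    u = u - 0.5 * cfl * (up - um) + 0.5 * cfl**2 * (up - 2 * u + um)

print("max error after", n_periods, "periods:", np.abs(u - u0).max())
```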
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.
The report of these two hearings on high definition information systems begins by noting that they are digital, and that they are likely to handle computing, telecommunications, home security, computer imaging, storage, fiber optics networks, multi-dimensional libraries, and many other local, national, and international systems. (It is noted that…
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.; Nkonga, Boniface
2017-10-01
Just as the quality of a one-dimensional approximate Riemann solver is improved by the inclusion of internal sub-structure, the quality of a multidimensional Riemann solver is also similarly improved. Such multidimensional Riemann problems arise when multiple states come together at the vertex of a mesh. The interaction of the resulting one-dimensional Riemann problems gives rise to a strongly-interacting state. We wish to endow this strongly-interacting state with physically-motivated sub-structure. The fastest way of endowing such sub-structure consists of making a multidimensional extension of the HLLI Riemann solver for hyperbolic conservation laws. Presenting such a multidimensional analogue of the HLLI Riemann solver with linear sub-structure for use on structured meshes is the goal of this work. The multidimensional MuSIC Riemann solver documented here is universal in the sense that it can be applied to any hyperbolic conservation law. The multidimensional Riemann solver is made to be consistent with constraints that emerge naturally from the Galerkin projection of the self-similar states within the wave model. When the full eigenstructure in both directions is used in the present Riemann solver, it becomes a complete Riemann solver in a multidimensional sense. I.e., all the intermediate waves are represented in the multidimensional wave model. The work also presents, for the very first time, an important analysis of the dissipation characteristics of multidimensional Riemann solvers. The present Riemann solver results in the most efficient implementation of a multidimensional Riemann solver with sub-structure. Because it preserves stationary linearly degenerate waves, it might also help with well-balancing. Implementation-related details are presented in pointwise fashion for the one-dimensional HLLI Riemann solver as well as the multidimensional MuSIC Riemann solver.
Gao, Qiang; Dou, Lixiang; Belkacem, Abdelkader Nasreddine; Chen, Chao
2017-01-01
A novel hybrid brain-computer interface (BCI) based on the electroencephalogram (EEG) signal, which consists of a motor imagery- (MI-) based online interactive brain-controlled switch, "teeth clenching" state detector, and a steady-state visual evoked potential- (SSVEP-) based BCI, was proposed to provide multidimensional BCI control. The MI-based BCI was used as a single-pole double-throw brain switch (SPDTBS). By combining the SPDTBS with the 4-class SSVEP-based BCI, movement of a robotic arm was controlled in three-dimensional (3D) space. In addition, the muscle artifact (EMG) of the "teeth clenching" condition recorded in the EEG signal was detected and employed as an interrupter, which can initialize the state of the SPDTBS. A real-time writing task was implemented to verify the reliability of the proposed noninvasive hybrid EEG-EMG-BCI. Eight subjects participated in this study and succeeded in manipulating a robotic arm in 3D space to write some English letters. The mean decoding accuracy of the writing task was 0.93 ± 0.03. Four subjects achieved the optimal criterion of writing the word "HI", which is the minimum number of robotic arm movements (15 steps). The other subjects needed 2 to 4 additional steps to finish the whole process. These results suggest that our proposed hybrid noninvasive EEG-EMG-BCI was robust and efficient for real-time multidimensional robotic arm control.
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David
1987-01-01
The capability to rapidly produce visual representations of large, complex, multi-dimensional space and earth sciences data sets was developed by implementing computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.
Zanni, Martin Thomas; Damrauer, Niels H.
2010-07-20
A multidimensional spectrometer for the infrared, visible, and ultraviolet regions of the electromagnetic spectrum, and a method for making multidimensional spectroscopic measurements in the infrared, visible, and ultraviolet regions of the electromagnetic spectrum. The multidimensional spectrometer facilitates measurements of inter- and intra-molecular interactions.
Enabling Computational Nanotechnology through JavaGenes in a Cycle Scavenging Environment
NASA Technical Reports Server (NTRS)
Globus, Al; Menon, Madhu; Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)
2002-01-01
A genetic algorithm procedure is developed and implemented for fitting parameters of many-body inter-atomic force field functions for simulating nanotechnology atomistic applications, using portable Java on cycle-scavenged heterogeneous workstations. Given a physics-based analytic functional form for the force field, correlated parameters in a multi-dimensional environment are typically chosen to fit properties given by experiments and/or by higher-accuracy quantum mechanical simulations. The implementation automates this tedious procedure using an evolutionary computing algorithm operating on hundreds of cycle-scavenged computers. As a proof of concept, we demonstrate the procedure for evaluating the Stillinger-Weber (S-W) potential by (a) reproducing the published parameters for Si using S-W energies in the fitness function, and (b) evolving a "new" set of parameters using semi-empirical tight-binding energies in the fitness function. The "new" parameters are significantly better suited for Si cluster energies and forces as compared to even the published S-W potential.
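A toy version of the evolutionary fitting loop is sketched below. For brevity it fits the two parameters of a Lennard-Jones pair potential to synthetic reference energies rather than the multi-parameter Stillinger-Weber form, and the GA operators and settings are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference energies from a "higher-accuracy" method (here synthesised from a
# known Lennard-Jones potential, purely as a stand-in for tight-binding data).
r = np.linspace(0.9, 2.5, 40)
def lj(r, eps, sigma):
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
target = lj(r, eps=1.3, sigma=1.1)

def fitness(params):
    eps, sigma = params
    return -np.mean((lj(r, eps, sigma) - target) ** 2)    # higher is better

# A minimal generational GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform([0.1, 0.8], [3.0, 1.5], size=(60, 2))
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    new_pop = []
    for _ in range(len(pop)):
        i, j = rng.integers(len(pop), size=2)
        a = pop[i] if scores[i] > scores[j] else pop[j]    # tournament parent 1
        i, j = rng.integers(len(pop), size=2)
        b = pop[i] if scores[i] > scores[j] else pop[j]    # tournament parent 2
        w = rng.uniform(size=2)
        child = w * a + (1 - w) * b                        # blend crossover
        child += rng.normal(scale=0.02, size=2)            # mutation
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(p) for p in pop])]
print("fitted (epsilon, sigma):", best)
```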
McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J
2014-01-17
A new method for estimating the thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCP for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of thermodynamic parameters for a series of analytes is significantly reduced. This new method allows for precise predictions of retention time, with the average error being only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, having an average relative error of 0.37% for (1)tr and 2.1% for (2)tr. Copyright © 2013 Elsevier B.V. All rights reserved.
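The parameter-estimation idea can be sketched as follows: retention times under a linear temperature program are computed from a three-parameter ΔH/ΔS/ΔCp model of the retention factor, and the parameters are recovered with scipy's Nelder-Mead simplex. The hold-up time, phase ratio, ramp rates and "measured" data are synthetic assumptions, and the simple fixed-step integration stands in for the authors' retention model.

```python
import numpy as np
from scipy.optimize import minimize

R = 8.314
T0 = 373.15        # reference temperature, K
t_M = 60.0         # hold-up time, s (assumed constant for simplicity)
beta = 250.0       # phase ratio (assumption)

def retention_factor(T, dH, dS, dCp):
    # Three-parameter thermodynamic model for the retention factor k(T).
    dG = dH + dCp * (T - T0) - T * (dS + dCp * np.log(T / T0))
    return np.exp(-dG / (R * T)) / beta

def programmed_tr(params, T_start=323.15, rate=10 / 60.0, dt=0.5):
    # Retention time under a linear temperature program, stepping the
    # fractional migration dx = dt / (t_M * (1 + k(T))).
    dH, dS, dCp = params
    t, x = 0.0, 0.0
    while x < 1.0 and t < 3600.0:
        T = T_start + rate * t
        x += dt / (t_M * (1.0 + retention_factor(T, dH, dS, dCp)))
        t += dt
    return t

# Synthetic "measured" retention times from known parameters (stand-in for experiment).
true_params = (-60_000.0, -120.0, -80.0)
rates = [5 / 60.0, 10 / 60.0, 20 / 60.0]
measured = [programmed_tr(true_params, rate=r) for r in rates]

def objective(params):
    return sum((programmed_tr(params, rate=r) - m) ** 2 for r, m in zip(rates, measured))

fit = minimize(objective, x0=(-50_000.0, -100.0, -50.0), method="Nelder-Mead")
print("recovered parameters:", fit.x)
```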
Belianinov, Alex; Vasudevan, Rama K; Strelcov, Evgheni; ...
2015-05-13
The development of electron and scanning probe microscopies in the second half of the twentieth century has produced spectacular images of the internal structure and composition of matter with nanometer, molecular, and atomic resolution. Largely, this progress was enabled by computer-assisted methods of microscope operation, data acquisition and analysis. The progress in imaging technologies in the beginning of the twenty-first century has opened the proverbial floodgates of high-veracity information on structure and functionality. High-resolution imaging now allows information on atomic positions with picometer precision, allowing for quantitative measurements of individual bond lengths and angles. Functional imaging often leads to multidimensional data sets containing partial or full information on properties of interest, acquired as a function of multiple parameters (time, temperature, or other external stimuli). Here, we review several recent applications of big and deep data analysis methods to visualize, compress, and translate imaging data into physically and chemically relevant information.
Time-varying nonstationary multivariate risk analysis using a dynamic Bayesian copula
NASA Astrophysics Data System (ADS)
Sarhadi, Ali; Burn, Donald H.; Concepción Ausín, María.; Wiper, Michael P.
2016-03-01
A time-varying risk analysis is proposed for an adaptive design framework in nonstationary conditions arising from climate change. A Bayesian, dynamic conditional copula is developed for modeling the time-varying dependence structure between mixed continuous and discrete multiattributes of multidimensional hydrometeorological phenomena. Joint Bayesian inference is carried out to fit the marginals and copula in an illustrative example using an adaptive Gibbs Markov Chain Monte Carlo (MCMC) sampler. Posterior mean estimates and credible intervals are provided for the model parameters, and the Deviance Information Criterion (DIC) is used to select the model that best captures different forms of nonstationarity over time. This study also introduces a fully Bayesian, time-varying joint return period for multivariate time-dependent risk analysis in nonstationary environments. The results demonstrate that the nature and the risk of extreme-climate multidimensional processes change over time under the impact of climate change, and accordingly long-term decision-making strategies should be updated based on the anomalies of the nonstationary environment.
Agarwal, Rahul; Thakor, Nitish V; Sarma, Sridevi V; Massaquoi, Steve G
2015-06-24
The premotor cortex (PM) is known to be a site of visuo-somatosensory integration for the production of movement. We sought to better understand the ventral PM (PMv) by modeling its signal encoding in greater detail. Neuronal firing data were obtained from 110 PMv neurons in two male rhesus macaques executing four reach-grasp-manipulate tasks. We found that in the large majority of neurons (∼90%) the firing patterns across the four tasks could be explained by assuming that a high-dimensional position/configuration trajectory-like signal evolving ∼250 ms before movement was encoded within a multidimensional Gaussian field (MGF). Our findings are consistent with the possibility that PMv neurons process a visually specified reference command for the intended arm/hand position trajectory with respect to a proprioceptively or visually sensed initial configuration. The estimated MGFs were (hyper) disc-like, such that each neuron's firing modulated strongly only with commands that evolved along a single direction within position/configuration space. Thus, many neurons appeared to be tuned to slices of this input signal space that, as a collection, appeared to cover the space well. The MGF encoding models appear to be consistent with the arm-referent, bell-shaped, visual target tuning curves and target selectivity patterns observed in PMv visual-motor neurons. These findings suggest that PMv may implement a lookup table-like mechanism that helps translate an intended movement trajectory into time-varying patterns of activation in motor cortex and spinal cord. MGFs provide an improved nonlinear framework for potentially decoding visually specified, intended multijoint arm/hand trajectories well in advance of movement. Copyright © 2015 the authors.
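A minimal numerical illustration of a disc-like multidimensional Gaussian field (MGF) tuning model is given below; the dimensionality, widths, gains and preferred direction are invented for illustration and are not estimates from the recorded PMv data.

```python
import numpy as np

rng = np.random.default_rng(0)

# The firing rate of a model neuron is a Gaussian function of a high-dimensional
# arm/hand configuration-trajectory signal x, narrow along one sensitive
# direction and broad along the others (a "disc-like" field).
dim = 8                                   # dimensionality of the configuration signal
mu = rng.standard_normal(dim)             # preferred configuration
u = rng.standard_normal(dim)
u /= np.linalg.norm(u)                    # single sensitive direction
width_along, width_across = 0.5, 5.0      # narrow along u, broad elsewhere

def firing_rate(x, baseline=2.0, gain=40.0):
    d = x - mu
    along = d @ u
    across_sq = d @ d - along**2
    return baseline + gain * np.exp(-0.5 * (along**2 / width_along**2
                                            + across_sq / width_across**2))

# In an encoding analysis, the rate would be evaluated on a trajectory signal
# leading the movement by ~250 ms.
x = mu + 0.3 * u + 0.1 * rng.standard_normal(dim)
print(round(firing_rate(x), 2), "spikes/s (illustrative)")
```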
System theory as applied differential geometry. [linear system
NASA Technical Reports Server (NTRS)
Hermann, R.
1979-01-01
The invariants of input-output systems under the action of the feedback group were examined. The approach used the theory of Lie groups and concepts of modern differential geometry, and illustrated how the latter provides a basis for the discussion of the analytic structure of systems. Finite-dimensional linear systems in a single independent variable are considered. Lessons for more general situations (e.g., distributed parameter and multidimensional systems), which are increasingly encountered as technology advances, are presented.
Collision Dynamics of O(3P) + DMMP Using a Specific Reaction Parameters Potential Form
2012-01-27
SRP POTENTIAL: Although there have been great strides in developing accurate, multidimensional, global potential energy surfaces,[18-24] molecules with...
Evaluating Cellular Polyfunctionality with a Novel Polyfunctionality Index
Larsen, Martin; Sauce, Delphine; Arnaud, Laurent; Fastenackels, Solène; Appay, Victor; Gorochov, Guy
2012-01-01
Functional evaluation of naturally occurring or vaccination-induced T cell responses in mice, men and monkeys has in recent years advanced from single-parameter (e.g. IFN-γ-secretion) to much more complex multidimensional measurements. Co-secretion of multiple functional molecules (such as cytokines and chemokines) at the single-cell level is now measurable due primarily to major advances in multiparametric flow cytometry. The very extensive and complex datasets generated by this technology raise the demand for proper analytical tools that enable the analysis of combinatorial functional properties of T cells, hence polyfunctionality. Presently, multidimensional functional measures are analysed either by evaluating all combinations of parameters individually or by summing frequencies of combinations that include the same number of simultaneous functions. Often these evaluations are visualized as pie charts. Whereas pie charts effectively represent and compare average polyfunctionality profiles of particular T cell subsets or patient groups, they do not document the degree or variation of polyfunctionality within a group, nor do they allow more sophisticated statistical analysis. Here we propose a novel polyfunctionality index that numerically evaluates the degree and variation of polyfunctionality, and enables comparative and correlative parametric and non-parametric statistical tests. Moreover, it allows the usage of more advanced statistical approaches, such as cluster analysis. We believe that the polyfunctionality index will render polyfunctionality an appropriate end-point measure in future studies of T cell responsiveness. PMID:22860124
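The abstract does not spell out the index's formula, so the sketch below uses one plausible weighted-frequency form (frequencies of cells co-expressing i of n functions weighted by (i/n)^q); treat the exact expression as an assumption rather than the published definition.

```python
import numpy as np

def polyfunctionality_index(freqs, q=1.0):
    """One plausible form of a polyfunctionality index (an assumption, not
    necessarily the exact published definition): frequencies of cells
    expressing i = 0..n functions are weighted by (i/n)**q and summed."""
    freqs = np.asarray(freqs, dtype=float)
    n = len(freqs) - 1                    # freqs[i] = fraction of cells with i functions
    weights = (np.arange(n + 1) / n) ** q
    return float(np.sum(freqs * weights))

# Example: two donors with identical total response but different co-expression profiles.
donor_a = [0.90, 0.06, 0.03, 0.01, 0.00]  # mostly monofunctional responding cells
donor_b = [0.90, 0.01, 0.03, 0.03, 0.03]  # more polyfunctional responding cells
print(polyfunctionality_index(donor_a), polyfunctionality_index(donor_b))
```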
NASA Astrophysics Data System (ADS)
Dündar, Furkan Semih
2018-01-01
We provide a theory of n-scales, previously called n-dimensional time scales. In previous approaches to the theory of time scales, multi-dimensional scales were taken as the product space of two time scales [1, 2]. n-scales make the mathematical structure more flexible and appropriate for real-world applications in physics and related fields. Here we define an n-scale as an arbitrary closed subset of ℝⁿ. Modified forward and backward jump operators, Δ-derivatives and Δ-integrals on n-scales are defined.
The nonconvex multi-dimensional Riemann problem for Hamilton-Jacobi equations
NASA Technical Reports Server (NTRS)
Osher, Stanley
1989-01-01
Simple inequalities for the Riemann problem for a Hamilton-Jacobi equation in N space dimensions when neither the initial data nor the Hamiltonian need be convex (or concave) are presented. The initial data is globally continuous, affine in each orthant, with a possible jump in normal derivative across each coordinate plane, x_i = 0. The inequalities become equalities wherever a maxmin equals a minmax, and thus an exact closed-form solution to this problem is then obtained.
Spherical Panoramas for Astrophysical Data Visualization
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2017-05-01
Data immersion has advantages in astrophysical visualization. Complex multi-dimensional data and phase spaces can be explored in a seamless and interactive viewing environment. Putting the user in the data is a first step toward immersive data analysis. We present a technique for creating 360° spherical panoramas with astrophysical data. The three-dimensional software package Blender and the Google Spatial Media module are used together to immerse users in data exploration. Several examples employing these methods exhibit how the technique works using different types of astronomical data.
Displays, instruments, and the multi-dimensional world of cartography
NASA Technical Reports Server (NTRS)
Mccleary, George F., Jr.
1989-01-01
Cartographers are creators and purveyors of maps. Maps are representations of space, geographical images of the environment. Maps organize spatial information for convenience, particularly for use in performing tasks which involve the environment. There are many different kinds of maps, and there are as many different uses of maps as there are spatial problems to be solved. Maps and the display instrument dichotomy are examined. Also examined are the categories of map use along with the characteristics of maps.
Klonner, Günther; Fischer, Stefan; Essl, Franz; Dullinger, Stefan
2016-01-01
The search for traits that make alien species invasive has mostly concentrated on comparing successful invaders and different comparison groups with respect to average trait values. By contrast, little attention has been paid to trait variability among invaders. Here, we combine an analysis of trait differences between invasive and non-invasive species with a comparison of multidimensional trait variability within these two species groups. We collected data on biological and distributional traits for 1402 species of the native, non-woody vascular plant flora of Austria. We then compared the subsets of species recorded and not recorded as invasive aliens anywhere in the world, respectively, first, with respect to the sampled traits using univariate and multiple regression models; and, second, with respect to their multidimensional trait diversity by calculating functional richness and dispersion metrics. Attributes related to competitiveness (strategy type, nitrogen indicator value), habitat use (agricultural and ruderal habitats, occurrence under the montane belt), and propagule pressure (frequency) were most closely associated with invasiveness. However, even the best multiple model, including interactions, only explained a moderate fraction of the differences in invasive success. In addition, multidimensional variability in trait space was even larger among invasive than among non-invasive species. This pronounced variability suggests that invasive success has a considerable idiosyncratic component and is probably highly context specific. We conclude that basing risk assessment protocols on species trait profiles will probably face hardly reducible uncertainties.
High resolution 4-D spectroscopy with sparse concentric shell sampling and FFT-CLEAN.
Coggins, Brian E; Zhou, Pei
2008-12-01
Recent efforts to reduce the measurement time for multidimensional NMR experiments have fostered the development of a variety of new procedures for sampling and data processing. We recently described concentric ring sampling for 3-D NMR experiments, which is superior to radial sampling as input for processing by a multidimensional discrete Fourier transform. Here, we report the extension of this approach to 4-D spectroscopy as Randomized Concentric Shell Sampling (RCSS), where sampling points for the indirect dimensions are positioned on concentric shells, and where random rotations in the angular space are used to avoid coherent artifacts. With simulations, we show that RCSS produces a very low level of artifacts, even with a very limited number of sampling points. The RCSS sampling patterns can be adapted to fine rectangular grids to permit use of the Fast Fourier Transform in data processing, without an apparent increase in the artifact level. These artifacts can be further reduced to the noise level using the iterative CLEAN algorithm developed in radioastronomy. We demonstrate these methods on the high resolution 4-D HCCH-TOCSY spectrum of protein G's B1 domain, using only 1.2% of the sampling that would be needed conventionally for this resolution. The use of a multidimensional FFT instead of the slow DFT for initial data processing and for subsequent CLEAN significantly reduces the calculation time, yielding an artifact level that is on par with the level of the true spectral noise.
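A rough sketch of generating randomized concentric-shell sampling points for the three indirect dimensions, with a random rotation per shell and snapping to a rectangular grid for FFT processing, is shown below; the shell spacing, per-shell point counts and grid pitch are illustrative choices, not the schedule used for the HCCH-TOCSY experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation_3d():
    # Random 3-D rotation-like matrix via QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    return q * np.sign(np.diag(r))

def rcss_points(n_shells=16, points_per_shell=48, max_radius=1.0):
    # Points placed near-uniformly on concentric shells in the three indirect
    # dimensions, with an independent random rotation per shell to avoid
    # coherent artifacts.
    pts = []
    for s in range(1, n_shells + 1):
        radius = max_radius * s / n_shells
        i = np.arange(points_per_shell)
        z = 1 - 2 * (i + 0.5) / points_per_shell        # Fibonacci-style spiral on a sphere
        phi = i * np.pi * (3 - np.sqrt(5))
        xy = np.sqrt(1 - z**2)
        shell = np.stack([xy * np.cos(phi), xy * np.sin(phi), z], axis=1)
        pts.append(radius * shell @ random_rotation_3d().T)
    return np.vstack(pts)

grid = rcss_points()
# Snap to a fine rectangular grid so an FFT can be used downstream.
snapped = np.round(grid * 64) / 64
print(grid.shape, "sampling points on", len(set(map(tuple, snapped))), "grid nodes")
```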
A multidimensional anisotropic strength criterion based on Kelvin modes
NASA Astrophysics Data System (ADS)
Arramon, Yves Pierre
A new theory for the prediction of multiaxial strength of anisotropic elastic materials was proposed by Biegler and Mehrabadi (1993). This theory is based on the premise that the total elastic strain energy of an anisotropic material subjected to multiaxial stress can be decomposed into dilatational and deviatoric modes. A multidimensional strength criterion may thus be formulated by postulating that failure would occur when the energy stored in one of these modes has reached a critical value. However, the logic employed by these authors to formulate a failure criterion based on this theory could not be extended to multiaxial stress. In this thesis, an alternate criterion is presented which redresses the biaxial restriction by reformulating the surfaces of constant modal energy as surfaces of constant eigenstress magnitude. The resulting failure envelope, in a multidimensional stress space, is piecewise smooth. Each facet of the envelope is expected to represent the locus of failure data governed by a particular Kelvin mode. It is further shown that the Kelvin mode theory alone provides an incomplete description of the failure of some materials, but that this weakness can be addressed by the introduction of a set of complementary modes. A revised theory which combines both Kelvin and complementary modes is thus proposed and applied to seven example materials: an isotropic concrete, tetragonal paperboard, two orthotropic softwoods, two orthotropic hardwoods and an orthotropic cortical bone. The resulting failure envelopes for these examples were plotted and, with the exception of concrete, shown to produce intuitively correct failure predictions.
SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data
NASA Astrophysics Data System (ADS)
Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.
2015-12-01
Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework that extends Apache Spark for scaling scientific computations. Apache Spark improves the map-reduce implementation in Apache Hadoop for parallel computing on a cluster by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, and not for highly dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show the usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries. We evaluate the performance of the various matrix libraries in distributed pipelines, such as Nd4j and Breeze. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, and parallel ingest and partitioning (sharding) of A-Train satellite observations from model grids. These solutions are encompassed in SciSpark, an open-source software framework for distributed computing on scientific data.
ERIC Educational Resources Information Center
Chen, Ping
2017-01-01
Calibration of new items online has been an important topic in item replenishment for multidimensional computerized adaptive testing (MCAT). Several online calibration methods have been proposed for MCAT, such as multidimensional "one expectation-maximization (EM) cycle" (M-OEM) and multidimensional "multiple EM cycles"…