Free-form geometric modeling by integrating parametric and implicit PDEs.
Du, Haixia; Qin, Hong
2007-01-01
Parametric PDE techniques, which use partial differential equations (PDEs) defined over a 2D or 3D parametric domain to model graphical objects and processes, can unify geometric attributes and functional constraints of the models. PDEs can also model implicit shapes defined by level sets of scalar intensity fields. In this paper, we present an approach that integrates parametric and implicit trivariate PDEs to define geometric solid models containing both geometric information and intensity distribution subject to flexible boundary conditions. The integrated formulation of second-order or fourth-order elliptic PDEs permits designers to manipulate PDE objects of complex geometry and/or arbitrary topology through direct sculpting and free-form modeling. We developed a PDE-based geometric modeling system for shape design and manipulation of PDE objects. The integration of implicit PDEs with parametric geometry offers more general and arbitrary shape blending and free-form modeling for objects with intensity attributes than pure geometric models.
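For orientation, the elliptic operators referred to above take the following generic form over a trivariate parametric domain (a minimal sketch; the paper's exact operators and boundary conditions may differ):

    \Delta X(u,v,w) = 0
    \qquad\text{or}\qquad
    \Delta^{2} X(u,v,w) = 0,
    \qquad
    \Delta = \frac{\partial^{2}}{\partial u^{2}}
           + \frac{\partial^{2}}{\partial v^{2}}
           + \frac{\partial^{2}}{\partial w^{2}},
    \qquad
    X\big|_{\partial\Omega} = g,
    \quad
    \frac{\partial X}{\partial n}\Big|_{\partial\Omega} = h,

where X(u,v,w) can be either a position vector (geometry) or a scalar intensity field, and g, h encode the flexible boundary conditions.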
BIM from Laser Scans… Not Just for Buildings: NURBS-Based Parametric Modeling of a Medieval Bridge
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Banfi, F.; Brumana, R.; Previtali, M.; Roncoroni, F.
2016-06-01
Building Information Modelling is not limited to buildings: BIM technology covers civil infrastructures such as roads, dams, bridges, communications networks, water and wastewater networks, and tunnels. This paper describes a novel methodology for the generation of a detailed BIM of a complex medieval bridge. The use of laser scans and images, coupled with the development of algorithms able to handle irregular shapes, allowed the creation of advanced parametric objects, which were assembled to obtain an accurate BIM. The lack of existing object libraries required the development of specific families for the different structural elements of the bridge. Finally, some applications aimed at assessing the stability and safety of the bridge are illustrated and discussed. The BIM of the bridge can incorporate this information, moving towards a new "BIMonitoring" concept that preserves the geometric complexity provided by point clouds while delivering a detailed BIM with object relationships and attributes.
Parametric boundary reconstruction algorithm for industrial CT metrology application.
Yin, Zhye; Khare, Kedar; De Man, Bruno
2009-01-01
High-energy X-ray computed tomography (CT) systems have recently been used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching the accuracy achieved with a coordinate measuring machine (CMM), the conventional approach to acquiring metrology information directly. CT systems, however, generate a sinogram that is mathematically transformed into pixel-based images, and the dimensional information of the scanned object is extracted afterwards by performing edge detection on the reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation of CT images, since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process, and the object boundaries resulting from edge detection fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm to reconstruct the boundaries of an object with uniform material composition and uniform density is presented. There are three major benefits in the proposed approach. First, since the boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced; the iterative approach, which can be computationally intensive, becomes practical with the parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by the boundary parameters instead of the image pixels; by eliminating the extra edge detection step, the overall dimensional accuracy and process time can be improved. Third, since the parametric reconstruction approach shares the boundary representation with other conventional metrology modalities such as CMM, boundary information from other modalities can be directly incorporated as prior knowledge to improve the convergence of an iterative approach. In this paper, the feasibility of the parametric boundary reconstruction algorithm is demonstrated with both simple and complex simulated objects. Finally, the proposed algorithm is applied to experimental industrial CT system data.
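To make the approach concrete, here is a minimal sketch, under our own assumptions (a uniform-density disc and parallel-beam geometry; not the paper's algorithm), of fitting boundary parameters directly to a sinogram instead of reconstructing pixels and detecting edges:

    # Hedged sketch: fit the boundary parameters of a uniform-density disc
    # directly to a parallel-beam sinogram via nonlinear least squares.
    import numpy as np
    from scipy.optimize import least_squares

    def forward_projection(params, thetas, s):
        cx, cy, r, rho = params      # centre, radius, density of the disc
        # Signed distance from the disc centre to each (theta, s) ray.
        d = cx * np.cos(thetas)[:, None] + cy * np.sin(thetas)[:, None] - s[None, :]
        chord = 2.0 * np.sqrt(np.maximum(r**2 - d**2, 0.0))
        return rho * chord           # line integral through the disc

    def fit_boundary(sinogram, thetas, s, x0=(0.0, 0.0, 1.0, 1.0)):
        resid = lambda p: (forward_projection(p, thetas, s) - sinogram).ravel()
        return least_squares(resid, x0).x

    # Usage: simulate a disc, then recover its four boundary parameters.
    thetas = np.linspace(0.0, np.pi, 90, endpoint=False)
    s = np.linspace(-2.0, 2.0, 128)
    sino = forward_projection((0.3, -0.1, 0.8, 1.2), thetas, s)
    print(fit_boundary(sino, thetas, s))   # ~ [0.3, -0.1, 0.8, 1.2]

A spline boundary with more parameters would replace the disc in the same residual; prior boundary estimates (e.g., from a CMM) would simply seed x0.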
Sorting of Streptomyces Cell Pellets Using a Complex Object Parametric Analyzer and Sorter
Petrus, Marloes L. C.; van Veluw, G. Jerre; Wösten, Han A. B.; Claessen, Dennis
2014-01-01
Streptomycetes are filamentous soil bacteria that are used in industry for the production of enzymes and antibiotics. When grown in bioreactors, these organisms form networks of interconnected hyphae, known as pellets, which are heterogeneous in size. Here we describe a method to analyze and sort mycelial pellets using a Complex Object Parametric Analyzer and Sorter (COPAS). Detailed instructions are given for the use of the instrument and the basic statistical analysis of the data. We furthermore describe how pellets can be sorted according to user-defined settings, which enables downstream processing such as the analysis of the RNA or protein content. Using this methodology the mechanism underlying heterogeneous growth can be tackled. This will be instrumental for improving streptomycetes as a cell factory, considering the fact that productivity correlates with pellet size. PMID:24561666
Parametric design and gridding through relational geometry
NASA Technical Reports Server (NTRS)
Letcher, John S., Jr.; Shook, D. Michael
1995-01-01
Relational Geometric Synthesis (RGS) is a new logical framework for building up precise definitions of complex geometric models from points, curves, surfaces and solids. RGS achieves unprecedented design flexibility by supporting a rich variety of useful curve and surface entities. During the design process, many qualitative and quantitative relationships between elementary objects may be captured and retained in a data structure equivalent to a directed graph, such that they can be utilized for automatically updating the complete model geometry following changes in the shape or location of an underlying object. Capture of relationships enables many new possibilities for parametric variations and optimization. Examples are given of panelization applications for submarines, sailing yachts, offshore structures, and propellers.
A unified framework for weighted parametric multiple test procedures.
Xi, Dong; Glimm, Ekkehard; Maurer, Willi; Bretz, Frank
2017-09-01
We describe a general framework for weighted parametric multiple test procedures based on the closure principle. We utilize general weighting strategies that can reflect complex study objectives and include many procedures in the literature as special cases. The proposed weighted parametric tests bridge the gap between rejection rules using either adjusted significance levels or adjusted p-values. This connection is made by allowing intersection hypotheses of the underlying closed test procedure to be tested at a level smaller than α, which may also be necessary to take certain study situations into account. For such cases we introduce a subclass of exact α-level parametric tests that satisfy the consonance property. When the correlation is known only for certain subsets of the test statistics, a new procedure is proposed to fully utilize this knowledge within each subset. We illustrate the proposed weighted parametric tests using a clinical trial example and conduct a simulation study to investigate their operating characteristics.
Parametric embedding for class visualization.
Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B
2007-09-01
We propose a new method, parametric embedding (PE), that embeds objects together with their class structure into a low-dimensional visualization space. PE takes as input a set of class-conditional probabilities for given data points and tries to preserve this structure in the embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a Gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations, since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
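A minimal sketch of the PE objective, assuming unit-variance Gaussian components and uniform class priors (a generic optimizer stands in for whatever the authors used):

    # Hedged sketch of parametric embedding: minimize the sum over objects
    # of KL(p(c|n) || q(c|n)), where q comes from a Gaussian mixture with
    # equal (unit) covariances in the embedding space.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    def pe_objective(z, P, d):
        N, K = P.shape
        X, Phi = z[:N * d].reshape(N, d), z[N * d:].reshape(K, d)
        d2 = ((X[:, None, :] - Phi[None, :, :]) ** 2).sum(-1)         # (N, K)
        logQ = -0.5 * d2 - logsumexp(-0.5 * d2, axis=1, keepdims=True)
        return np.sum(P * (np.log(P + 1e-12) - logQ))                 # sum of KLs

    def embed(P, d=2, seed=0):
        N, K = P.shape
        z0 = np.random.default_rng(seed).normal(size=(N + K) * d)
        z = minimize(pe_objective, z0, args=(P, d), method="L-BFGS-B").x
        return z[:N * d].reshape(N, d), z[N * d:].reshape(K, d)       # objects, class centres

Note how the cost scales with N × K (the shape of d2), which is the advantage over pairwise methods mentioned above.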
Software for Managing Parametric Studies
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; McCann, Karen M.; DeVivo, Adrian
2003-01-01
The Information Power Grid Virtual Laboratory (ILab) is a Practical Extraction and Reporting Language (PERL) graphical-user-interface computer program that generates shell scripts to facilitate parametric studies performed on the Grid. (The Grid denotes a worldwide network of supercomputers used for scientific and engineering computations involving data sets too large to fit on desktop computers.) Heretofore, parametric studies on the Grid have been impeded by the need to create control-language scripts and edit input data files, painstaking tasks that are necessary for managing multiple jobs on multiple computers. ILab reflects an object-oriented approach to the automation of these tasks: all data and operations are organized into packages in order to accelerate development and debugging. A container or document object in ILab, called an experiment, contains all the information (data and file paths) necessary to define a complex series of repeated, sequenced, and/or branching processes. For convenience and to enable reuse, this object is serialized to and from disk storage. At run time, the current ILab experiment is used to generate required input files and shell scripts, create directories, copy data files, and then both initiate and monitor the execution of all computational processes.
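The experiment-container idea can be illustrated with a toy analogue, written here in Python rather than ILab's PERL; all names are hypothetical:

    # Toy analogue of an ILab "experiment": a serializable container that
    # expands a parameter grid into one shell script per case.
    import itertools, json, pathlib

    class Experiment:
        def __init__(self, command, parameters):
            self.command = command        # template, e.g. "solver --mach {mach} --aoa {aoa}"
            self.parameters = parameters  # {"mach": [...], "aoa": [...]}

        def save(self, path):
            pathlib.Path(path).write_text(json.dumps(self.__dict__))

        @classmethod
        def load(cls, path):
            return cls(**json.loads(pathlib.Path(path).read_text()))

        def generate_scripts(self, outdir):
            outdir = pathlib.Path(outdir)
            outdir.mkdir(exist_ok=True)
            keys = sorted(self.parameters)
            combos = itertools.product(*(self.parameters[k] for k in keys))
            for i, combo in enumerate(combos):
                case = dict(zip(keys, combo))
                script = "#!/bin/sh\n" + self.command.format(**case) + "\n"
                (outdir / f"case_{i:04d}.sh").write_text(script)

    exp = Experiment("solver --mach {mach} --aoa {aoa}",
                     {"mach": [0.6, 0.8], "aoa": [0, 2, 4]})
    exp.save("experiment.json")           # serialize to disk, as ILab does
    exp.generate_scripts("runs")          # writes six shell scripts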
Developing integrated parametric planning models for budgeting and managing complex projects
NASA Technical Reports Server (NTRS)
Etnyre, Vance A.; Black, Ken U.
1988-01-01
The applicability of integrated parametric models for the budgeting and management of complex projects is investigated. Methods for building a very flexible, interactive prototype for a project planning system, and the software resources available for this purpose, are discussed and evaluated. The prototype is required to be sensitive to changing objectives, changing target dates, changing cost relationships, and changing budget constraints. To achieve the integration of costs with project and task durations, parametric cost functions are defined by a process of trapezoidal segmentation, where the total cost for the project is the sum of the various project cost segments, and each project cost segment is the integral of a linearly segmented cost-loading function over a specific interval. The cost can thus be expressed algebraically. The prototype was designed using Lotus 1-2-3 as the primary software tool. This prototype implements a methodology for interactive project scheduling that provides a model of a system that meets most of the goals for the first phase of the study and some of the goals for the second phase.
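For example, a single segment whose cost-loading rate varies linearly from c1 at time t1 to c2 at time t2 integrates to the familiar trapezoid area, and the project cost is the sum of such segments:

    C_{\text{seg}}
    = \int_{t_1}^{t_2} \Big( c_1 + \frac{c_2 - c_1}{t_2 - t_1}\,(t - t_1) \Big)\, dt
    = \frac{(c_1 + c_2)(t_2 - t_1)}{2},
    \qquad
    C_{\text{total}} = \sum_{\text{segments}} C_{\text{seg}}.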
NASA Astrophysics Data System (ADS)
Delogu, A.; Furini, F.
1991-09-01
Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computational, and graphical techniques for calculating the scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency and of achieving RCS control with modest structural changes are becoming of paramount importance in stealth design. A computer code evaluating the RCS of arbitrarily shaped metallic objects generated by computer-aided design (CAD), and its validation with measurements carried out using ALENIA RCS test facilities, are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to contain the computer time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.
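For reference, the physical-optics approximation underlying such codes evaluates a radiation integral of the induced surface current over the lit portion of the target (generic far-field form for a perfect conductor; the notation is not taken from the paper):

    \mathbf{E}^{s}(\mathbf{r}) \approx
    \frac{j k \eta\, e^{-jkr}}{4 \pi r}\,
    \hat{\mathbf{r}} \times \Big( \hat{\mathbf{r}} \times
    \int_{S_{\text{lit}}} 2\, \hat{\mathbf{n}} \times \mathbf{H}^{i}(\mathbf{r}')\,
    e^{\, j k\, \hat{\mathbf{r}} \cdot \mathbf{r}'}\, dS' \Big),
    \qquad
    \sigma = \lim_{r \to \infty} 4 \pi r^{2}
    \frac{\lvert \mathbf{E}^{s} \rvert^{2}}{\lvert \mathbf{E}^{i} \rvert^{2}},

with the surface integral evaluated patch by patch over the bicubic-spline representation, which is where the error-controlled integration enters.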
Computations of Aerodynamic Performance Databases Using Output-Based Refinement
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2009-01-01
Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; and focus on aerodynamic databases for parametric and optimization studies, which require (1) accuracy: satisfy prescribed error bounds; (2) robustness and speed: possibly over 10^5 mesh generations; and (3) automation: avoid user supervision, obtain "expert meshes" independent of user skill, and run every case adaptively in production settings.
Revisiting Parametric Types and Virtual Classes
NASA Astrophysics Data System (ADS)
Madsen, Anders Bach; Ernst, Erik
This paper presents a conceptually oriented updated view on the relationship between parametric types and virtual classes. The traditional view is that parametric types excel at structurally oriented composition and decomposition, and virtual classes excel at specifying mutually recursive families of classes whose relationships are preserved in derived families. Conversely, while class families can be specified using a large number of F-bounded type parameters, this approach is complex and fragile; and it is difficult to use traditional virtual classes to specify object composition in a structural manner, because virtual classes are closely tied to nominal typing. This paper adds new insight about the dichotomy between these two approaches; it illustrates how virtual constraints and type refinements, as recently introduced in gbeta and Scala, enable structural treatment of virtual types; finally, it shows how a novel kind of dynamic type check can detect compatibility among entire families of classes.
Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing.
Ghesu, Florin C; Krubasik, Edward; Georgescu, Bogdan; Singh, Vivek; Zheng, Yefeng; Hornegger, Joachim; Comaniciu, Dorin
2016-05-01
Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow, from diagnosis and patient stratification to therapy planning, intervention, and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: the efficiency of scanning high-dimensional parametric spaces, and the need for representative image features, which currently demand significant manual engineering. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization: nine parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive number of scanning hypotheses (billions). The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, our system learns sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary. Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45.2% over the state of the art. To our knowledge, this is the first successful demonstration of the potential of DL for detection and segmentation in full 3D data with parametrized representations.
The neural basis of precise visual short-term memory for complex recognisable objects.
Veldsman, Michele; Mitchell, Daniel J; Cusack, Rhodri
2017-10-01
Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained compared to simple objects. It is not yet known if it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions supporting maintenance of simple objects. We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli, to investigate the impact of recognisability on VSTM. We adapted the widely-used change detection and continuous report paradigms for use with complex, photographic images. Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable complex objects. We therefore propose that a richer range of neural representations support VSTM for complex recognisable objects.
Gated frequency-resolved optical imaging with an optical parametric amplifier
Cameron, Stewart M.; Bliss, David E.; Kimmel, Mark W.; Neal, Daniel R.
1999-08-10
A system for detecting objects in a turbid medium utilizes an optical parametric amplifier as an amplifying gate for light received from the medium. An optical gating pulse from a second parametric amplifier permits the system to respond to and amplify only ballistic photons from the object in the medium.
NASA Astrophysics Data System (ADS)
Zhang, Chuan; Wang, Xingyuan; Luo, Chao; Li, Junqiu; Wang, Chunpeng
2018-03-01
In this paper, we focus on the robust outer synchronization problem between two nonlinear complex networks with parametric disturbances and mixed time-varying delays. First, a general complex network model is proposed; besides the nonlinear couplings, the network model can possess parametric disturbances, internal time-varying delay, discrete time-varying delay, and distributed time-varying delay. Then, based on a robust control strategy, linear matrix inequalities, and Lyapunov stability theory, several outer synchronization protocols are strictly derived. Simple linear matrix controllers are designed to drive the response network to synchronize with the drive network. Additionally, our results can be applied to complex networks without parametric disturbances. Finally, by utilizing the delayed Lorenz chaotic system as the dynamics of all nodes, simulation examples are given to demonstrate the effectiveness of our theoretical results.
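In generic notation (illustrative, not the paper's exact model), the drive-response setup behind such protocols reads:

    \dot{x}_i = f(x_i) + \sum_{j} c_{ij}\, \Gamma\, g\big(x_j(t - \tau(t))\big),
    \qquad
    \dot{y}_i = f(y_i) + \sum_{j} c_{ij}\, \Gamma\, g\big(y_j(t - \tau(t))\big) + u_i,
    \qquad
    u_i = -k_i \big(y_i - x_i\big),

where outer synchronization means the errors e_i = y_i - x_i vanish, and the gains k_i are chosen via linear matrix inequalities so that a Lyapunov-Krasovskii functional decreases along the error dynamics despite the parametric disturbances and delays.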
Fuel cell on-site integrated energy system parametric analysis of a residential complex
NASA Technical Reports Server (NTRS)
Simons, S. N.
1977-01-01
A parametric energy-use analysis was performed for a large apartment complex served by a fuel cell on-site integrated energy system (OS/IES). The variables parameterized include the operating characteristics of four phosphoric acid fuel cells, eight OS/IES energy recovery systems, and four climatic locations. Annual fuel consumption is presented for selected parametric combinations, and a breakeven economic analysis is given for one combination. The results show that fuel cell electrical efficiency and system component choice have the greatest effect on annual fuel consumption; fuel cell thermal efficiency and geographic location have less of an effect.
NASA Astrophysics Data System (ADS)
Bouaynaya, N.; Schonfeld, Dan
2005-03-01
Many real-world applications in computer vision and multimedia, such as augmented reality and environmental imaging, require an accurate elastic contour around a tracked object. In the first part of the paper we introduce a novel tracking algorithm that combines a motion estimation technique with the Bayesian importance sampling framework. We use Adaptive Block Matching (ABM) as the motion estimation technique and construct the proposal density from the estimated motion vector. The resulting algorithm requires a small number of particles for efficient tracking, and the tracking is adaptive to different categories of motion even with poor a priori knowledge of the system dynamics. In particular, off-line learning is not needed. A parametric representation of the object is used for tracking purposes. In the second part of the paper, we refine the tracking output from a parametric sample to an elastic contour around the object. We use a 1D active contour model based on a dynamic programming scheme to refine the output of the tracker. To improve the convergence of the active contour, we perform the optimization over a set of randomly perturbed initial conditions. Our experiments are applied to head tracking. We report promising tracking results in complex environments.
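A minimal sketch of the sampling step described above, with a caller-supplied motion vector standing in for the adaptive block matching estimate (names and noise model are our own assumptions):

    # One step of importance sampling with a motion-estimated proposal:
    # particles are shifted by the ABM motion vector, diffused, reweighted
    # by the observation likelihood, and resampled when degenerate.
    import numpy as np

    rng = np.random.default_rng(1)

    def track_step(particles, weights, motion_vector, likelihood, sigma=2.0):
        proposed = particles + motion_vector + rng.normal(0.0, sigma, particles.shape)
        w = weights * likelihood(proposed)       # importance weighting
        w /= w.sum()
        n_eff = 1.0 / np.sum(w ** 2)             # effective sample size
        if n_eff < 0.5 * len(w):                 # resample when degenerate
            idx = rng.choice(len(w), size=len(w), p=w)
            proposed, w = proposed[idx], np.full(len(w), 1.0 / len(w))
        return proposed, w

Because the proposal already tracks the estimated motion, far fewer particles are needed than with a blind random-walk proposal, which is the point made above.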
Current State of the Art Historic Building Information Modelling
NASA Astrophysics Data System (ADS)
Dore, C.; Murphy, M.
2017-08-01
In an extensive review of the existing literature, a number of observations were made in relation to current approaches for recording and modelling existing buildings and environments: data collection and pre-processing techniques are becoming increasingly automated, allowing near real-time data capture and fast processing for later modelling applications. Current BIM software is almost completely focused on new buildings and has very limited tools and pre-defined libraries for modelling existing and historic buildings. The development of reusable parametric library objects for existing and historic buildings supports modelling with high levels of detail while decreasing the modelling time. Mapping these parametric objects to survey data, however, is still a time-consuming task that requires further research. Promising developments have been made towards automatic object recognition and feature extraction from point clouds for as-built BIM. However, results are currently limited to simple and planar features. Further work is required for automatic, accurate, and reliable reconstruction of complex geometries from point cloud data. Procedural modelling can provide an automated solution for generating 3D geometries but lacks the detail and accuracy required for most as-built applications in the AEC and heritage fields.
Noise and analyzer-crystal angular position analysis for analyzer-based phase-contrast imaging
NASA Astrophysics Data System (ADS)
Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.
2014-04-01
The analyzer-based phase-contrast x-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about the absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile of the x-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions such as the source intensity (flux), the measurement angular positions, the object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions, and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this paper is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and that only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform sampling) are required to get the best parametric images. Further angular measurements only spread the total dose across the measurements without improving or worsening the CRLB, though the added measurements may improve parametric images by reducing estimation bias. Next, using the CRLB we evaluate the multiple-image radiography, diffraction enhanced imaging, and scatter diffraction enhanced imaging estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique.
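The bound in question is the standard Cramér-Rao inequality (generic form; the Fisher information is built from the ABI measurement model):

    \operatorname{Var}\big(\hat{\theta}_i\big) \;\ge\; \big[ \mathbf{I}^{-1}(\boldsymbol{\theta}) \big]_{ii},
    \qquad
    \big[ \mathbf{I}(\boldsymbol{\theta}) \big]_{ij}
    = -\, \mathbb{E}\!\left[
      \frac{\partial^{2} \ln p(\mathbf{y}; \boldsymbol{\theta})}
           {\partial \theta_i\, \partial \theta_j} \right],

where θ collects the absorption, refraction, and scattering parameters at a pixel and y the intensities measured at the analyzer-crystal angular positions.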
NASA Astrophysics Data System (ADS)
Dore, C.; Murphy, M.
2013-02-01
This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.
Bim Automation: Advanced Modeling Generative Process for Complex Structures
NASA Astrophysics Data System (ADS)
Banfi, F.; Fai, S.; Brumana, R.
2017-08-01
The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). The generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful for storing a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational B-Splines (NURBS) with multiple levels of detail (Mixed and Reverse LoD), based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of a modern structure, the courtyard of the West Block on Parliament Hill in Ottawa (Ontario), and the BIM of Castel Masegra in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.
Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.
Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben
2017-06-06
Modeling complex time-course patterns is a challenging issue in microarray studies due to the complex gene expression patterns arising in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing, and clustering heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis makes it a useful and efficient tool for analyzing microarray time-course data and for exploring the complex relationships in omics data when studying their association with disease and health.
Ray-tracing method for creeping waves on arbitrarily shaped nonuniform rational B-splines surfaces.
Chen, Xi; He, Si-Yuan; Yu, Ding-Feng; Yin, Hong-Cheng; Hu, Wei-Dong; Zhu, Guo-Qiang
2013-04-01
An accurate creeping ray-tracing algorithm is presented in this paper to determine the tracks of creeping waves (or creeping rays) on arbitrarily shaped free-form parametric surfaces [nonuniform rational B-splines (NURBS) surfaces]. The main challenge in calculating the surface diffracted fields on NURBS surfaces is the difficulty of determining the geodesic paths along which the creeping rays propagate. On one single parametric surface patch, the geodesic paths need to be computed by solving the geodesic equations numerically. Furthermore, realistic objects are generally modeled as the union of several connected NURBS patches, and due to the discontinuity of the parameter between the patches, it is more complicated to compute geodesic paths on several connected patches than on one single patch. Thus, a creeping ray-tracing algorithm is presented to compute the geodesic paths of creeping rays on complex objects modeled as combinations of several NURBS surface patches. In the algorithm, the creeping ray tracing on each surface patch is performed by solving the geodesic equations with a Runge-Kutta method. When the creeping ray propagates from one patch to another, a transition method is developed to handle the transition of the creeping ray tracing across the border between the patches. This creeping ray-tracing algorithm can meet practical requirements because it can be applied to objects with complex shapes, and it extends the applicability of NURBS for electromagnetic and optical applications. The validity and usefulness of the algorithm are verified by the numerical results.
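The numerical core is conventional: write the geodesic equations as a first-order system and integrate with a Runge-Kutta step. A minimal sketch (the christoffel callback is a placeholder for the symbols derived from the NURBS patch's first fundamental form):

    # Geodesic equations u'' = -Γ^u_ab a'b', v'' = -Γ^v_ab a'b' on a
    # parametric patch, integrated with classical fourth-order Runge-Kutta.
    import numpy as np

    def geodesic_rhs(y, christoffel):
        u, v, du, dv = y
        Guu, Guv, Gvv, Huu, Huv, Hvv = christoffel(u, v)  # Γ^u_.., then Γ^v_..
        ddu = -(Guu * du * du + 2.0 * Guv * du * dv + Gvv * dv * dv)
        ddv = -(Huu * du * du + 2.0 * Huv * du * dv + Hvv * dv * dv)
        return np.array([du, dv, ddu, ddv])

    def rk4_step(y, h, christoffel):
        k1 = geodesic_rhs(y, christoffel)
        k2 = geodesic_rhs(y + 0.5 * h * k1, christoffel)
        k3 = geodesic_rhs(y + 0.5 * h * k2, christoffel)
        k4 = geodesic_rhs(y + h * k3, christoffel)
        return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

The patch-to-patch transition described above amounts to stopping the integration at the parametric border and re-expressing (u, v, du, dv) in the neighbouring patch's coordinates before continuing.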
Quantitative estimation of source complexity in tsunami-source inversion
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Cummins, Phil R.; Hawkins, Rhys; Jakir Hossen, M.
2016-04-01
This work analyses tsunami waveforms to infer the spatiotemporal evolution of sea-surface displacement (the tsunami source) caused by earthquakes or other sources. Since the method considers sea-surface displacement directly, no assumptions about the fault or seafloor deformation are required. While this approach cannot address seismic aspects of rupture, it greatly simplifies the tsunami source estimation, making it much less dependent on subjective fault and deformation assumptions and yielding a more accurate sea-surface displacement evolution in the source region. The spatial discretization is by wavelet decomposition represented by a trans-dimensional (trans-D) Bayesian tree structure. Wavelet coefficients are sampled by a reversible-jump algorithm, and additional coefficients are included only when required by the data. Therefore, source complexity is consistent with data information (parsimonious), and the method can adapt locally in both time and space. Since the source complexity is unknown and adapts locally, no regularization is required, resulting in more meaningful displacement magnitudes. By estimating displacement uncertainties in a Bayesian framework we can study the effect of parametrization choice on the source estimate. Uncertainty arises from observation errors and from limitations in the parametrization's ability to fully explain the observations. As a result, parametrization choice is closely related to uncertainty estimation and profoundly affects inversion results; it should therefore be included in the inference process. Our inversion method is based on Bayesian model selection, which includes the choice of parametrization in the inference and makes it data driven. A trans-D model for the spatio-temporal discretization is applied here to include model selection naturally and efficiently in the inference by sampling probabilistically over parametrizations. The trans-D process results in better uncertainty estimates, since the parametrization adapts parsimoniously (in both time and space) to the local data resolving power, and the uncertainty about the parametrization choice is included in the uncertainty estimates. We apply the method to the tsunami waveforms recorded for the great 2011 Japan tsunami. All data are recorded on high-quality sensors (ocean-bottom pressure sensors, GPS gauges, and DART buoys). The sea-surface Green's functions are computed by JAGURS and include linear dispersion effects. By treating the noise level at each gauge as unknown, individual gauge contributions to the source estimate are appropriately and objectively weighted. The results show previously unreported detail of the source, quantify uncertainty spatially, and produce excellent data fits. The source estimate shows an elongated peak trench-ward of the hypocentre that closely follows the trench, indicating significant sea-floor deformation near the trench. Also notable is a bi-modal (negative-to-positive) displacement feature in the northern part of the source near the trench. The feature has ~2 m amplitude and is clearly resolved by the data with low uncertainties.
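In standard Bayesian model-selection notation (not the paper's), the trans-dimensional posterior sampled here is

    p(k, \mathbf{m}_k \mid \mathbf{d}) =
    \frac{ p(\mathbf{d} \mid k, \mathbf{m}_k)\, p(\mathbf{m}_k \mid k)\, p(k) }
         { \sum_{k'} \int p(\mathbf{d} \mid k', \mathbf{m}_{k'})\,
           p(\mathbf{m}_{k'} \mid k')\, p(k')\, d\mathbf{m}_{k'} },

where k indexes the wavelet-tree parametrization and m_k its coefficients; reversible-jump sampling explores (k, m_k) jointly, which is how the number of coefficients becomes data driven.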
Feedforward object-vision models only tolerate small image variations compared to human
Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2014-01-01
Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that the models perform similarly to humans in categorization tasks only under low-level image variations. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986
Bilateral Theta-Burst TMS to Influence Global Gestalt Perception
Ritzinger, Bernd; Huberle, Elisabeth; Karnath, Hans-Otto
2012-01-01
While early and higher visual areas along the ventral visual pathway in the inferotemporal cortex are critical for the recognition of individual objects, the neural representation of human perception of complex global visual scenes remains under debate. Stroke patients with a selective deficit in the perception of a complex global Gestalt with intact recognition of individual objects – a deficit termed simultanagnosia – greatly helped to study this question. Interestingly, simultanagnosia typically results from bilateral lesions of the temporo-parietal junction (TPJ). The present study aimed to verify the relevance of this area for human global Gestalt perception. We applied continuous theta-burst TMS either unilaterally (left or right) or bilaterally (simultaneously) over the TPJ. Healthy subjects were presented with hierarchically organized visual stimuli that allowed parametrical degrading of the object at the global level. Identification of the global Gestalt was significantly modulated only for the bilateral TPJ stimulation condition. Our results strengthen the view that global Gestalt perception in the human brain involves the TPJ and is co-dependent on both hemispheres. PMID:23110106
Parametric symmetries in exactly solvable real and PT symmetric complex potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadav, Rajesh Kumar; Khare, Avinash; Bagchi, Bijan
In this paper, we discuss the parametric symmetries in different exactly solvable systems characterized by real or complex PT symmetric potentials. We focus our attention on the conventional potentials, such as the generalized Pöschl-Teller (GPT), Scarf-I, and PT symmetric Scarf-II, which are invariant under certain parametric transformations. The resulting set of potentials is shown to yield a completely different behavior of the bound state solutions. Further, the supersymmetric partner potentials acquire different forms under such parametric transformations, leading to new sets of exactly solvable real and PT symmetric complex potentials. These potentials are also observed to be shape invariant (SI) in nature. We subsequently take up a study of the newly discovered rationally extended SI potentials, corresponding to the above mentioned conventional potentials, whose bound state solutions are associated with the exceptional orthogonal polynomials (EOPs). We discuss the transformations of the corresponding Casimir operator employing the properties of the so(2,1) algebra.
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2001-01-01
Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons, principal among them the difficulty of accurately modeling complex flows and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With a successful reduction in computation time, the issues of accuracy and the appropriateness of turbulence models will become more tractable.
On Parametrization of the Linear GL(4,C) and Unitary SU(4) Groups in Terms of Dirac Matrices
NASA Astrophysics Data System (ADS)
Red'Kov, Victor M.; Bogush, Andrei A.; Tokarevskaya, Natalia G.
2008-02-01
Parametrization of the 4 × 4 matrices G of the complex linear group GL(4,C) in terms of four complex 4-vector parameters (k,m,n,l) is investigated. Additional restrictions separating some subgroups of GL(4,C) are given explicitly. In the given parametrization, the problem of inverting any 4 × 4 matrix G is solved. An expression for the determinant of any matrix G is found: det G = F(k,m,n,l). The unitarity conditions G⁺ = G⁻¹ have been formulated in the form of non-linear cubic algebraic equations including complex conjugation. Several simplest solutions of these unitarity equations have been found: three 2-parametric subgroups G1, G2, G3, each consisting of two commuting Abelian unitary groups, and a 4-parametric unitary subgroup consisting of a product of a 3-parametric group isomorphic to SU(2) and a 1-parametric Abelian group. The Dirac basis of generators Λ_k, being of Gell-Mann type, substantially differs from the basis λ_i used in the literature on the SU(4) group; formulas relating them are found, and they permit the separation of the SU(3) subgroup in SU(4). A special way to list the 15 Dirac generators of GL(4,C) can be used, {Λ_k} = {μ_i ⊕ ν_j ⊕ (μ_i ⊗ ν_j = K ⊕ L ⊕ M)}, which permits the factorization of SU(4) transformations according to S = e^{iaμ} e^{ibν} e^{ikK} e^{ilL} e^{imM}, where the first two factors commute with each other and are isomorphic to the SU(2) group, while the three last ones are 3-parametric groups, each consisting of three Abelian commuting unitary subgroups. Besides, the structure of the fifteen Dirac matrices Λ_k permits the separation of twenty 3-parametric subgroups in SU(4) isomorphic to SU(2); those subgroups might be used as bigger elementary blocks in constructing a general SU(4) transformation. It is shown how one can specify the present approach for the pseudounitary groups SU(2,2) and SU(3,1).
Parametric Transformation Analysis
NASA Technical Reports Server (NTRS)
Gary, G. Allan
2003-01-01
Because twisted coronal features are important proxies for predicting solar eruptive events, and yet are not clearly understood, we present new results to resolve the complex, non-potential magnetic field configurations of active regions. This research uses free-form deformation mathematics to generate the associated coronal magnetic field. We use a parametric representation of the magnetic field lines such that the field lines can be manipulated to match the structure of EUV and SXR coronal loops. The objective is to derive sigmoidal magnetic field solutions that allow regions with plasma beta greater than 1 to be included, aligned and non-aligned electric currents to be calculated, and the Lorentz force to be determined. The advantage of our technique is that the solution is independent of the unknown upper and side boundary conditions, allows non-vanishing magnetic forces, and provides a global magnetic field solution which contains high- and low-beta regimes and is consistent with all the coronal images of the region. We show that the mathematical description is unique and physical.
Sparkle model for AM1 calculation of lanthanide complexes: improved parameters for europium.
Rocha, Gerd B; Freire, Ricardo O; Da Costa, Nivan B; De Sá, Gilberto F; Simas, Alfredo M
2004-04-05
In the present work, we sought to improve our sparkle model for the calculation of lanthanide complexes, SMLC, in various ways: (i) inclusion of the europium atomic mass, (ii) reparametrization of the model within AM1 from a new response function including all distances of the coordination polyhedron for tris(acetylacetonate)(1,10-phenanthroline)europium(III), (iii) implementation of the model in the software package MOPAC93r2, and (iv) inclusion of spherical Gaussian functions in the expression which computes the core-core repulsion energy. The parametrization results indicate that SMLC II is superior to the previous version of the model because the Gaussian functions proved essential for a better description of the geometries of the complexes. In order to validate our parametrization, we carried out calculations on 96 europium(III) complexes, selected from the Cambridge Structural Database 2003, and compared our predicted ground state geometries with the experimental ones. Our results show that this new parametrization of the SMLC model, with the inclusion of spherical Gaussian functions in the core-core repulsion energy, is better capable of predicting the Eu-ligand distances than the previous version. The unsigned mean error for all Eu-L interatomic distances in all 96 complexes, which for the original SMLC is 0.3564 Å, is lowered to 0.1993 Å when the model is parametrized with the inclusion of two Gaussian functions. Our results also indicate that this model is more applicable to europium complexes with beta-diketone ligands. As such, we conclude that this improved model can be considered a powerful tool for the study of lanthanide complexes and their applications, such as the modeling of light conversion molecular devices.
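For context, spherical Gaussian functions enter an AM1-type core-core repulsion term in the standard way (schematic form; in the sparkle model the lanthanide centre carries its own a, b, c parameters):

    E^{\text{core}}_{AB} =
    Z_A Z_B\, \langle s_A s_A \mid s_B s_B \rangle
    \big( 1 + e^{-\alpha_A R_{AB}} + e^{-\alpha_B R_{AB}} \big)
    + \frac{Z_A Z_B}{R_{AB}} \sum_{k}
      \Big( a_{kA}\, e^{-b_{kA} (R_{AB} - c_{kA})^{2}}
          + a_{kB}\, e^{-b_{kB} (R_{AB} - c_{kB})^{2}} \Big).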
Ghost imaging via optical parametric amplification
NASA Astrophysics Data System (ADS)
Li, Hong-Guo; Zhang, De-Jian; Xu, De-Qin; Zhao, Qiu-Li; Wang, Sen; Wang, Hai-Bo; Xiong, Jun; Wang, Kaige
2015-10-01
We investigate, theoretically and experimentally, thermal-light ghost imaging in which the light transmitted through the object serves as the seed light and is amplified by an optical parametric amplifier (OPA). In conventional lens imaging systems with an OPA, the spectral bandwidth of the OPA dominates the image resolution. Theoretically, we prove that in ghost imaging via optical parametric amplification (GIOPA) the bandwidth of the OPA does not affect the image resolution. The experimental results show that for weak seed light the image quality in GIOPA is better than that of conventional ghost imaging. Our work may be valuable in remote sensing with the ghost imaging technique, where the light that has passed through the object is weak after long-distance propagation.
Mitochondrial network complexity emerges from fission/fusion dynamics.
Zamponi, Nahuel; Zamponi, Emiliano; Cannas, Sergio A; Billoni, Orlando V; Helguera, Pablo R; Chialvo, Dante R
2018-01-10
Mitochondrial networks exhibit a variety of complex behaviors, including coordinated cell-wide oscillations of energy states as well as a phase transition (depolarization) in response to oxidative stress. Since functional and structural properties are often intertwined, here we characterized the structure of mitochondrial networks in mouse embryonic fibroblasts using network tools and percolation theory. Subsequently we perturbed the system either by promoting the fusion of mitochondrial segments or by inducing mitochondrial fission. Quantitative analysis of mitochondrial clusters revealed that the structural parameters of healthy mitochondria lay between the extremes of highly fragmented and completely fused networks. We confirmed our results by contrasting our empirical findings with the predictions of a recently described computational model of mitochondrial network emergence based on fission-fusion kinetics. Altogether these results offer not only an objective methodology to parametrize the complexity of this organelle but also support the idea that mitochondrial networks behave as critical systems and undergo structural phase transitions.
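A sketch of the kind of cluster statistics involved, with a random graph standing in for a segmented mitochondrial image (the graph, names, and thresholds are illustrative only):

    # Percolation-style summary of a mitochondrial network: nodes are
    # segments, edges are physical contacts, and the giant-cluster
    # fraction serves as an order parameter.
    import networkx as nx
    import numpy as np

    def network_stats(g):
        sizes = np.array(sorted((len(c) for c in nx.connected_components(g)),
                                reverse=True))
        n = g.number_of_nodes()
        return {
            "giant_fraction": sizes[0] / n,   # percolation order parameter
            "mean_cluster": sizes.mean(),
            "n_clusters": len(sizes),
            "mean_degree": 2.0 * g.number_of_edges() / n,
        }

    g = nx.erdos_renyi_graph(500, 3.0 / 500, seed=2)  # stand-in network
    print(network_stats(g))

Fission shifts such statistics towards many small clusters and fusion towards a single giant cluster; the healthy state reported above sits between the two extremes.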
Parametrization study of the land multiparameter VTI elastic waveform inversion
NASA Astrophysics Data System (ADS)
He, W.; Plessix, R.-É.; Singh, S.
2018-06-01
Multiparameter inversion of seismic data remains challenging due to the trade-offs between the different elastic parameters and the non-uniqueness of the solution. The sensitivity of the seismic data to a given subsurface elastic parameter depends on the source and receiver ray/wave path orientations at the subsurface point. In a high-frequency approximation, this is commonly analysed through the study of radiation patterns that indicate the sensitivity of each parameter versus the incoming (from the source) and outgoing (to the receiver) angles. In practice, this means that the inversion result becomes sensitive to the choice of parametrization, notably because the null space of the inversion depends on this choice. We can use a least-overlapping parametrization that minimizes the overlaps between the radiation patterns, in which case each parameter is only sensitive in a restricted angle domain, or an overlapping parametrization that contains a parameter sensitive to all angles, in which case overlaps between the radiation patterns occur. Considering a multiparameter inversion in an elastic vertically transverse isotropic medium and a complex land geological setting, we show that the inversion with the least-overlapping parametrization gives less satisfactory results than the one with the overlapping parametrization. The difficulties come from the complex wave paths, which make it difficult to predict the areas of sensitivity of each parameter. This shows that the parametrization choice should be based not only on the radiation pattern analysis but also on the angular coverage at each subsurface point, which depends on the geology and the acquisition layout.
Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak
2010-01-01
Propellant loading from the Storage Tank to the External Tank is one of the very important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components: the storage tank filled with cryogenic fluid at a very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the parameters most useful for design purposes are the predicted pre-chill time, loading time, amount of fuel lost, and maximum pressure rise. The physics involved in the mathematical modeling is quite complex because the process is unsteady, there is a phase change as some of the fuel passes from the liquid to the gas state, and there is conjugate heat transfer within the pipe walls as well as between the solid and fluid regions. The simulation is also tedious and time consuming. Overall, this is a complex system, and the objective of the work is the students' involvement in the parametric study and optimization of the numerical modeling towards the design of such a system. The students first have to become familiar with and understand the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) an improved algorithm to make the transient simulation computationally effective (reduced CPU time) and (ii) a parametric study to evaluate design parameters by changing the operational conditions.
Global, Multi-Objective Trajectory Optimization With Parametric Spreading
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Englander, Jacob A.; Phillips, Sean M.; Hughes, Kyle M.
2017-01-01
Mission design problems are often characterized by multiple, competing trajectory optimization objectives. Recent multi-objective trajectory optimization formulations enable the generation of globally optimal Pareto solutions via a multi-objective genetic algorithm. A byproduct of these formulations is that clustering in the design space can occur as the population evolves towards the Pareto front. This clustering can be a drawback, however, if parametric evaluations of design variables are desired. This effort addresses clustering by incorporating operators that encourage a uniform spread over specified design variables while maintaining Pareto front representation. The algorithm is demonstrated on a Neptune orbiter mission, and enhanced multidimensional visualization strategies are presented.
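One way to realize such a spreading operator, sketched under our own assumptions (this is not necessarily the paper's operator), is to score individuals by crowding distance computed over the specified design variables rather than over the objectives:

    # Crowding distance in design space: individuals in sparsely populated
    # parameter regions score higher, so selection can favour them and
    # counteract clustering while Pareto ranking handles optimality.
    import numpy as np

    def design_space_crowding(X):
        # X: (N, d) design-variable values of the current population.
        N, d = X.shape
        score = np.zeros(N)
        for j in range(d):
            order = np.argsort(X[:, j])
            span = X[order[-1], j] - X[order[0], j]
            span = span if span > 0 else 1.0
            score[order[0]] = score[order[-1]] = np.inf  # keep boundary designs
            score[order[1:-1]] += (X[order[2:], j] - X[order[:-2], j]) / span
        return score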
Definition of NASTRAN sets by use of parametric geometry
NASA Technical Reports Server (NTRS)
Baughn, Terry V.; Tiv, Mehran
1989-01-01
Many finite element preprocessors describe finite element model geometry with points, lines, surfaces and volumes. One method for describing these basic geometric entities is by use of parametric cubics, which are useful for representing complex shapes. The lines, surfaces and volumes may be discretized for follow-on finite element analysis. The ability to limit or selectively recover results from the finite element model is extremely important to the analyst. Equally important is the ability to easily apply boundary conditions. Although graphical preprocessors have made these tasks easier, model complexity may not lend itself to easy identification of a group of grid points desired for data recovery or for the application of constraints. A methodology is presented which makes use of the assignment of grid point locations in parametric coordinates. The parametric coordinates provide a convenient ordering of the grid point locations and a method for retrieving the grid point IDs from the parent geometry. The selected grid points may then be used for the generation of the appropriate set and constraint cards.
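A hedged sketch of the methodology's core step, with an invented 5 x 4 grid standing in for a real surface mesh: grid IDs are recovered from their parametric (u, v) coordinates and written to a NASTRAN SET card.

```python
# Hedged sketch: select grid points by parametric coordinate on the
# parent surface and emit a NASTRAN SET card. A real preprocessor
# would supply the (id, u, v) tuples from the actual mesh.
grids = []
gid = 1
for v in range(4):          # parametric rows
    for u in range(5):      # parametric columns
        grids.append((gid, u / 4.0, v / 3.0))   # normalize to [0, 1]
        gid += 1

# Select every grid on the surface edge v = 0, e.g. to constrain it.
edge_ids = sorted(g for g, u, v in grids if abs(v) < 1e-9)
print("SET 101 = " + ",".join(str(g) for g in edge_ids))
```

The same predicate generalizes to interior bands (e.g. 0.25 < u < 0.5) for selective data recovery.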
Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nataf, J.M.; Winkelmann, F.
We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.
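The flavor of the symbolic interface can be sketched with SymPy standing in for SPARK's computer-algebra layer (SPARK itself used a different toolchain): an equation entered symbolically is solved for each of its variables and executable solution code is emitted.

```python
# Hedged sketch of symbolic-interface-style code generation: solve one
# equation for each variable and print Python solution code.
import sympy as sp

q, T1, T2, R = sp.symbols("q T1 T2 R")
eq = sp.Eq(q, (T1 - T2) / R)       # steady heat flow through a resistance

for var in (q, T1, T2, R):
    sol = sp.solve(eq, var)[0]
    print(f"{var} = {sp.pycode(sol)}")
```

Each printed line is a ready-to-use assignment, which is the essence of generating solver code from an equation entered in symbolic form.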
Parametric Amplification For Detecting Weak Optical Signals
NASA Technical Reports Server (NTRS)
Hemmati, Hamid; Chen, Chien; Chakravarthi, Prakash
1996-01-01
Optical-communication receivers of the proposed type implement a high-sensitivity scheme of optical parametric amplification followed by direct detection for the reception of extremely weak signals. The design incorporates both optical parametric amplification and direct detection in an optimized receiver, enhancing the effective signal-to-noise ratio during reception in the photon-starved (photon-counting) regime. It eliminates the need for the complexity of a heterodyne detection scheme and partly overcomes the limitations imposed on older direct-detection schemes by receiver-generated noise and by the limited quantum efficiencies of photodetectors.
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Constraints can be handled similarly using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
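A toy, hedged rendition of the parametric idea (a one-coefficient strategy and made-up impact/volatility numbers, not the paper's formulation): estimate expected cost plus a CVaR penalty by Monte Carlo and pick the best coefficient on a grid.

```python
# Hedged toy: parametrize the execution strategy by one coefficient
# (fraction of remaining shares sold per period), score it by simulated
# expected cost plus a CVaR penalty, and grid-search the coefficient.
import numpy as np

rng = np.random.default_rng(0)
X0, T, eta, sigma, lam, alpha = 1.0e6, 10, 1e-6, 0.02, 1.0, 0.95

def sim_cost(theta, n_paths=5000):
    x = np.full(n_paths, X0)           # remaining shares
    price_move = np.zeros(n_paths)     # cumulative price drift
    cost = np.zeros(n_paths)
    for t in range(T):
        trade = theta * x if t < T - 1 else x.copy()  # liquidate the rest at T
        cost += trade * (eta * trade - price_move)    # impact minus drift gain
        x = x - trade
        price_move += sigma * rng.standard_normal(n_paths)
    return cost

def objective(theta):
    c = sim_cost(theta)
    cvar = c[c >= np.quantile(c, alpha)].mean()       # tail average
    return c.mean() + lam * cvar

best = min(np.linspace(0.05, 0.5, 10), key=objective)
print(f"best per-period liquidation fraction ~ {best:.2f}")
```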
Graphing Powers and Roots of Complex Numbers.
ERIC Educational Resources Information Center
Embse, Charles Vonder
1993-01-01
Using De Moivre's theorem and a parametric graphing utility, examines powers and roots of complex numbers and allows students to establish connections between the visual and numerical representations of complex numbers. Provides a program to numerically verify the roots of complex numbers. (MDH)
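A worked version of the article's exercise: De Moivre's theorem gives the n-th roots of a complex number, which can then be verified (or plotted parametrically) numerically.

```python
# n-th roots of a complex number via De Moivre's theorem, with a
# numerical check that each root cubes back to the original value.
import cmath

def nth_roots(z, n):
    r, phi = abs(z), cmath.phase(z)
    return [r**(1 / n) * cmath.exp(1j * (phi + 2 * cmath.pi * k) / n)
            for k in range(n)]

for w in nth_roots(-8j, 3):            # cube roots of -8i
    print(w, "->", w**3)               # each w**3 should return -8i
```

Plotting the real and imaginary parts of the roots as parametric (x, y) points shows them equally spaced on a circle of radius |z|^(1/n), the visual connection the article emphasizes.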
Extracting 3D Parametric Curves from 2D Images of Helical Objects.
Willcocks, Chris G; Jackson, Philip T G; Nelson, Carl J; Obara, Boguslaw
2017-09-01
Helical objects occur in medicine, biology, cosmetics, nanotechnology, and engineering. Extracting a 3D parametric curve from a 2D image of a helical object has many practical applications, in particular being able to extract metrics such as tortuosity, frequency, and pitch. We present a method that is able to straighten the image object and derive a robust 3D helical curve from peaks in the object boundary. The algorithm has a small number of stable parameters that require little tuning, and the curve is validated against both synthetic and real-world data. The results show that the extracted 3D curve comes within close Hausdorff distance to the ground truth, and has near identical tortuosity for helical objects with a circular profile. Parameter insensitivity and robustness against high levels of image noise are demonstrated thoroughly and quantitatively.
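For concreteness, the sketch below generates a parametric 3D helix and computes tortuosity as arc length over end-to-end distance, one common definition of the metric mentioned above; the radius and pitch are arbitrary.

```python
# Parametric helix plus a tortuosity estimate (arc length / chord).
import numpy as np

t = np.linspace(0, 6 * np.pi, 2000)             # three turns
r, pitch = 1.0, 0.5
curve = np.column_stack((r * np.cos(t),
                         r * np.sin(t),
                         pitch * t / (2 * np.pi)))

seg = np.diff(curve, axis=0)
arc_length = np.linalg.norm(seg, axis=1).sum()
chord = np.linalg.norm(curve[-1] - curve[0])
print(f"tortuosity = {arc_length / chord:.2f}")
```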
Niphadkar, Madhura; Nagendra, Harini; Tarantino, Cristina; Adamo, Maria; Blonda, Palma
2017-01-01
The establishment of invasive alien species in varied habitats across the world is now recognized as a genuine threat to the preservation of biodiversity. Specifically, plant invasions in the understorey of tropical forests are detrimental to the persistence of healthy ecosystems. Monitoring such invasions using Very High Resolution (VHR) satellite remote sensing has been shown to be valuable in designing management interventions for the conservation of native habitats. Object-based classification methods are very helpful in identifying invasive plants in various habitats, since they imitate, by design, the pattern-recognition ability of the human brain. However, these methods have not been tested adequately in dense tropical mixed forests where invasion occurs in the understorey. This study compares a pixel-based and an object-based classification method for mapping the understorey invasive shrub Lantana camara (Lantana) in a tropical mixed forest habitat in the Western Ghats biodiversity hotspot in India. Overall, a hierarchical approach of mapping the top canopy first, and then processing further for the understorey shrub using measures such as texture and vegetation indices, proved effective in separating Lantana from other cover types. In the first method, we implement a simple parametric supervised classification for mapping cover types, and then process within these types for Lantana delineation. In the second method, we use an object-based segmentation algorithm to map cover types, and then perform further processing for separating Lantana. The improved ability of the object-based approach to delineate structurally distinct objects, with characteristic spectral and spatial properties of their own as well as with reference to their surroundings, allows for much more flexibility in identifying invasive understorey shrubs among the complex vegetation of the tropical forest than that provided by the parametric classifier. Conservation practices in tropical mixed forests can benefit greatly by adopting methods which use high resolution remotely sensed data and advanced techniques to monitor the patterns and effective functioning of native ecosystems by periodically mapping disturbances such as invasion. PMID:28620400
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Banfi, F.; Brumana, R.; Oreni, D.; Previtali, M.; Roncoroni, F.
2015-08-01
This paper describes a procedure for the generation of a detailed HBIM which is then turned into a model for mobile apps based on augmented and virtual reality. Starting from laser point clouds, photogrammetric data and additional information, a geometric reconstruction with a high level of detail can be carried out by considering the basic requirements of BIM projects (parametric modelling, object relations, attributes). The work aims at demonstrating that a complex HBIM can be managed in portable devices to extract useful information not only for expert operators, but also towards a wider user community interested in cultural tourism.
Reliability-Based Control Design for Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
Using Spatial Correlations of SPDC Sources for Increasing the Signal to Noise Ratio in Images
NASA Astrophysics Data System (ADS)
Ruíz, A. I.; Caudillo, R.; Velázquez, V. M.; Barrios, E.
2017-05-01
We experimentally show that, by using spatial correlations of photon pairs produced by Spontaneous Parametric Down-Conversion, it is possible to increase the Signal to Noise Ratio in images of objects illuminated with those photons; in comparison, objects illuminated with light from a laser show a lower ratio. Our simple experimental set-up was capable of producing an average improvement in signal-to-noise ratio of 11 dB for parametric down-converted light over laser light. This simple method can be easily implemented for obtaining high contrast images of faint objects and for transmitting information with low noise.
NASA Astrophysics Data System (ADS)
Sibileau, Alberto; Auricchio, Ferdinando; Morganti, Simone; Díez, Pedro
2018-01-01
Architectured materials (or metamaterials) are constituted by a unit-cell with a complex structural design repeated periodically, forming a bulk material with emergent mechanical properties. One may obtain specific macro-scale (or bulk) properties in the resulting architectured material by properly designing the unit-cell. Typically, this is stated as an optimal design problem in which the parameters describing the shape and mechanical properties of the unit-cell are selected in order to produce the desired bulk characteristics. This is especially pertinent due to the ease of manufacturing these complex structures with 3D printers. The proper generalized decomposition provides explicit parametric solutions of parametric PDEs. Here, the same ideas are used to obtain parametric solutions of the algebraic equations arising from lattice structural models. Once the explicit parametric solution is available, the optimal design problem is a simple post-process. The same strategy is applied in the numerical illustrations, first to a unit-cell (then homogenized with periodicity conditions), and in a second phase to the complete structure of a lattice material specimen.
Parametric Cost Analysis: A Design Function
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1989-01-01
Parametric cost analysis uses equations to map measurable system attributes into cost. The measures of the system attributes are called metrics. The equations are called cost estimating relationships (CERs), and are obtained by the analysis of cost and technical metric data of products analogous to those to be estimated. Examples of system metrics include mass, power, failure_rate, mean_time_to_repair, energy_consumed, payload_to_orbit, pointing_accuracy, manufacturing_complexity, number_of_fasteners, and percent_of_electronics_weight. The basic assumption is that a measurable relationship exists between system attributes and the cost of the system. If a function exists, the attributes are cost drivers. Candidates for metrics include system requirement metrics and engineering process metrics. Requirements are constraints on the engineering process. From optimization theory we know that any active constraint generates cost by not permitting full optimization of the objective. Thus, requirements are cost drivers. Engineering processes reflect a projection of the requirements onto the corporate culture, engineering technology, and system technology. Engineering processes are an indirect measure of the requirements and, hence, are cost drivers.
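A minimal CER fit in the spirit described above, with invented data points: regress log(cost) on log(mass) to obtain a power-law CER, cost = a * mass^b.

```python
# Fit a one-metric CER (cost = a * mass^b) by log-log least squares.
# The data points are invented for illustration.
import numpy as np

mass = np.array([120., 250., 400., 800., 1500.])     # kg
cost = np.array([3.1, 5.8, 8.2, 14.5, 24.0])         # $M

b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
a = np.exp(log_a)
print(f"CER: cost ~ {a:.3f} * mass^{b:.2f}")
print(f"estimate for 600 kg: ${a * 600**b:.1f}M")
```

Real CERs typically use several metrics at once; the single-metric power law is the simplest member of the family.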
Sample Skewness as a Statistical Measurement of Neuronal Tuning Sharpness
Samonds, Jason M.; Potetz, Brian R.; Lee, Tai Sing
2014-01-01
We propose using the statistical measurement of the sample skewness of the distribution of mean firing rates of a tuning curve to quantify sharpness of tuning. For some features, like binocular disparity, tuning curves are best described by relatively complex and sometimes diverse functions, making it difficult to quantify sharpness with a single function and parameter. Skewness provides a robust nonparametric measure of tuning curve sharpness that is invariant with respect to the mean and variance of the tuning curve and is straightforward to apply to a wide range of tuning, including simple orientation tuning curves and complex object tuning curves that often cannot even be described parametrically. Because skewness does not depend on a specific model or function of tuning, it is especially appealing to cases of sharpening where recurrent interactions among neurons produce sharper tuning curves that deviate in a complex manner from the feedforward function of tuning. Since tuning curves for all neurons are not typically well described by a single parametric function, this model independence additionally allows skewness to be applied to all recorded neurons, maximizing the statistical power of a set of data. We also compare skewness with other nonparametric measures of tuning curve sharpness and selectivity. Compared to the other nonparametric measures tested, skewness is best used for capturing the sharpness of multimodal tuning curves defined by narrow peaks (maxima) and broad valleys (minima). Finally, we provide a more formal definition of sharpness using a shape-based information gain measure, and we show that skewness is correlated with this definition. PMID:24555451
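The proposed measure reduces to a couple of lines; the toy tuning curves below are invented for illustration.

```python
# Sample skewness of the mean firing rates across a tuning curve:
# a narrow peak over a broad valley yields higher skewness than a
# broad, smooth peak.
import numpy as np
from scipy.stats import skew

angles = np.linspace(-90, 90, 19)
sharp = np.exp(-(angles / 15.0)**2) * 40 + 5    # narrow peak, broad valley
broad = np.exp(-(angles / 60.0)**2) * 40 + 5    # broad peak

print("sharp tuning skewness:", round(skew(sharp), 2))
print("broad tuning skewness:", round(skew(broad), 2))
```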
Exploration of complex visual feature spaces for object perception
Leeds, Daniel D.; Pyles, John A.; Tarr, Michael J.
2014-01-01
The mid- and high-level visual properties supporting object perception in the ventral visual pathway are poorly understood. In the absence of well-specified theory, many groups have adopted a data-driven approach in which they progressively interrogate neural units to establish each unit's selectivity. Such methods are challenging in that they require search through a wide space of feature models and stimuli using a limited number of samples. To more rapidly identify higher-level features underlying human cortical object perception, we implemented a novel functional magnetic resonance imaging method in which visual stimuli are selected in real-time based on BOLD responses to recently shown stimuli. This work was inspired by earlier primate physiology work, in which neural selectivity for mid-level features in IT was characterized using a simple parametric approach (Hung et al., 2012). To extend such work to human neuroimaging, we used natural and synthetic object stimuli embedded in feature spaces constructed on the basis of the complex visual properties of the objects themselves. During fMRI scanning, we employed a real-time search method to control continuous stimulus selection within each image space. This search was designed to maximize neural responses across a pre-determined 1 cm3 brain region within ventral cortex. To assess the value of this method for understanding object encoding, we examined both the behavior of the method itself and the complex visual properties the method identified as reliably activating selected brain regions. We observed: (1) Regions selective for both holistic and component object features and for a variety of surface properties; (2) Object stimulus pairs near one another in feature space that produce responses at the opposite extremes of the measured activity range. Together, these results suggest that real-time fMRI methods may yield more widely informative measures of selectivity within the broad classes of visual features associated with cortical object representation. PMID:25309408
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuzmina, L.K.
The research deals with different aspects of mathematical modelling and the analysis of complex dynamic non-linear systems arising from applied problems in mechanics (in particular, gyrosystems, stabilization and orientation systems, and control systems of movable objects, including aviation and aerospace systems). The non-linearity, multi-connectedness and high dimensionality of the dynamical problems that occur in the initial full statement lead to the need to narrow the problem and to decompose the full model, while preserving its main properties and qualitative equivalence. The elaboration of regular methods for modelling problems in dynamics and the generalization of the reduction principle are the main aims of the investigations. Here, a uniform methodology based on Lyapunov's methods, founded by N.G. Chetayev, is developed. The objects of the investigations are considered from a distinctive standpoint, as systems of the singularly perturbed class, treated as systems with singular parametric perturbations. This is the natural extension of the statements of N.G. Chetayev and P.A. Kuzmin on parametric stability. In this paper, systematic procedures for the construction of correct simplified (comparison) models are developed, the validity conditions for the transition are determined, estimates are obtained, and regular engineering-level algorithms are derived. As applied to stabilization and orientation systems with gyroscopic control subsystems, these methods make it possible to build a hierarchical sequence of admissible simplified models and to determine the conditions of their correctness.
A Parametric Analysis of HELSTAR
1983-12-01
[OCR residue of the thesis front matter and table of contents; recoverable information: report number AFIT/GSO/OS/83D-7, thesis by James Miklasevich, Captain, USAF; contents include Statement of Problem, Objectives of the Research, Launch Scenarios, and Launch Sequences 1-3.]
Parametric Modelling of As-Built Beam Framed Structure in Bim Environment
NASA Astrophysics Data System (ADS)
Yang, X.; Koehl, M.; Grussenmeyer, P.
2017-02-01
A complete documentation and conservation of a historic timber roof requires the integration of geometry modelling, attributional and dynamic information management, and the results of structural analysis. The recently developed as-built Building Information Modelling (BIM) technique has the potential to provide a uniform platform that integrates traditional geometry modelling, parametric element management, and structural analysis. The main objective of the project presented in this paper is to develop a parametric modelling tool for a timber roof structure whose elements form a leaning and crossing beam frame. Since Autodesk Revit, as typical BIM software, provides the platform for parametric modelling and information management, an API plugin able to automatically create the parametric beam elements and link them together with strict relationships was developed. The plugin under development, which can obtain the parametric beam model from total station points and terrestrial laser scanning data via the Autodesk Revit API, is introduced in the paper. The results show the potential of automating parametric modelling by interactive API development in a BIM environment. The approach also integrates the separate data processing steps and different platforms into the uniform Revit software.
Classical imaging with undetected light
NASA Astrophysics Data System (ADS)
Cardoso, A. C.; Berruezo, L. P.; Ávila, D. F.; Lemos, G. B.; Pimenta, W. M.; Monken, C. H.; Saldanha, P. L.; Pádua, S.
2018-03-01
We obtained the phase and intensity images of an object by detecting classical light which never interacted with it. With a double passage of the pump and signal laser beams through a nonlinear crystal, we observe interference between the two idler beams produced by stimulated parametric down conversion. The object is placed in the amplified signal beam after its first passage through the crystal and the image is observed in the interference of the generated idler beams. High contrast images can be obtained even for objects with a small transmittance coefficient due to the geometry of the interferometer and to the stimulated parametric emission. Like its quantum counterpart, this three-color imaging concept can be useful when the object must be probed with light at a wavelength for which detectors are not available.
Does linear separability really matter? Complex visual search is explained by simple search
Vighneshvel, T.; Arun, S. P.
2013-01-01
Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search. PMID:24029822
Why preferring parametric forecasting to nonparametric methods?
Jabot, Franck
2015-05-07
A recent series of papers by Charles T. Perretti and collaborators have shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise for two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It will be argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed thanks to simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts with unknown reliability. This argumentation is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches until methods have been developed to assess the reliability of nonparametric forecasting.
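A hedged sketch of the kind of check argued for above, using the theta-logistic model: simulate with process noise, then verify that the fitted model's one-step forecast error matches the noise level (here we cheat and reuse the true parameters; a real diagnostic would use estimated ones). Parameter values are illustrative.

```python
# Theta-logistic simulation with lognormal process noise, plus a crude
# one-step-ahead forecast reliability check.
import numpy as np

rng = np.random.default_rng(1)
r, K, theta, sigma = 1.8, 100.0, 1.0, 0.05

def step(n):
    return n * np.exp(r * (1 - (n / K)**theta)
                      + sigma * rng.standard_normal())

n = [10.0]
for _ in range(100):
    n.append(step(n[-1]))
n = np.array(n)

# One-step-ahead predictions from the deterministic skeleton.
pred = n[:-1] * np.exp(r * (1 - (n[:-1] / K)**theta))
rmse = np.sqrt(np.mean((np.log(pred) - np.log(n[1:]))**2))
print(f"one-step log-RMSE ~ {rmse:.3f} (should be close to sigma)")
```

When the model is correct, the residual RMSE on the log scale recovers sigma exactly; a large mismatch would flag the "true model myth" problem discussed above.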
NASA Technical Reports Server (NTRS)
Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.
2013-01-01
Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167 (excluding the actuators), significantly lower than that of other robotic hands, since those involve more complex assembly processes.
Systemic Analysis Approaches for Air Transportation
NASA Technical Reports Server (NTRS)
Conway, Sheila
2005-01-01
Air transportation system designers have had only limited success using traditional operations research and parametric modeling approaches in their analyses of innovations. They need a systemic methodology for modeling safety-critical infrastructure that is comprehensive, objective, and sufficiently concrete, yet simple enough to be used with reasonable investment. The methodology must also be amenable to quantitative analysis so that issues of system safety and stability can be rigorously addressed. However, air transportation has proven itself an extensive, complex system whose behavior is difficult to describe, let alone predict. There is a wide range of system analysis techniques available, but some are more appropriate for certain applications than others. Specifically in the area of complex system analysis, the literature suggests that both agent-based models and network analysis techniques may be useful. This paper discusses the theoretical basis for each approach in these applications, and explores their historic and potential further use for air transportation analysis.
NASA Astrophysics Data System (ADS)
Garagnani, S.; Manferdini, A. M.
2013-02-01
Since their introduction, modeling tools aimed at architectural design have evolved into today's "digital multi-purpose drawing boards" based on enhanced parametric elements able to originate whole buildings within virtual environments. Semantic splitting and element topology are features that allow objects to be "intelligent" (i.e., self-aware of what kind of element they are and with whom they can interact), thus representing the basics of Building Information Modeling (BIM), a coordinated, consistent and always up-to-date workflow improved in order to reach higher quality, reliability and cost reductions all over the design process. Even if BIM was originally intended for new architectures, its ability to store semantically inter-related information can be successfully applied to existing buildings as well, especially if they deserve particular care, such as Cultural Heritage sites. BIM engines can easily manage simple parametric geometries, collapsing them to standard primitives connected through hierarchical relationships: however, when components are generated from existing morphologies, for example by acquiring point clouds with digital photogrammetry or laser scanning equipment, complex abstractions have to be introduced while remodeling elements by hand, since automatic feature extraction in available software is still not effective. In order to introduce a methodology destined to process point cloud data in a BIM environment with high accuracy, this paper describes some experiences on monumental sites documentation, generated through a plug-in written for Autodesk Revit and codenamed GreenSpider after its capability to lay out points in space as if they were nodes of an ideal cobweb.
Kück, Patrick; Meusemann, Karen; Dambach, Johannes; Thormann, Birthe; von Reumont, Björn M; Wägele, Johann W; Misof, Bernhard
2010-03-31
Methods of alignment masking, which refer to the technique of excluding alignment blocks prior to tree reconstructions, have been successful in improving the signal-to-noise ratio in sequence alignments. However, the lack of formally well-defined methods to identify randomness in sequence alignments has prevented a routine application of alignment masking. In this study, we compared the effects on tree reconstructions of the most commonly used profiling method (GBLOCKS), which uses a predefined set of rules in combination with alignment masking, with a new profiling approach (ALISCORE) based on Monte Carlo resampling within a sliding window, using different data sets and alignment methods. While the GBLOCKS approach excludes variable sections above a certain threshold, a choice that remains arbitrary, the ALISCORE algorithm is free of a priori rating of parameter space and therefore more objective. ALISCORE was successfully extended to amino acids using a proportional model and empirical substitution matrices to score randomness in multiple sequence alignments. A complex bootstrap resampling leads to an even distribution of scores of randomly similar sequences to assess randomness of the observed sequence similarity. Testing performance on real data, both masking methods, GBLOCKS and ALISCORE, helped to improve tree resolution. The sliding window approach was less sensitive to different alignments of identical data sets and performed equally well on all data sets. Concurrently, ALISCORE is capable of dealing with different substitution patterns and heterogeneous base composition. ALISCORE and the most relaxed GBLOCKS gap parameter setting performed best on all data sets. Correspondingly, Neighbor-Net analyses showed the greatest decrease in conflict. Alignment masking improves the signal-to-noise ratio in multiple sequence alignments prior to phylogenetic reconstruction. Given the robust performance of alignment profiling, alignment masking should routinely be used to improve tree reconstructions. Parametric methods of alignment profiling can be easily extended to more complex likelihood-based models of sequence evolution, which opens the possibility of further improvements.
Time-varying phononic crystals
NASA Astrophysics Data System (ADS)
Wright, Derek Warren
The primary objective of this thesis was to gain a deeper understanding of acoustic wave propagation in phononic crystals, particularly those that include materials whose properties can be varied periodically in time. This research was accomplished in three ways. First, a 2D phononic crystal was designed, created, and characterized. Its properties closely matched those determined through simulation. The crystal demonstrated band gaps, dispersion, and negative refraction. It served as a means of elucidating the practicalities of phononic crystal design and construction and as a physical verification of their more interesting properties. Next, the transmission matrix method for analyzing 1D phononic crystals was extended to include the effects of time-varying material parameters. The method was then used to provide a closed-form solution for the case of periodically time-varying material parameters. Some intriguing results from the use of the extended method include dramatically altered transmission properties and parametric amplification. New insights can be gained from the governing equations and have helped to identify the conditions that lead to parametric amplification in these structures. Finally, 2D multiple scattering theory was modified to analyze scatterers with time-varying material parameters. It is shown to be highly compatible with existing multiple scattering theories. It allows the total scattered field from a 2D time-varying phononic crystal to be determined. It was shown that time-varying material parameters significantly affect the phononic crystal transmission spectrum, and this was used to switch an incident monochromatic wave. Parametric amplification can occur under certain circumstances, and this effect was investigated using the closed-form solutions provided by the new 1D method. The complexity of the extended methods grows logarithmically as opposed to linearly with existing methods, resulting in superior computational complexity for large numbers of scatterers. Also, since both extended methods provide analytic solutions, they may give further insights into the factors that govern the behaviour of time-varying phononic crystals. These extended methods may now be used to design an active phononic crystal that could demonstrate new or enhanced properties.
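The classical, time-invariant transfer-matrix calculation that the thesis extends can be sketched as follows (material numbers invented): a periodic two-layer acoustic stack shows a band gap in its transmission spectrum.

```python
# Classical 1D transfer-matrix sketch for a periodic two-layer acoustic
# stack between identical half-spaces. Material values are illustrative.
import numpy as np

def layer_matrix(f, c, rho, d):
    k, Z = 2 * np.pi * f / c, rho * c
    return np.array([[np.cos(k * d),          1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

def transmission(f, n_cells=8):
    M = np.eye(2, dtype=complex)
    for _ in range(n_cells):
        M = layer_matrix(f, 1500., 1000., 1e-3) @ \
            layer_matrix(f, 2500., 1200., 1e-3) @ M
    Z0 = 1000. * 1500.                       # water-like half-spaces
    A, B, C, D = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    t = 2.0 / (A + B / Z0 + C * Z0 + D)      # standard 2x2 result
    return abs(t)**2

freqs = np.linspace(1e4, 1.5e6, 500)
T = [transmission(f) for f in freqs]
print(f"band-gap floor: min transmission = {min(T):.3e}")
```

The time-varying extension described above replaces these constant layer matrices with period-dependent ones, which is where parametric amplification enters.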
Introduction to multivariate discrimination
NASA Astrophysics Data System (ADS)
Kégl, Balázs
2013-07-01
Multivariate discrimination or classification is one of the best-studied problems in machine learning, with a plethora of well-tested and well-performing algorithms. There are also several good general textbooks [1-9] on the subject written for an average engineering, computer science, or statistics graduate student; most of them are also accessible to an average physics student with some background in computer science and statistics. Hence, instead of writing a generic introduction, we concentrate here on relating the subject to a practising experimental physicist. After a short introduction on the basic setup (Section 1) we delve into the practical issues of complexity regularization, model selection, and hyperparameter optimization (Section 2), since it is this step that makes high-complexity non-parametric fitting so different from low-dimensional parametric fitting. To emphasize that this issue is not restricted to classification, we illustrate the concept on a low-dimensional but non-parametric regression example (Section 2.1). Section 3 describes the common algorithmic-statistical formal framework that unifies the main families of multivariate classification algorithms. We explain here the large-margin principle that partly explains why these algorithms work. Section 4 is devoted to the description of the three main (families of) classification algorithms: neural networks, the support vector machine, and AdaBoost. We do not go into the algorithmic details; the goal is to give an overview of the form of the functions these methods learn and of the objective functions they optimize. Besides their technical description, we also make an attempt to put these algorithms into a socio-historical context. We then briefly describe some rather heterogeneous applications to illustrate the pattern recognition pipeline and to show how widespread the use of these methods is (Section 5). We conclude the chapter with three essentially open research problems that are either relevant to or even motivated by certain unorthodox applications of multivariate discrimination in experimental physics.
Spacelab mission dependent training parametric resource requirements study
NASA Technical Reports Server (NTRS)
Ogden, D. H.; Watters, H.; Steadman, J.; Conrad, L.
1976-01-01
Training flows were developed for typical missions, resource relationships analyzed, and scheduling optimization algorithms defined. Parametric analyses were performed to study the effect of potential changes in mission model, mission complexity and training time required on the resource quantities required to support training of payload or mission specialists. Typical results of these analyses are presented both in graphic and tabular form.
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-08-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
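A one-dimensional caricature of the approach, with a toy response standing in for an expensive kinetic simulation: a Chebyshev surrogate is refined until it stops changing; the paper's adaptive Smolyak construction plays this game in hundreds of dimensions.

```python
# Hedged 1D sketch: adaptively refine a polynomial surrogate of a model
# output over one parameter until it converges. The "model" is a toy
# stand-in for an expensive simulation.
import numpy as np
from numpy.polynomial import chebyshev as C

def model(k):                        # toy steady-state response
    return 1.0 / (1.0 + k)

lo, hi = 0.1, 10.0
to_k = lambda z: 0.5 * (lo + hi) + 0.5 * (hi - lo) * z   # [-1,1] -> [lo,hi]

z_test, prev, n = np.linspace(-1, 1, 200), None, 3
while True:
    z = np.cos(np.pi * (2 * np.arange(n) + 1) / (2 * n))  # Chebyshev nodes
    coef = C.chebfit(z, [model(to_k(v)) for v in z], n - 1)
    vals = C.chebval(z_test, coef)
    if prev is not None and np.max(np.abs(vals - prev)) < 1e-6:
        break
    prev, n = vals, 2 * n

print(f"surrogate converged with {n} nodes")
```

Once such a surrogate exists, sampling it for uncertainty quantification or parameter estimation costs almost nothing, which is the "non-intrusive" payoff the abstract describes.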
The shape of novel objects contributes to shared impressions.
Kurosu, Aaron; Todorov, Alexander
2017-11-01
How do people share impressions of novel objects, and is this even possible? We tested whether the shape of novel 3-D objects can lead to similar impressions across people. To do this, we introduced a technique for manipulating highly complex shapes and measured four types of evaluative impressions (approachable, dangerous, beautiful, likable). Because relatively little is understood regarding how people form impressions of novel objects, we first sought to confirm the reliability of this behavior by examining how similar impressions are for an individual asked to re-evaluate the stimuli (i.e., impression consistency). To situate the magnitude of reliability, we compared novel objects to faces, which are familiar and extensively studied stimuli. Impression consistency was always present for both types of stimuli and comparable across all evaluations. Second, and more importantly, we tested how similar impressions are across people (i.e., impression consensus). Impression consensus was always present for faces, but not always for novel objects. In Study 2 we examined a greater diversity of shapes and replicated the findings of Study 1 for novel objects. The findings suggest that impression consensus for novel objects only emerges when certain types of shapes and evaluations map together. When such a mapping is possible, impressions are isomorphic with the parametrized shapes.
Prepositioning emergency supplies under uncertainty: a parametric optimization method
NASA Astrophysics Data System (ADS)
Bai, Xuejie; Gao, Jinwu; Liu, Yankui
2018-07-01
Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.
Parameterization models for pesticide exposure via crop consumption.
Fantke, Peter; Wieland, Peter; Juraske, Ronnie; Shaddick, Gavin; Itoiz, Eva Sevigné; Friedrich, Rainer; Jolliet, Olivier
2012-12-04
An approach for estimating human exposure to pesticides via consumption of six important food crops is presented that can be used to extend multimedia models applied in health risk and life cycle impact assessment. We first assessed the variation of model output (pesticide residues per kg applied) as a function of model input variables (substance, crop, and environmental properties) including their possible correlations using matrix algebra. We identified five key parameters responsible for between 80% and 93% of the variation in pesticide residues, namely time between substance application and crop harvest, degradation half-lives in crops and on crop surfaces, overall residence times in soil, and substance molecular weight. Partition coefficients also play an important role for fruit trees and tomato (Kow), potato (Koc), and lettuce (Kaw, Kow). Focusing on these parameters, we develop crop-specific models by parametrizing a complex fate and exposure assessment framework. The parametric models thereby reflect the framework's physical and chemical mechanisms and predict pesticide residues in harvest using linear combinations of crop, crop surface, and soil compartments. Parametric model results correspond well with results from the complex framework for 1540 substance-crop combinations with total deviations between a factor 4 (potato) and a factor 66 (lettuce). Predicted residues also correspond well with experimental data previously used to evaluate the complex framework. Pesticide mass in harvest can finally be combined with reduction factors accounting for food processing to estimate human exposure from crop consumption. All parametric models can be easily implemented into existing assessment frameworks.
Parametrically excited helicopter ground resonance dynamics with high blade asymmetries
NASA Astrophysics Data System (ADS)
Sanches, L.; Michon, G.; Berlioz, A.; Alazard, D.
2012-07-01
The present work is aimed at verifying the influence of high asymmetries in the variation of the in-plane lead-lag stiffness of one blade on the ground resonance phenomenon in helicopters. The periodic equations of motion are analyzed by using Floquet's Theory (FM) and the boundaries of instabilities are predicted. The stability chart obtained as a function of the asymmetry parameters and rotor speed reveals a complex evolution of critical zones and the existence of bifurcation points at low rotor speed values. Additionally, it is known that, when treated as parametric excitations, periodic terms may cause parametric resonances in dynamic systems, some of which can become unstable. Therefore, the helicopter is later considered as a parametrically excited system and the equations are treated analytically by applying the Method of Multiple Scales (MMS). A stability analysis is used to verify the existence of unstable parametric resonances with first and second-order sets of equations. The results are compared and validated with those obtained by Floquet's Theory. Moreover, an explanation is given for the presence of unstable motion at low rotor speeds due to parametric instabilities of the second order.
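A hedged single-mode illustration of the Floquet analysis (a Mathieu-type equation standing in for one lead-lag mode, with invented coefficients): build the monodromy matrix over one period and test whether any Floquet multiplier leaves the unit circle.

```python
# Floquet stability check for x'' + w0^2 (1 + eps*cos(W t)) x = 0,
# a Mathieu-type stand-in for a single parametrically excited mode.
import numpy as np
from scipy.integrate import solve_ivp

w0, eps, W = 1.0, 0.3, 2.0       # W ~ 2*w0: principal parametric resonance

def rhs(t, y):
    return [y[1], -(w0**2) * (1 + eps * np.cos(W * t)) * y[0]]

Tp = 2 * np.pi / W               # forcing period
M = np.empty((2, 2))
for i, y0 in enumerate(np.eye(2)):
    sol = solve_ivp(rhs, (0, Tp), y0, rtol=1e-10, atol=1e-12)
    M[:, i] = sol.y[:, -1]       # column = image of a basis vector

mults = np.linalg.eigvals(M)
print("Floquet multiplier magnitudes:", np.abs(mults))
print("unstable" if np.abs(mults).max() > 1 + 1e-6 else "stable")
```

Sweeping W and eps on a grid with this check reproduces the familiar instability tongues, the one-mode analogue of the stability charts discussed above.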
Lorey, Britta; Pilgramm, Sebastian; Bischoff, Matthias; Stark, Rudolf; Vaitl, Dieter; Kindermann, Stefan; Munzert, Jörn; Zentgraf, Karen
2011-01-01
The present study examined the neural basis of vivid motor imagery with parametric functional magnetic resonance imaging. 22 participants performed motor imagery (MI) of six different right-hand movements that differed in terms of pointing accuracy needs and object involvement, i.e., either none, two big or two small squares had to be pointed at in alternation, either with or without an object grasped with the fingers. After each imagery trial, they rated the perceived vividness of motor imagery on a 7-point scale. Results showed that increased perceived imagery vividness was parametrically associated with increasing neural activation within the left putamen, the left premotor cortex (PMC), the posterior parietal cortex of the left hemisphere, the left primary motor cortex, the left somatosensory cortex, and the left cerebellum. Within the right hemisphere, activation was found within the right cerebellum, the right putamen, and the right PMC. It is concluded that the perceived vividness of MI is parametrically associated with neural activity within sensorimotor areas. The results corroborate the hypothesis that MI is an outcome of neural computations based on movement representations located within motor areas. PMID:21655298
Historic Bim: a New Repository for Structural Health Monitoring
NASA Astrophysics Data System (ADS)
Banfi, F.; Barazzetti, L.; Previtali, M.; Roncoroni, F.
2017-05-01
Recent developments in Building Information Modelling (BIM) technologies are facilitating the management of historic complex structures using new applications. This paper proposes a generative method combining the morphological and typological aspects of historic buildings (H-BIM) with a set of monitoring information. This combination of 3D digital survey, parametric modelling and monitoring datasets allows for the development of a system for archiving and visualizing structural health monitoring (SHM) data (Fig. 1). The availability of a BIM database allows one to integrate different kinds of data, stored in different ways (e.g. reports, tables, graphs, etc.), with a representation directly connected to the 3D model of the structure at appropriate levels of detail (LoD). Data can be interactively accessed by selecting specific objects of the BIM, i.e. connecting the 3D position of the installed sensors with additional digital documentation. Such innovative BIM objects, which form a new BIM family for SHM, can then be reused in other projects, facilitating the archiving and exploitation of the data acquired and processed. The application of advanced modeling techniques allows for the reduction of the time and costs of the generation process, and supports cooperation between different disciplines using a central workspace. However, it also reveals new challenges for parametric software and exchange formats. The case study presented is the medieval bridge Azzone Visconti in Lecco (Italy), in which multi-temporal vertical movements during load testing were integrated into H-BIM.
The 'F-complex' and MMN tap different aspects of deviance.
Laufer, Ilan; Pratt, Hillel
2005-02-01
To compare the 'F(fusion)-complex' with the Mismatch Negativity (MMN), both components associated with automatic detection of changes in the acoustic stimulus flow. Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'Oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: the base was always presented in front of the subject, and formant transitions were presented to the front, left or right in a virtual reality room. An illusion of a lateralized echo (duplex sensation) accompanied fusion of the base with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN. Occipito-parietal regions and prefrontal regions were associated with the F-complex in all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in the case of front fusion (no duplex effect). MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories. The F-complex and MMN reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.
Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis,Michael J.
2006-01-01
Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves, and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach. Thereafter, we focus on a shape optimization problem for an Apollo-like reentry capsule. The optimization seeks to enhance the lift-to-drag ratio of the capsule by modifying the shape of its heat shield in conjunction with a center-of-gravity (c.g.) offset. This multipoint and multi-objective optimization problem is used to demonstrate the overall effectiveness of the Cartesian adjoint method for addressing the issues of complex aerodynamic design. This abstract presents only a brief outline of the numerical method and results; full details will be given in the final paper.
Automated a complex computer aided design concept generated using macros programming
NASA Astrophysics Data System (ADS)
Rizal Ramly, Mohammad; Asrokin, Azharrudin; Abd Rahman, Safura; Zulkifly, Nurul Ain Md
2013-12-01
Changing a complex Computer Aided Design (CAD) profile such as car and aircraft surfaces has always been difficult and challenging. The capability of CAD software such as AutoCAD and CATIA shows that a simple configuration of a CAD design can be easily modified without hassle, but this is not the case with complex design configurations. Design changes help users to test and explore various configurations of the design concept before the production of a model. The purpose of this study is to look into macros programming as a parametric method for commercial aircraft design. Macros programming is a method where configurations of the design are produced by recording a script of commands, editing the data values, and adding new command lines to create elements of parametric design. The steps and procedure to create a macro program are discussed, along with some difficulties encountered during its creation and the advantages of its usage. Generally, the advantages of macros programming as a method of parametric design are: flexibility for design exploration, increased usability of the design solution, proper containment of some parameters by the model while restricting others, and real-time feedback on changes.
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2015-05-01
Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories.
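One simple construction of a simultaneous non-parametric 1D band (invented data; the paper's datasets and exact procedure differ): bootstrap the maximum deviation of the resampled mean trajectory from the sample mean.

```python
# Hedged sketch of a simultaneous 1D bootstrap confidence band: use the
# 95th percentile of the maximum (over time) deviation of resampled
# mean trajectories as a constant half-width around the sample mean.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_time = 12, 101
t = np.linspace(0, 1, n_time)
data = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal((n_subj, n_time))

mean = data.mean(axis=0)
devs = np.array([
    np.abs(data[rng.integers(0, n_subj, n_subj)].mean(axis=0) - mean).max()
    for _ in range(2000)
])
h = np.quantile(devs, 0.95)       # simultaneous 95% half-width
print(f"CI: mean(t) +/- {h:.3f} over the whole trajectory")
```

Using the maximum deviation (rather than pointwise percentiles) is what makes the band valid for the whole trajectory at once, mirroring the RFT correction on the parametric side.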
Parametric Study of Variable Emissivity Radiator Surfaces
NASA Technical Reports Server (NTRS)
Grob, Lisa M.; Swanson, Theodore D.
2000-01-01
The goal of spacecraft thermal design is to accommodate a high-functionality satellite in a package with low weight and little real estate. The extreme environments to which the satellite is exposed during its orbit are handled using passive and active control techniques. Heritage passive heat-rejection designs are sized for hot conditions and augmented with heaters at the cold end. Active heat-rejection designs to date are heavy, expensive, and/or complex. Incorporating an active radiator that is lighter, cheaper, and simpler will allow designers to meet the previously stated goal of spacecraft thermal design. Varying the radiator's surface properties without changing the radiating area (as with a VCHP) or the radiator's view (as with traditional louvers) is the objective of the variable emissivity (vary-e) radiator technologies. A parametric evaluation of the thermal performance of three such technologies is documented in this paper. Comparisons of Micro-Electromechanical Systems (MEMS), electrochromic, and electrophoretic radiators to conventional radiators, both passive and active, are quantified herein. With some noted limitations, the vary-e radiator surfaces provide significant advantages over traditional radiators and a promising alternative design technique for future spacecraft thermal systems.
Parametrically driven scalar field in an expanding background
NASA Astrophysics Data System (ADS)
Yanez-Pagans, Sergio; Urzagasti, Deterlino; Oporto, Zui
2017-10-01
We study the existence and dynamic behavior of localized and extended structures in a massive scalar inflaton field ϕ in 1+1 dimensions in the framework of an expanding universe with constant Hubble parameter. We introduce a parametric forcing, produced by another quantum scalar field ψ, over the effective mass squared around the minimum of the inflaton potential. For this purpose, we study the system in the context of the cubic-quintic complex Ginzburg-Landau equation and derive the amplitude equation associated with the cosmological scalar field equation, which near the parametric resonance allows us to find the field amplitude. We find homogeneous null solutions, flat-top expanding solitons, and dark soliton patterns. No persistent non-null solutions are found in the absence of parametric forcing, and divergent solutions are obtained when the forcing amplitude is greater than 4/3.
Design Automation Using Script Languages. High-Level CAD Templates in Non-Parametric Programs
NASA Astrophysics Data System (ADS)
Moreno, R.; Bazán, A. M.
2017-10-01
The main purpose of this work is to study the advantages offered by applying traditional technical drawing techniques to design automation processes in non-parametric CAD programs equipped with scripting languages. Given that an example drawing can be solved with traditional step-by-step detailed procedures, it is possible to do the same with CAD applications and to generalize the solution later by incorporating references. Today's CAD applications show striking gaps in solutions for building engineering: oblique projections (military and cavalier), 3D modelling of complex stairs, roofs, furniture, and so on. The use of geometric references (via variables in script languages) and their incorporation into high-level CAD templates allows processes to be automated. Instead of repeatedly creating similar designs or modifying their data, users should be able to use these templates to generate future variations of the same design. This paper presents the automation of several complex drawing examples based on CAD script files aided by parametric geometry calculation tools. The proposed method allows us to solve complex geometric designs not currently supported by CAD applications and to subsequently create new derivatives without user intervention. Automation in the generation of complex designs not only saves time but also increases the quality of the presentations and reduces the possibility of human error.
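A minimal Python sketch of the template idea: a geometric reference (here, the number of risers derived from the total rise) drives a generated drawing script, so a new design variant needs only new input values. The LINE command stands in for whatever primitive the target scripting language offers; it is not a real API.

```python
import math

def stair_script(total_rise_mm: float, tread_mm: float = 280.0,
                 max_riser_mm: float = 180.0) -> str:
    """Emit a drawing script for a straight stair from a few references."""
    n_risers = math.ceil(total_rise_mm / max_riser_mm)  # geometric reference
    riser = total_rise_mm / n_risers                    # derived, not typed in
    cmds, x, z = [], 0.0, 0.0
    for _ in range(n_risers):
        cmds.append(f"LINE {x:.1f},{z:.1f} {x:.1f},{z + riser:.1f}")      # riser
        z += riser
        cmds.append(f"LINE {x:.1f},{z:.1f} {x + tread_mm:.1f},{z:.1f}")  # tread
        x += tread_mm
    return "\n".join(cmds)

print(stair_script(total_rise_mm=2700.0))  # 15 risers of 180 mm, no manual edits
```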
Parametric identification of the process of preparing ceramic mixture as an object of control
NASA Astrophysics Data System (ADS)
Galitskov, Stanislav; Nazarov, Maxim; Galitskov, Konstantin
2017-10-01
The manufacture of ceramic materials and products largely depends on the preparation of clay raw materials. The main process here is mixing, which in industrial production is mostly done in cross-compound clay mixers of continuous operation with steam humidification. The authors identified features of the dynamics of this technological stage, which is itself a non-linear control object with distributed parameters. When solving practical automation tasks for a certain class of ceramic materials production, it is important to perform parametric identification of the moving clay. In this paper the task is solved using computational models approximated to a particular section of a clay mixer along its length. The research introduces a methodology of computational experiments as applied to the designed computational model. Parametric identification of the dynamic links was carried out from transient characteristics. The experiments showed that the control object in question is highly non-stationary. The obtained results are oriented toward synthesizing a multidimensional automatic control system for the preparation of ceramic mixture with specified humidity and temperature values in the presence of major disturbances to the technological process.
NASA Astrophysics Data System (ADS)
David, Laurent; Amara, Patricia; Field, Martin J.; Major, François
2002-08-01
Although techniques for the simulation of biomolecules, such as proteins and RNAs, have greatly advanced in the last decade, modeling complexes of biomolecules with metal ions remains problematic. Precise calculations can be done with quantum mechanical methods, but these are prohibitive for systems the size of macromolecules. More qualitative modeling can be done with molecular mechanical potentials, but the parametrization of force fields for metals is often difficult, particularly if the bonding between the metal and the groups in its coordination shell has significant covalent character. In this paper we present a method for deriving bond and bond-angle parameters for metal complexes from experimental bond and bond-angle distributions obtained from the Cambridge Structural Database. In conjunction with this method, we also introduce a non-standard energy term of Gaussian form that allows us to obtain a stable description of the coordination about a metal center during a simulation. The method was evaluated on Fe(II)-porphyrin complexes, on simple Cu(II) complexes, and on a number of Pb(II) complexes.
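One common way to turn an observed bond-length distribution into harmonic force-field parameters is Boltzmann inversion: under a harmonic approximation the lengths are Gaussian, so the equilibrium length is the sample mean and the force constant follows from the variance. The Python sketch below illustrates that generic idea on synthetic Fe-N distances; it is not the paper's exact fitting procedure, which also introduces the Gaussian stabilizing term described above.

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant, kcal/(mol*K)

def harmonic_params_from_distribution(samples, T=298.0):
    """Harmonic bond parameters from an observed bond-length distribution.

    For V = 0.5*k*(r - r0)^2, Boltzmann statistics give Gaussian lengths with
    variance KB*T/k, so r0 is the mean and k = KB*T / var.
    """
    r0 = float(np.mean(samples))
    k = KB * T / float(np.var(samples))
    return r0, k  # Angstrom, kcal/(mol*Angstrom^2)

# e.g., Fe-N distances harvested from database entries (synthetic here)
fe_n = np.random.default_rng(1).normal(loc=2.00, scale=0.04, size=500)
r0, k = harmonic_params_from_distribution(fe_n)
```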
Efficient model reduction of parametrized systems by matrix discrete empirical interpolation
NASA Astrophysics Data System (ADS)
Negri, Federico; Manzoni, Andrea; Amsallem, David
2015-12-01
In this work, we apply a matrix version of the Discrete Empirical Interpolation Method (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available and must be recovered through (often intrusive) procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach for dealing with complex physical and geometrical parametrizations in a non-intrusive, efficient, and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or from Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
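For readers unfamiliar with DEIM, the following Python sketch shows the greedy interpolation-point selection at its core; MDEIM applies the same selection to vectorized operator matrices. This is a generic sketch of the published algorithm (Chaturantabut and Sorensen), not the authors' implementation.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM interpolation-point selection.

    U : (n, m) matrix whose columns are empirical/POD basis vectors.
    Returns m row indices at which the nonaffine quantity is sampled.
    """
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # Interpolate u_l at the current points, then pick the worst residual.
        c = np.linalg.solve(U[np.ix_(idx, range(l))], U[idx, l])
        r = U[:, l] - U[:, :l] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)
```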
NASA Astrophysics Data System (ADS)
Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane
2018-02-01
Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization. A fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization that ideally does not only rely on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
Weakly Supervised Segmentation-Aided Classification of Urban Scenes from 3d LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Guinard, S.; Landrieu, L.
2017-05-01
We consider the problem of the semantic classification of 3D LiDAR point clouds obtained from urban scenes when the training set is limited. We propose a non-parametric segmentation model for urban scenes composed of anthropic objects of simple shapes, partitioning the scene into geometrically homogeneous segments whose size is determined by the local complexity. This segmentation can be integrated into a conditional random field (CRF) classifier in order to capture the high-level structure of the scene. For each cluster, this allows us to aggregate the noisy predictions of a weakly supervised classifier to produce a higher-confidence data term. We demonstrate the improvement provided by our method on two publicly available large-scale data sets.
Path integral learning of multidimensional movement trajectories
NASA Astrophysics Data System (ADS)
André, João; Santos, Cristina; Costa, Lino
2013-10-01
This paper explores the use of Path Integral Methods, particularly several variants of the recent Path Integral Policy Improvement (PI2) algorithm, in the learning of parametrized policies for multidimensional movement. We rely on Dynamic Movement Primitives (DMPs) to codify discrete and rhythmic trajectories, and apply the PI2-CMA and PIBB methods to learn optimal policy parameters according to different cost functions that inherently encode movement objectives. Additionally, we merge both of these variants and propose the PIBB-CMA algorithm, comparing all of them with the vanilla version of PI2. From the obtained results we conclude that PIBB-CMA surpasses all other methods in terms of convergence speed and final iterative cost, which leads to an increased interest in its application to more complex robotic problems.
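As a flavor of the black-box branch of this family, here is a hedged Python sketch of one PIBB-style update: perturb the DMP parameter vector, roll out, and average the perturbations with softmax weights on cost. The covariance and step-size adaptation that the CMA variants add are deliberately omitted; the cost function and constants are illustrative.

```python
import numpy as np

def pibb_update(theta, cost_fn, n_rollouts=20, sigma=0.05, h=10.0, rng=None):
    """One PIBB-style update: reward-weighted averaging of perturbations."""
    rng = np.random.default_rng(rng)
    eps = sigma * rng.standard_normal((n_rollouts, theta.size))
    costs = np.array([cost_fn(theta + e) for e in eps])
    s_min, s_max = costs.min(), costs.max()
    w = np.exp(-h * (costs - s_min) / (s_max - s_min + 1e-12))
    w /= w.sum()
    return theta + w @ eps          # move toward low-cost perturbations

theta = np.zeros(10)                # e.g., DMP basis-function weights
for _ in range(50):                 # toy quadratic cost standing in for a rollout
    theta = pibb_update(theta, cost_fn=lambda th: np.sum((th - 1.0) ** 2))
```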
NASA Astrophysics Data System (ADS)
Magri, Alphonso; Krol, Andrzej; Lipson, Edward; Mandel, James; McGraw, Wendy; Lee, Wei; Tillapaugh-Fay, Gwen; Feiglin, David
2009-02-01
This study was undertaken to register 3D parametric breast images derived from Gd-DTPA MR and F-18-FDG PET/CT dynamic image series. Nonlinear curve fitting (Levenberg-Marquardt algorithm) based on realistic two-compartment models was performed voxel-by-voxel separately for MR (Brix) and PET (Patlak). The PET dynamic series consisted of 50 frames of 1-minute duration. Each consecutive PET image was nonrigidly registered to the first frame using a finite element method and fiducial skin markers. The 12 post-contrast MR images were nonrigidly registered to the precontrast frame using a free-form deformation (FFD) method. Parametric MR images were registered to parametric PET images via CT using FFD, because the first PET time frame was acquired immediately after the CT image on a PET/CT scanner and is considered registered to the CT image. We conclude that nonrigid registration of PET and MR parametric images using CT data acquired during the PET/CT scan and the FFD method resulted in their improved spatial coregistration. The success of this procedure was limited due to a relatively large target registration error, TRE = 15.1 ± 7.7 mm, compared to the spatial resolution of PET (6-7 mm), and due to swirling image artifacts created in the MR parametric images by the FFD. Further refinement of the nonrigid registration of PET and MR parametric images is necessary to enhance visualization and integration of the complex diagnostic information provided by both modalities, which will lead to improved diagnostic performance.
Development and fabrication of S-band chip varactor parametric amplifier
NASA Technical Reports Server (NTRS)
Kramer, E.
1974-01-01
A noncryogenic, S-band parametric amplifier operating in the 2.2 to 2.3 GHz band and having an average input noise temperature of less than 30 K was built and tested. The parametric amplifier module occupies a volume of less than 1-1/4 cubic feet and weighs less than 60 pounds. The module is designed for use in various NASA ground stations to replace larger, more complex cryogenic units which require considerably more maintenance because of the cryogenic refrigeration system employed. The amplifier can be located up to 15 feet from the power supply unit. Optimum performance was achieved through the use of high-quality unpackaged (chip) varactors in the amplifier design.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Li, Jun
2002-09-01
In this paper a class of stochastic multiple-objective programming problems with one quadratic objective function, several linear objective functions, and linear constraints is introduced. This model is transformed into a deterministic multiple-objective nonlinear programming model by introducing the expectations of the random variables. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, the basic formula for the proposed approach. Sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference directions and weighted sums. By varying the parameter vector on the right-hand side of the model, the decision maker (DM) can freely search the efficient frontier. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.
Reliable clarity automatic-evaluation method for optical remote sensing images
NASA Astrophysics Data System (ADS)
Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen
2015-10-01
Image clarity, which reflects the degree of sharpness at object edges, is an important quality evaluation index for optical remote sensing images. Much work has been done on image clarity estimation. Common clarity-estimation methods for digital images currently include frequency-domain function methods, statistical parametric methods, gradient function methods, and edge acutance methods. Frequency-domain function methods are accurate clarity measures, but their calculation is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by image complexity. The edge acutance method is an effective approach to clarity estimation, but it requires edges to be picked out manually. Because of these limits in accuracy, consistency, or automation, the existing methods are not applicable to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method based on the principle of the edge acutance algorithm is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically find object edges in images, and the calculation of edge sharpness is improved. The new method has been tested on several groups of optical remote sensing images. Compared with existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity-evaluation method for optical remote sensing images.
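The following Python sketch shows a generic gradient-on-edges sharpness measure in the spirit of edge acutance, with the edge set found automatically by thresholding the gradient magnitude. It is an illustrative stand-in, not the paper's improved algorithm; the percentile threshold is an assumption.

```python
import numpy as np
from scipy import ndimage

def edge_acutance(image, edge_percentile=90.0):
    """Sharpness estimate: mean gradient magnitude on automatically found edges.

    Sobel gradients supply both the edge detector (strongest responses) and
    the sharpness measure, so no manual edge picking is needed.
    """
    img = image.astype(float)
    gx, gy = ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    edges = mag >= np.percentile(mag, edge_percentile)  # automatic edge search
    return float(mag[edges].mean())                     # higher = sharper edges
```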
Role models for complex networks
NASA Astrophysics Data System (ADS)
Reichardt, J.; White, D. R.
2007-11-01
We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.
Model-based spectral estimation of Doppler signals using parallel genetic algorithms.
Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F
2000-05-01
Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the time-segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach that implements a parametric spectral estimation method in real time using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by using the simplicity associated with GAs and exploiting their parallel characteristics. This allows the implementation of higher-order filters, increasing the spectral resolution and opening greater scope for using more complex methods.
Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes
Berhane, Tedros M.; Lane, Charles R.; Wu, Qiusheng; Anenkhonov, Oleg A.; Chepinoga, Victor V.; Autrey, Bradley C.; Liu, Hongxing
2018-01-01
Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and monitoring. Both pixel- and object-based classification approaches using parametric and non-parametric algorithms may be effectively used in describing wetland structure and habitat, but which approach should one select? We conducted both pixel- and object-based image analyses (OBIA) using parametric (Iterative Self-Organizing Data Analysis Technique, ISODATA, and maximum likelihood, ML) and non-parametric (random forest, RF) approaches in the Barguzin Valley, a large wetland (~500 km2) in the Lake Baikal, Russia, drainage basin. Four Quickbird multispectral bands plus various spatial and spectral metrics (e.g., texture, Non-Differentiated Vegetation Index, slope, aspect, etc.) were analyzed using field-based regions of interest sampled to characterize an initial 18 ISODATA-based classes. Parsimoniously using a three-layer stack (Quickbird band 3, water ratio index (WRI), and mean texture) in the analyses resulted in the highest accuracy, 87.9% with pixel-based RF, followed by OBIA RF (segmentation scale 5, 84.6% overall accuracy), followed by pixel-based ML (83.9% overall accuracy). Increasing the predictors from three to five by adding Quickbird bands 2 and 4 decreased the pixel-based overall accuracy while increasing the OBIA RF accuracy to 90.4%. However, McNemar’s chi-square test confirmed no statistically significant difference in overall accuracy among the classifiers (pixel-based ML, RF, or object-based RF) for either the three- or five-layer analyses. Although potentially useful in some circumstances, the OBIA approach requires substantial resources and user input (such as segmentation scale selection—which was found to substantially affect overall accuracy). Hence, we conclude that pixel-based RF approaches are likely satisfactory for classifying wetland-dominated landscapes. PMID:29707381
NASA Astrophysics Data System (ADS)
Poussot-Vassal, Charles; Tanelli, Mara; Lovera, Marco
The complexity of Information Technology (IT) systems is steadily increasing, and system complexity has been recognised as the main obstacle to further advancements of IT. This fact has recently raised energy management issues. Control techniques have been proposed and successfully applied to design Autonomic Computing systems, trading off system performance against energy-saving goals. As user behaviour is highly time-varying and workload conditions can change substantially within the same business day, the Linear Parametrically Varying (LPV) framework is particularly promising for modeling such systems. In this chapter, a control-theoretic method is proposed to investigate the trade-off between Quality of Service (QoS) requirements and energy-saving objectives for admission control in Web service systems, considering the server CPU frequency and the admission probability as control variables. To quantitatively evaluate the trade-off, a dynamic model of the admission control dynamics is estimated via LPV identification techniques. Based on this model, an optimisation problem is set up within the Model Predictive Control (MPC) framework, by means of which it is possible to investigate the optimal trade-off policy for managing QoS and energy-saving objectives at design time while taking the system dynamics explicitly into account.
Qiu, Cheng-Wei; Li, Le-Wei; Yeo, Tat-Soon; Zouhdi, Saïd
2007-02-01
A vector potential formulation and parametric studies of electromagnetic scattering by a sphere characterized by rotationally symmetric anisotropy are presented. Both the epsilon and mu tensors are considered, and four elementary parameters are utilized to specify the material properties of the structure. The field representations can be obtained in terms of two potentials, and the TE (TM) modes (with respect to r) inside (outside) the sphere can be derived and expressed in terms of a series of fractional-order (real or complex) Riccati-Bessel functions. The effects of the electric anisotropy ratio (Ae = epsilon_t/epsilon_r) and the magnetic anisotropy ratio (Am = mu_t/mu_r) on the radar cross section (RCS) are considered, and the hybrid effects of Ae and Am together are examined extensively. It is found that material anisotropy significantly affects the scattering behavior of three-dimensional dielectric objects. For absorbing spheres, however, Ae and Am no longer play as significant a role as in lossless dielectric spheres, and the anisotropic dependence of the RCS values is found to be predictable. The hybrid effects of Ae and Am are considered for absorbing spheres as well, and it is found that the RCS can be greatly reduced by controlling the material parameters.
Zheng, Yuanjie; Grossman, Murray; Awate, Suyash P; Gee, James C
2009-01-01
We propose to use the sparseness property of the gradient probability distribution to estimate the intensity nonuniformity in medical images, resulting in two novel automatic methods: a non-parametric method and a parametric method. Our methods are easy to implement because they both solve an iteratively re-weighted least squares problem. They are remarkably accurate as shown by our experiments on images of different imaged objects and from different imaging modalities.
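To make the "sparse gradients via iteratively re-weighted least squares" idea concrete, here is a 1D Python toy of the parametric variant: a smooth polynomial bias is fitted so that the gradient of the corrected signal is sparse (an L1 objective solved by iterative re-weighting). The basis, weights, and constants are illustrative assumptions, not the authors' implementation, which operates on full images.

```python
import numpy as np

def estimate_bias_irls(obs, degree=3, n_iter=20, eps=1e-6):
    """Estimate a smooth (polynomial) log-domain bias by making the corrected
    signal's gradient sparse, via iteratively re-weighted least squares."""
    n = len(obs)
    x = np.linspace(-1.0, 1.0, n)
    V = np.vander(x, degree + 1)             # polynomial basis for the bias
    D = np.diff(np.eye(n), axis=0)           # finite-difference (gradient) op
    A, g = D @ V, D @ obs                    # gradients of basis / of data
    w = np.ones(n - 1)
    for _ in range(n_iter):
        coef = np.linalg.lstsq(A * w[:, None], g * w, rcond=None)[0]  # weighted LS
        r = g - A @ coef
        w = 1.0 / np.sqrt(np.abs(r) + eps)   # re-weighting approximates L1
    return V @ coef                          # estimated log-bias field
```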
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR and PARMA), together with disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; ii) non-parametric models (examples are bootstrap/kernel-based methods such as k-nearest neighbor (k-NN) and matched block bootstrap (MABB), and non-parametric disaggregation models), which characterize the laws of chance describing the streamflow process without recourse to prior assumptions as to the form or structure of these laws; and iii) hybrid models, which blend parametric and non-parametric models advantageously to model streamflows effectively. Despite these developments in the stochastic modeling of streamflows over the last four decades, accurate prediction of storage and critical drought characteristics has posed a persistent challenge to the stochastic modeler. This is partly because the stochastic streamflow model parameters are usually estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation), and the efficacy of the models is subsequently validated based on the accuracy of prediction of the water-use characteristics, which requires a large number of trial simulations and inspection of many plots and tables; even then, accurate prediction of the storage and critical drought characteristics may not be ensured. In this study a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric model, PAR(1), and matched block bootstrap (MABB)) based on explicit objective functions: minimizing the relative bias and the relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching a multi-dimensional parameter space, involving simultaneous exploration of the parametric (PAR(1)) and non-parametric (MABB) components. This is achieved using an efficient evolutionary-search-based optimization tool, the non-dominated sorting genetic algorithm II (NSGA-II). This approach reduces the drudgery involved in manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model (with simultaneous exploration of the parametric and non-parametric components) yields a much better prediction of the storage capacity than the MLE-based hybrid models, in which hybrid model selection is done in two stages and probably results in a sub-optimal model. This framework can be extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales.
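NSGA-II's selection is built on Pareto dominance. The Python sketch below shows the core non-dominated filter for the two objectives named above (relative bias and relative RMSE of storage capacity), with synthetic costs standing in for model evaluations; it is the dominance test inside non-dominated sorting, not a full NSGA-II.

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated rows of `costs` (all objectives minimized)."""
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Row j dominates row i if it is no worse everywhere, better somewhere.
        dominates_i = (np.all(costs <= costs[i], axis=1)
                       & np.any(costs < costs[i], axis=1))
        keep[i] = not dominates_i.any()
    return np.flatnonzero(keep)

# Two objectives for a population of candidate hybrid-model parameter sets.
costs = np.abs(np.random.default_rng(2).normal(size=(100, 2)))
front = pareto_front(costs)
```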
Local tests of gravitation with Gaia observations of Solar System Objects
NASA Astrophysics Data System (ADS)
Hees, Aurélien; Le Poncin-Lafitte, Christophe; Hestroffer, Daniel; David, Pedro
2018-04-01
In these proceedings, we show how observations of Solar System objects with Gaia can be used to test General Relativity and to constrain modified gravitational theories. The high number of Solar System objects observed and the variety of their orbital parameters, combined with the impressive astrometric accuracy, will allow us to perform local tests of General Relativity. In this communication, we present a preliminary sensitivity study of the Gaia observations on dynamical parameters such as the Sun's quadrupole moment and on various extensions of General Relativity, such as the parametrized post-Newtonian parameters, the fifth-force formalism, and a violation of Lorentz symmetry parametrized by the Standard-Model Extension framework. We take into account the time sequences and the geometry of the observations that are particular to Gaia, for its nominal mission (5 years) and for an extended mission (10 years).
NASA Astrophysics Data System (ADS)
Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin
2018-06-01
Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance, and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time-consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built in an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time-consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
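The projection step at the heart of pMOR is compact enough to sketch. Assuming an affine parameter dependence (the simplest case; the adaptive basis enrichment described above is omitted), a Galerkin-reduced frequency sweep looks like this in Python. All matrices, parameter functions, and sizes are illustrative.

```python
import numpy as np

def reduced_response(K_blocks, theta_fns, f, V, params, freqs):
    """Galerkin-reduced response of (sum_q theta_q(p, w) * K_q) x = f.

    V : (n, r) reduced basis with r << n; project once offline, then every
    (parameter, frequency) query costs only an r-by-r solve online.
    """
    Kr = [V.conj().T @ K @ V for K in K_blocks]
    fr = V.conj().T @ f
    out = []
    for p in params:
        for w in freqs:
            A = sum(th(p, w) * Kq for th, Kq in zip(theta_fns, Kr))
            out.append(np.linalg.solve(A, fr))
    return np.array(out)

# Tiny demo: a 2-term affine system with one parameter and a frequency sweep.
n, r = 200, 8
rng = np.random.default_rng(5)
K0, K1 = np.diag(np.linspace(1.0, 2.0, n)), np.eye(n)
V = np.linalg.qr(rng.standard_normal((n, r)))[0]
x = reduced_response([K0, K1], [lambda p, w: 1.0, lambda p, w: p - w**2],
                     np.ones(n), V, params=[0.5, 1.0],
                     freqs=np.linspace(0.1, 1.0, 5))
```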
Solar Power System Options for the Radiation and Technology Demonstration Spacecraft
NASA Technical Reports Server (NTRS)
Kerslake, Thomas W.; Haraburda, Francis M.; Riehl, John P.
2000-01-01
The Radiation and Technology Demonstration (RTD) Mission has the primary objective of demonstrating high-power (10 kilowatts) electric thruster technologies in Earth orbit. This paper discusses the conceptual design of the RTD spacecraft photovoltaic (PV) power system and mission performance analyses. These power system studies assessed multiple options for PV arrays, battery technologies and bus voltage levels. To quantify performance attributes of these power system options, a dedicated Fortran code was developed to predict power system performance and estimate system mass. The low-thrust mission trajectory was analyzed and important Earth orbital environments were modeled. Baseline power system design options are recommended on the basis of performance, mass and risk/complexity. Important findings from parametric studies are discussed and the resulting impacts to the spacecraft design and cost.
Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah
2016-01-01
One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. A further objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as ridge regression [i.e., the genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick", exploited by kernel methods in the context of epistatic genetic architectures, over the parametric frameworks used by conventional methods. Several parametric and kernel methods (the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR), and RKHS regression) were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate, followed by GBLUP and LASSO. An R function has been developed that allows users to perform RR-BLUP of marker effects, GBLUP, and RKHS regression with a Gaussian, Laplacian, polynomial, or ANOVA kernel in reasonable computation time. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
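A minimal Python sketch of RKHS/kernel ridge regression with a Gaussian kernel follows (the KRMM package itself is R); with a linear kernel on marker genotypes this reduces to the GBLUP/ridge connection the study reviews, up to variance-component scaling. The data, bandwidth, and regularization constant here are illustrative assumptions.

```python
import numpy as np

def krr_fit_predict(X_train, y_train, X_test, lam=1.0, gamma=0.1):
    """Kernel ridge regression (RKHS regression) with a Gaussian kernel."""
    def gauss_k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    K = gauss_k(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(y_train)), y_train)
    return gauss_k(X_test, X_train) @ alpha

# e.g., 200 lines x 500 markers coded {0,1,2}, predicting a quantitative trait
rng = np.random.default_rng(3)
X = rng.integers(0, 3, size=(200, 500)).astype(float)
y = X[:, :10].sum(axis=1) + rng.normal(size=200)
y_hat = krr_fit_predict(X[:150], y[:150], X[150:])
```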
A Bayesian Alternative for Multi-objective Ecohydrological Model Specification
NASA Astrophysics Data System (ADS)
Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.
2015-12-01
Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical, and ecological processes in catchments, and are usually more complex and more heavily parametrized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, with the development of Markov chain Monte Carlo (MCMC) techniques, Bayesian inference has become one of the most popular tools for quantifying uncertainty in hydrological modeling. Our study aims to develop appropriate prior distributions and likelihood functions that minimize model uncertainty and bias within a Bayesian ecohydrological framework. A formal Bayesian approach is implemented in an ecohydrological model that combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations based on a single-objective likelihood (streamflow or LAI) and on multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative, and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and the corresponding posterior distributions, to examine parameter sensitivity. Results show that different prior distributions can strongly influence the posterior distributions of parameters, especially when the available data are limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits between cases based on multi-objective and single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration for different data types.
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is: what new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods? Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science with many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software; it is currently applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedule. Assessing and estimating the reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure times and other project data to provide earlier and more accurate estimates of system reliability.
Heating and thermal squeezing in parametrically driven oscillators with added noise.
Batista, Adriano A
2012-11-01
In this paper we report a theoretical model based on Green's functions, Floquet theory, and averaging techniques up to second order that describes the dynamics of parametrically driven oscillators with added thermal noise. Quantitative estimates for heating and quadrature thermal noise squeezing near and below the transition line of the first parametric instability zone of the oscillator are given. Furthermore, we give an intuitive explanation as to why heating and thermal squeezing occur. For small amplitudes of the parametric pump the Floquet multipliers are complex conjugates of each other with a constant magnitude. As the pump amplitude is increased past a threshold value in the stable zone near the first parametric instability, the two Floquet multipliers become real and have different magnitudes. This creates two different effective dissipation rates (one smaller and the other larger than the real dissipation rate) along the stable manifolds of the first-return Poincaré map. We also show that the statistical average of the input power due to thermal noise is constant and independent of the pump amplitude and frequency. The combination of these effects causes most of the heating and thermal squeezing. Very good agreement between analytical and numerical estimates of the thermal fluctuations is achieved.
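The Floquet-multiplier picture is easy to reproduce numerically: integrate a damped, parametrically pumped oscillator over one pump period to build the monodromy matrix and inspect its eigenvalues. The Python sketch below uses illustrative parameter values with a small detuning; scanning the pump amplitude shows the complex pair of constant magnitude collapsing onto the real axis into two multipliers of different magnitudes.

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_multipliers(eps, gamma=0.02, w0=1.0, w=1.05):
    """Floquet multipliers of x'' + gamma*x' + w0^2*(1 + eps*cos(2*w*t))*x = 0,
    from the monodromy matrix over one pump period T = pi/w."""
    T = np.pi / w
    def rhs(t, y):
        x, v = y
        return [v, -gamma * v - w0**2 * (1.0 + eps * np.cos(2.0 * w * t)) * x]
    M = np.empty((2, 2))
    for j, y0 in enumerate(np.eye(2)):       # columns of the monodromy matrix
        M[:, j] = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12).y[:, -1]
    return np.linalg.eigvals(M)

for eps in (0.05, 0.20, 0.30):               # weak to strong pump
    print(eps, floquet_multipliers(eps))
```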
Practical statistics in pain research.
Kim, Tae Kyun
2017-10-01
Pain is subjective, while the statistics related to pain research are objective. This review was written to help researchers involved in pain research make statistical decisions. The main issues relate to the levels of the scales often used in pain research, the choice between parametric and non-parametric statistical methods, and problems that arise from repeated measurements. In the field of pain research, parametric statistics have often been applied erroneously, which is closely related to the scales of the data and to repeated measurements. The levels of scales include nominal, ordinal, interval, and ratio scales, and the level of a scale affects the choice between parametric and non-parametric methods. In pain research, the most frequently used pain assessment scale is the ordinal scale, which includes the visual analogue scale (VAS). There is, however, another view that considers the VAS an interval or ratio scale, so that the use of parametric statistics is accepted in practice in some cases. Repeated measurements on the same subjects always complicate statistics: the measurements inevitably correlate with each other, which precludes the application of one-way ANOVA, for which independence between the measurements is necessary. Repeated-measures ANOVA (RM ANOVA), however, permits comparison between the correlated measurements as long as the sphericity assumption is satisfied. In conclusion, parametric statistical methods should be used only when the assumptions of parametric statistics, such as normality and sphericity, are established.
The Problem of Size in Robust Design
NASA Technical Reports Server (NTRS)
Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri
1997-01-01
To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single-objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life-cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems, however, as in the HSCT example, this robust design approach breaks down with the problem of size - combinatorial explosion in experimentation and model building with the number of variables - and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
NASA Astrophysics Data System (ADS)
Vasilieva, V. N.
2017-11-01
The article deals with the solution of AutoCAD problems offered in the "Computer graphics" section of the All-Russian student Olympiads that are not typical for students of construction specialties. Students are given the opportunity to study algorithms for solving original tasks of high complexity. The article shows how an unknown parameter underlying a construction can be determined using a parametric drawing with geometric constraints and dimensional dependencies. To optimize the mark-up operation, use of the command for projecting points and lines of different types onto bodies and surfaces in different directions is shown. For the construction of a spring with a varying pitch of turns, the paper describes the creation of a block from a part of the helix and its scaling, with unequal coefficients along the axes, when inserted into a model. The advantages of the NURBS surface and the application of the "body-surface-surface-NURBS-body" conversion are discussed as a way to extend the capabilities of both solid and surface modeling. The article introduces construction students to methods of constructing complex models in AutoCAD that are not similar to typical training assignments.
Method of the active contour for segmentation of bone systems on bitmap images
NASA Astrophysics Data System (ADS)
Vu, Hai Anh; Safonov, Roman A.; Kolesnikova, Anna S.; Kirillova, Irina V.; Kossovich, Leonid U.
2018-02-01
Within the framework of active contour methods, an approach is developed that extracts the contour of an object in an image during segmentation. The approach exceeds the parametric method in speed without conceding to it in accuracy: it extracts object contours with high accuracy and more quickly than the parametric active contour method.
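For contrast with the proposed approach, the classical parametric snake is available in scikit-image; a hedged usage sketch on a synthetic disk (standing in for a bone region on a bitmap slice) is shown below. Parameter values are illustrative, and coordinate conventions may vary across library versions.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Synthetic image: a smoothed bright disk on a dark background.
yy, xx = np.mgrid[0:200, 0:200]
img = gaussian(((yy - 100) ** 2 + (xx - 100) ** 2 < 60 ** 2).astype(float), 3)

# Circular initial contour around the object, given as (row, col) points.
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 90 * np.sin(s), 100 + 90 * np.cos(s)])

# Parametric snake: iteratively deforms `init` toward strong image edges.
snake = active_contour(img, init, alpha=0.015, beta=10.0, gamma=0.001)
```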
1985-08-01
in a typography system, the surface of a ship hull, or the skin of an airplane. To define objects such as these, higher order curve and surface... rate). Thus, a parametrization contains information about the geometry (the shape or image of the curve), the orientation, and the rate. [Figure 2.3 caption: each of the curves has the same image; they differ only in orientation and rate, with orientation indicated by arrowheads.]
Nonequilibrium Langevin approach to quantum optics in semiconductor microcavities
NASA Astrophysics Data System (ADS)
Portolan, S.; di Stefano, O.; Savasta, S.; Rossi, F.; Girlanda, R.
2008-01-01
Recently, the possibility of generating nonclassical polariton states by means of parametric scattering has been demonstrated. Excitonic polaritons propagate in a complex interacting environment and contain real electronic excitations subject to scattering events and noise affecting quantum coherence and entanglement. Here, we present a general theoretical framework for the realistic investigation of polariton quantum correlations in the presence of coherent and incoherent interaction processes. The proposed theoretical approach is based on the nonequilibrium quantum Langevin approach for open systems applied to interacting-electron complexes described within the dynamics controlled truncation scheme. It provides an easy recipe to calculate multitime correlation functions which are key quantities in quantum optics. As a first application, we analyze the buildup of polariton parametric emission in semiconductor microcavities including the influence of noise originating from phonon-induced scattering.
Complex Mapping of Aerofoils--A Different Perspective
ERIC Educational Resources Information Center
Matthews, Miccal T.
2012-01-01
In this article an application of conformal mapping to aerofoil theory is studied from a geometric and calculus point of view. The problem is suitable for undergraduate teaching in terms of a project or extended piece of work, and brings together the concepts of geometric mapping, parametric equations, complex numbers and calculus. The Joukowski…
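The mapping at the heart of the article is easy to reproduce: the Joukowski transform w = z + 1/z sends a circle that passes through z = 1 and encloses z = -1 to an aerofoil-shaped curve. A short Python sketch, with the circle offset chosen purely for illustration:

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 400)
center = -0.1 + 0.1j                      # offset controls thickness and camber
radius = abs(1.0 - center)                # force the circle through z = 1
z = center + radius * np.exp(1j * theta)  # parametric circle in the z-plane
w = z + 1.0 / z                           # aerofoil coordinates: (w.real, w.imag)
```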
NASA Technical Reports Server (NTRS)
Dash, S.; Delguidice, P. D.
1975-01-01
A parametric numerical procedure permitting the rapid determination of the performance of a class of scramjet nozzle configurations is presented. The geometric complexity of these configurations ruled out attempts to employ conventional nozzle design procedures. The numerical program developed permitted the parametric variation of cowl length, turning angles on the cowl and vehicle undersurface and lateral expansion, and was subject to fixed constraints such as the vehicle length and nozzle exit height. The program required uniform initial conditions at the burner exit station and yielded the location of all predominant wave zones, accounting for lateral expansion effects. In addition, the program yielded the detailed pressure distribution on the cowl, vehicle undersurface and fences, if any, and calculated the nozzle thrust, lift and pitching moments.
The role of temporo-parietal junction (TPJ) in global Gestalt perception.
Huberle, Elisabeth; Karnath, Hans-Otto
2012-07-01
Grouping processes enable the coherent perception of our environment. A number of brain areas have been suggested to be involved in the integration of elements into objects, including early and higher visual areas along the ventral visual pathway as well as motion-processing areas of the dorsal visual pathway. However, integration is required not only for the cortical representation of individual objects, but is also essential for the perception of more complex visual scenes consisting of several different objects and/or shapes. The present fMRI experiments aimed to address such integration processes. We investigated the neural correlates underlying the global Gestalt perception of hierarchically organized stimuli that allowed parametric degrading of the object at the global level. The comparison of intact versus disturbed perception of the global Gestalt revealed a network of cortical areas including the temporo-parietal junction (TPJ), anterior cingulate cortex and the precuneus. The TPJ location corresponds well with the areas known to be typically lesioned in stroke patients with simultanagnosia following bilateral brain damage. These patients typically show a deficit in identifying the global Gestalt of a visual scene. Further, we found the closest relation between behavioral performance and fMRI activation for the TPJ. Our data thus argue for a significant role of the TPJ in human global Gestalt perception.
Changing space and sound: Parametric design and variable acoustics
NASA Astrophysics Data System (ADS)
Norton, Christopher William
This thesis examines the potential for parametric design software to create performance-based designs using acoustic metrics as the design criteria. A former soundstage at the University of Southern California used by the Thornton School of Music serves as a case study for a multiuse space for orchestral, percussion, master class, and recital use. The criteria for each programmatic use include reverberation time, bass ratio, and the early-energy ratios of the clarity index and objective support. Using a panelized ceiling as a design element to vary volume, panel orientation, and the type of absorptive material, the relationships between these parameters and the design criteria are explored. These relationships, and the equations subsequently derived from them, are applied in the Grasshopper parametric modeling software for Rhino 3D (a NURBS modeling software). Using the target reverberation time and bass ratio for each programmatic use as input to the parametric model, the evolutionary solver of Grasshopper, Galapagos, is run to identify the optimum ceiling geometry and material distribution.
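Two of the named criteria are standard formulas and can be sketched directly: Sabine's reverberation time RT60 = 0.161 V / A (SI units) and the bass ratio formed from octave-band reverberation times. The Python sketch below uses a hypothetical panel configuration; the volume and absorption figures are illustrative, not taken from the case-study room.

```python
def rt60_sabine(volume_m3: float, absorption_m2_sabins: float) -> float:
    """Sabine reverberation time: RT60 = 0.161 * V / A (seconds, SI units)."""
    return 0.161 * volume_m3 / absorption_m2_sabins

def bass_ratio(rt: dict) -> float:
    """Bass ratio from octave-band reverberation times (keys in Hz)."""
    return (rt[125] + rt[250]) / (rt[500] + rt[1000])

# Hypothetical ceiling state: total absorption per octave band (m^2 sabins).
V = 5000.0
A = {125: 450.0, 250: 500.0, 500: 550.0, 1000: 600.0}
rt = {f: rt60_sabine(V, a) for f, a in A.items()}
print(rt, bass_ratio(rt))
```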
flexsurv: A Platform for Parametric Survival Modeling in R
Jackson, Christopher H.
2018-01-01
flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
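flexsurv itself is R; as a language-neutral illustration of the core computation it performs (maximizing a full log-likelihood in which events contribute the density and right-censored times contribute the survivor function), here is a Python sketch fitting a Weibull model. The data are synthetic and this is not the package's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_loglik(params, t, event):
    """Full log-likelihood for right-censored Weibull(shape k, scale lam)."""
    k, lam = np.exp(params)                       # optimize on the log scale
    logf = np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k
    logS = -((t / lam) ** k)
    return np.sum(np.where(event == 1, logf, logS))

rng = np.random.default_rng(4)
t_true = rng.weibull(1.5, size=300) * 10.0        # latent event times
c = rng.uniform(0, 15, size=300)                  # censoring times
t, event = np.minimum(t_true, c), (t_true <= c).astype(int)

res = minimize(lambda p: -weibull_loglik(p, t, event), x0=[0.0, 0.0])
shape, scale = np.exp(res.x)                      # fitted Weibull parameters
```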
On the Way to Appropriate Model Complexity
NASA Astrophysics Data System (ADS)
Höge, M.
2016-12-01
When statistical models are used to represent natural phenomena they are often too simple or too complex - this is known. But what exactly is model complexity? Among many other definitions, the complexity of a model can be conceptualized as a measure of the statistical dependence between observations and parameters (Van der Linde, 2014). However, several issues remain when working with model complexity: a unique definition of model complexity is missing; assuming a definition is accepted, how can model complexity be quantified; and how can a quantified complexity be used to improve modeling? Generally defined, "complexity is a measure of the information needed to specify the relationships between the elements of organized systems" (Bawden & Robinson, 2015). The complexity of a system changes as knowledge about the system changes. For models this means that complexity is not a static concept: with more data or higher spatio-temporal resolution of parameters, the complexity of a model changes. There are essentially three categories into which all commonly used complexity measures can be classified: (1) an explicit representation of model complexity as the "degrees of freedom" of a model, e.g. the effective number of parameters; (2) model complexity as code length, a.k.a. "Kolmogorov complexity": the longer the shortest model code, the higher the complexity (e.g. in bits); (3) complexity defined via the information entropy of parametric or predictive uncertainty. Preliminary results show that Bayes' theorem allows all parts of the non-static concept of model complexity, such as data quality and quantity or parametric uncertainty, to be incorporated. We therefore test how different approaches to measuring model complexity perform in comparison to a fully Bayesian model selection procedure. Ultimately, we want to find a measure that helps to assess the most appropriate model.
Mazzotta, Laura; Cozzani, Mauro; Mutinelli, Sabrina; Castaldo, Attilio; Silvestrini-Biavati, Armando
2013-01-01
Objectives. To build a 3D parametric model to detect the shape and volume of dental roots from a panoramic radiograph (PAN) of the patient. Materials and Methods. A PAN and a cone beam computed tomography (CBCT) of a patient were acquired. For each tooth, various parameters were considered (coronal and root lengths and widths): these were measured from the CBCT and from the PAN. Measures were compared to evaluate the accuracy level of PAN measurements. By using a CAD software, parametric models of an incisor and of a molar were constructed employing B-spline curves and free-form surfaces. PAN measures of teeth 2.1 and 3.6 were assigned to the parametric models; the same two teeth were segmented from CBCT. The two models were superimposed to assess the accuracy of the parametric model. Results. The PAN measurements proved accurate and comparable with all other measurements. From the model superimposition, the maximum error was 1.1 mm on the incisor crown and 2 mm on the molar furcation. Conclusion. This study shows that it is possible to build a 3D parametric model starting from 2D information with a clinically valid accuracy level. This can ultimately lead to a crown-root movement simulation.
Pinching parameters for open (super) strings
NASA Astrophysics Data System (ADS)
Playle, Sam; Sciuto, Stefano
2018-02-01
We present an approach to the parametrization of (super) Schottky space obtained by sewing together three-punctured discs with strips. Different cubic ribbon graphs classify distinct sets of pinching parameters; we show how they are mapped onto each other. The parametrization is particularly well-suited to describing the region within (super) moduli space where open bosonic or Neveu-Schwarz string propagators become very long and thin, which dominates the IR behaviour of string theories. We show how worldsheet objects such as the Green's function converge to graph-theoretic objects such as the Symanzik polynomials in the α′ → 0 limit, allowing us to see how string theory reproduces the sum over Feynman graphs. The (super) string measure takes on a simple and elegant form when expressed in terms of these parameters.
Quantum illumination with Gaussian states.
Tan, Si-Hui; Erkmen, Baris I; Giovannetti, Vittorio; Guha, Saikat; Lloyd, Seth; Maccone, Lorenzo; Pirandola, Stefano; Shapiro, Jeffrey H
2008-12-19
An optical transmitter irradiates a target region containing a bright thermal-noise bath in which a low-reflectivity object might be embedded. The light received from this region is used to decide whether the object is present or absent. The performance achieved using a coherent-state transmitter is compared with that of a quantum-illumination transmitter, i.e., one that employs the signal beam obtained from spontaneous parametric down-conversion. By making the optimum joint measurement on the light received from the target region together with the retained spontaneous parametric down-conversion idler beam, the quantum-illumination system realizes a 6 dB advantage in the error-probability exponent over the optimum reception coherent-state system. This advantage accrues despite there being no entanglement between the light collected from the target region and the retained idler beam.
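For scale, the quoted 6 dB advantage is a statement about the error-probability exponent. With M transmitted signal-idler mode pairs and Chernoff-style bounds (notation schematic; the factor-of-four relation holds in the low-signal-brightness, high-noise regime analyzed in this line of work):

$$\Pr(\text{error}) \le \tfrac{1}{2}\, e^{-M E}, \qquad E_{\text{QI}} = 4\, E_{\text{coherent}}, \qquad 10 \log_{10} 4 \approx 6\ \text{dB}.$$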
Parametric Modeling as a Technology of Rapid Prototyping in Light Industry
NASA Astrophysics Data System (ADS)
Tomilov, I. N.; Grudinin, S. N.; Frolovsky, V. D.; Alexandrov, A. A.
2016-04-01
The paper deals with a parametric modeling method for virtual mannequins for the purposes of design automation in the clothing industry. The described approach includes the steps of generating the basic model based on the initial one (obtained in a 3D-scanning process), its parameterization and deformation. The complex surfaces are represented by a wireframe model. The modeling results are evaluated with a set of similarity factors. Deformed models are compared with their virtual prototypes. The results of modeling are estimated by the standard deviation factor.
1989-07-31
The objective was to assess the feasibility of developing cost estimating relationships (CERs) based on data from the Army Operating and Support Management Information System (OSMIS). The long-range objective is to develop methods to determine total operating and support (O&S) costs within life-cycle cost...
On the Nature of Syntactic Variation: Evidence from Complex Predicates and Complex Word-Formation.
ERIC Educational Resources Information Center
Snyder, William
2001-01-01
Provides evidence from child language acquisition and comparative syntax for existence of a syntactic parameter in the classical sense of Chomsky (1981), with simultaneous effects on syntactic argument structure. Implications are that syntax is subject to points of substantive parametric variation as envisioned in Chomsky, and the time course of…
The composition of M-type asteroids: Synthesis of spectroscopic and radar observations
NASA Astrophysics Data System (ADS)
Neeley, J. R.; Ockert-Bell, M. E.; Clark, B. E.; Shepard, M. K.; Cloutis, E. A.; Fornasier, S.; Bus, S. J.
2011-10-01
This work updates and expands our long-term radar-driven observational campaign of 27 main-belt asteroids (MBAs) focused on Bus-DeMeo Xc- and Xk-type objects (Tholen X and M class asteroids) using the Arecibo radar and the NASA Infrared Telescope Facility (IRTF). Seventeen of our targets were near-simultaneously observed with radar and those observations are described in a companion paper (Shepard et al., 2010). We utilized visible-wavelength data for a more complete compositional analysis of our targets. Compositional evidence is derived from our target asteroid spectra using three different methods: 1) a χ2 search for spectral matches in the RELAB database, 2) parametric comparisons with meteorites and 3) linear discriminant analysis. This paper synthesizes the results of the RELAB search, parametric comparisons, and linear discriminant analysis with compositional suggestions based on radar observations. We find that for six of seventeen targets with radar data, our spectral results are consistent with their radar analog (16 Psyche, 21 Lutetia, 69 Hesperia, 135 Hertha, 216 Kleopatra, and 497 Iva). For twenty out of twenty-seven objects our statistical comparisons with RELAB meteorites result in consistent analog identification, providing a degree of confidence in our parametric methods.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.
NASA Astrophysics Data System (ADS)
Echeverria, Alex; Silva, Jorge F.; Mendez, Rene A.; Orchard, Marcos
2016-10-01
Context. The best precision that can be achieved to estimate the location of a stellar-like object is a topic of permanent interest in the astrometric community. Aims: We analyze bounds for the best position estimation of a stellar-like object on a CCD detector array in a Bayesian setting where the position is unknown, but where we have access to a prior distribution. In contrast to a parametric setting where we estimate a parameter from observations, the Bayesian approach estimates a random object (i.e., the position is a random variable) from observations that are statistically dependent on the position. Methods: We characterize the Bayesian Cramér-Rao (CR) bound on the minimum mean square error (MMSE) of the best estimator of the position of a point source on a linear CCD-like detector, as a function of the properties of the detector, the source, and the background. Results: We quantify and analyze the increase in astrometric performance from the use of a prior distribution of the object position, which is not available in the classical parametric setting. This gain is shown to be significant for various observational regimes, in particular in the case of faint objects or when the observations are taken under poor conditions. Furthermore, we present numerical evidence that the MMSE estimator of this problem tightly achieves the Bayesian CR bound. This is a remarkable result, demonstrating that all the performance gains presented in our analysis can be achieved with the MMSE estimator. Conclusions: The Bayesian CR bound can be used as a benchmark indicator of the expected maximum positional precision of a set of astrometric measurements in which prior information can be incorporated. This bound can be achieved through the conditional mean estimator, in contrast to the parametric case where no unbiased estimator precisely reaches the CR bound.
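In standard (Van Trees) form, the Bayesian CR bound adds a prior-information term to the expected Fisher information of the data (one-dimensional notation; x is the source position):

$$\mathrm{MMSE} \;\ge\; \Big( \mathbb{E}_x\big[ J_{\text{data}}(x) \big] + J_{\text{prior}} \Big)^{-1}, \qquad J_{\text{prior}} = \mathbb{E}\!\left[ \Big( \frac{\partial \ln p(x)}{\partial x} \Big)^{\!2} \right],$$

which makes the gain over the parametric case explicit: J_prior is strictly positive whenever the prior is informative.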
Sengupta Chattopadhyay, Amrita; Hsiao, Ching-Lin; Chang, Chien Ching; Lian, Ie-Bin; Fann, Cathy S J
2014-01-01
Identifying susceptibility genes that influence complex diseases is extremely difficult because loci often influence the disease state through genetic interactions. Numerous approaches to detect disease-associated SNP-SNP interactions have been developed, but none consistently generates high-quality results under different disease scenarios. Using summarizing techniques to combine a number of existing methods may provide a solution to this problem. Here we used three popular non-parametric methods - Gini, absolute probability difference (APD), and entropy - to develop two novel summary scores, namely the principal component score (PCS) and the Z-sum score (ZSS), with which to predict disease-associated genetic interactions. We used a simulation study to compare performance of the non-parametric scores, the summary scores, the scaled-sum score (SSS; used in polymorphism interaction analysis (PIA)), and multifactor dimensionality reduction (MDR). The non-parametric methods achieved high power, but no non-parametric method outperformed all others under a variety of epistatic scenarios. PCS and ZSS, however, outperformed MDR. PCS, ZSS and SSS displayed controlled type-I errors (<0.05) compared to GS, APDS, ES (>0.05). A real data study using the genetic-analysis-workshop 16 (GAW 16) rheumatoid arthritis dataset identified a number of interesting SNP-SNP interactions.
NASA Astrophysics Data System (ADS)
Goger, Brigitta; Rotach, Mathias W.; Gohm, Alexander; Fuhrer, Oliver; Stiperski, Ivana; Holtslag, Albert A. M.
2018-02-01
The correct simulation of the atmospheric boundary layer (ABL) is crucial for reliable weather forecasts in truly complex terrain. However, common assumptions for model parametrizations are only valid for horizontally homogeneous and flat terrain. Here, we evaluate the turbulence parametrization of the numerical weather prediction model COSMO with a horizontal grid spacing of Δ x = 1.1 km for the Inn Valley, Austria. The long-term, high-resolution turbulence measurements of the i-Box measurement sites provide a useful data pool of the ABL structure in the valley and on slopes. We focus on days and nights when ABL processes dominate and a thermally-driven circulation is present. Simulations are performed for case studies with both a one-dimensional turbulence parametrization, which only considers the vertical turbulent exchange, and a hybrid turbulence parametrization, also including horizontal shear production and advection in the budget of turbulence kinetic energy (TKE). We find a general underestimation of TKE by the model with the one-dimensional turbulence parametrization. In the simulations with the hybrid turbulence parametrization, the modelled TKE has a more realistic structure, especially in situations when the TKE production is dominated by shear related to the afternoon up-valley flow, and during nights, when a stable ABL is present. The model performance also improves for stations on the slopes. An estimation of the horizontal shear production from the observation network suggests that three-dimensional effects are a relevant part of TKE production in the valley.
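Schematically, the difference between the two schemes sits in which production terms of the TKE budget are retained (standard boundary-layer notation; e is TKE, B buoyancy production, T transport, ε dissipation):

$$\frac{\partial \bar{e}}{\partial t} + \underbrace{\text{advection}}_{\text{hybrid only}} = \underbrace{-\,\overline{u'w'}\frac{\partial \bar{u}}{\partial z} - \overline{v'w'}\frac{\partial \bar{v}}{\partial z}}_{\text{vertical shear (1-D scheme)}} + \underbrace{P_{\text{horiz}}}_{\text{horizontal shear (hybrid only)}} + B + T - \varepsilon,$$

where P_horiz collects the shear-production terms involving horizontal velocity gradients that a purely one-dimensional parametrization discards.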
Kral, L
2007-05-01
We present a complex stabilization and control system for a commercially available optical parametric oscillator. The system is able to stabilize the oscillator's output wavelength at a narrow spectral line of atomic iodine with subpicometer precision, allowing utilization of this solid-state parametric oscillator as the front end of a high-power photodissociation laser chain formed by iodine gas amplifiers. In such a setup, a precise wavelength matching between the front end and the amplifier chain is necessary due to the extremely narrow spectral lines of the gaseous iodine (approximately 20 pm). The system is based on a personal computer, a heated iodine cell, and a few other low-cost components. It automatically identifies the proper peak within the iodine absorption spectrum, and then keeps the oscillator tuned to this peak with high precision and reliability. The use of the solid-state oscillator as the front end allows us to use the whole iodine laser system as a pump laser for optical parametric chirped pulse amplification, as it enables precise time synchronization with a signal Ti:sapphire laser.
Automation Hooks Architecture Trade Study for Flexible Test Orchestration
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Maclean, John R.; Graffagnino, Frank J.; McCartney, Patrick A.
2010-01-01
We describe the conclusions of a technology and communities survey supported by concurrent and follow-on proof-of-concept prototyping to evaluate the feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble, tear down, and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on the integration of three recognized technologies that are currently gaining acceptance within the test industry and that, when combined, provide a simple, open and scalable test orchestration architecture that addresses the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented RESTful Web Services, and XML data transfer formats based on Automatic Test Markup Language (ATML). This open-source standards-based approach provides direct integration with existing commercial off-the-shelf (COTS) analysis software tools.
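To make the discovery-plus-retrieval pattern concrete, here is a minimal sketch. The service type `_testasset._tcp.local.`, the `/status` resource, and the port layout are hypothetical; the `zeroconf` and `requests` Python packages stand in for the paper's mDNS and RESTful layers.

```python
# Sketch: discover test assets via mDNS, then query each over a REST interface.
import time
import requests
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

class AssetListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        if info and info.addresses:
            addr = ".".join(str(b) for b in info.addresses[0])  # packed IPv4 -> dotted
            # Retrieve status from the discovered asset's (hypothetical) REST resource.
            reply = requests.get(f"http://{addr}:{info.port}/status", timeout=2)
            print(name, "->", reply.status_code)

    def remove_service(self, zc, type_, name): pass
    def update_service(self, zc, type_, name): pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_testasset._tcp.local.", AssetListener())
time.sleep(5)  # browse for a few seconds, then shut down
zc.close()
```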
Parametric Inlet Tested in Glenn's 10- by 10-Foot Supersonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Slater, John W.; Davis, David O.; Solano, Paul A.
2005-01-01
The Parametric Inlet is an innovative concept for the inlet of a gas-turbine propulsion system for supersonic aircraft. The concept approaches the performance of past inlet concepts, but with less mechanical complexity, lower weight, and greater aerodynamic stability and safety. Potential applications include supersonic cruise aircraft and missiles. The Parametric Inlet uses tailored surfaces to turn the incoming supersonic flow inward toward an axis of symmetry. The terminal shock spans the opening of the subsonic diffuser leading to the engine. The external cowl area is smaller, which reduces cowl drag. The use of only external supersonic compression avoids inlet unstart--an unsafe shock instability present in previous inlet designs that use internal supersonic compression. This eliminates the need for complex mechanical systems to control unstart, which reduces weight. The conceptual design was conceived by TechLand Research, Inc. (North Olmsted, OH), which received funding through NASA's Small-Business Innovation Research program. The Boeing Company (Seattle, WA) also participated in the conceptual design. The NASA Glenn Research Center became involved starting with the preliminary design of a model for testing in Glenn's 10- by 10-Foot Supersonic Wind Tunnel (10×10 SWT). The inlet was sized for a speed of Mach 2.35 while matching requirements of an existing cold pipe used in previous inlet tests. The parametric aspects of the model included interchangeable components for different cowl lip, throat slot, and sidewall leading-edge shapes and different vortex generator configurations. Glenn researchers used computational fluid dynamics (CFD) tools for three-dimensional, turbulent flow analysis to further refine the aerodynamic design.
Comparison of four approaches to a rock facies classification problem
Dubois, M.K.; Bohling, Geoffrey C.; Chakrabarti, S.
2007-01-01
In this study, seven classifiers based on four different approaches were tested in a rock facies classification problem: classical parametric methods using Bayes' rule, and non-parametric methods using fuzzy logic, k-nearest neighbor, and a feed-forward back-propagating artificial neural network. The objective was to determine the most effective classifier for geologic facies prediction in wells without cores in the Panoma gas field in Southwest Kansas. Study data include 3600 samples with known rock facies class (from core), with each sample having either four or five measured properties (wire-line log curves) and two derived geologic properties (geologic constraining variables). The sample set was divided into two subsets, one for training and one for testing the ability of the trained classifier to correctly assign classes. Artificial neural networks clearly outperformed all other classifiers and are effective tools for this particular classification problem. Classical parametric models were inadequate due to the nature of the predictor variables (high dimensional and not linearly correlated) and the feature space of the classes (overlapping). The other non-parametric methods tested, k-nearest neighbor and fuzzy logic, would need considerable improvement to match the neural network effectiveness, but further work, possibly combining certain aspects of the three non-parametric methods, may be justified.
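A minimal sketch of this kind of bake-off (synthetic stand-in features, not the Panoma well logs; class and feature counts are illustrative):

```python
# Sketch: compare a parametric Bayes classifier against non-parametric kNN
# and a feed-forward ANN on the same train/test split.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB          # classical parametric (Bayes' rule)
from sklearn.neighbors import KNeighborsClassifier  # non-parametric
from sklearn.neural_network import MLPClassifier    # feed-forward back-propagating ANN

X, y = make_classification(n_samples=3600, n_features=7, n_informative=5,
                           n_classes=3, n_clusters_per_class=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for name, clf in [("Bayes", GaussianNB()),
                  ("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000,
                                        random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: held-out accuracy = {clf.score(X_te, y_te):.3f}")
```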
Geometry of Quantum Computation with Qudits
Luo, Ming-Xing; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun
2014-01-01
The circuit complexity of quantum qubit system evolution as a primitive problem in quantum computation has been discussed widely. We investigate this problem in terms of qudit systems. Using Riemannian geometry, the optimal quantum circuits are equivalent to geodesic evolutions in a specially curved parametrization of SU(d^n), and the quantum circuit complexity depends explicitly on a controllable approximation error bound.
Yu, Wenbao; Park, Taesung
2014-01-01
It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of the AUC, with no work using a parametric AUC-based approach for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), which is a parametric method used for obtaining a linear combination for maximizing the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, which is used in a low-dimensional context, into a regression framework and thus apply the penalized regression approach directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach can avoid some of the difficulties of a conventional non-parametric AUC-based approach, such as the lack of an appropriate concave objective function and a prudent choice of the smoothing parameter. We apply the proposed AucPR to gene selection and classification using four real microarray datasets and synthetic data. Through numerical studies, AucPR is shown to perform better than penalized logistic regression and the non-parametric AUC-based method, in the sense of AUC and sensitivity for a given specificity, particularly when there are many correlated genes. We propose a powerful parametric and easily implementable linear classifier, AucPR, for gene selection and disease prediction for high-dimensional data. AucPR is recommended for its good prediction performance. Besides gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
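On one plausible reading, the classical parametric AUC maximizer referred to is the binormal form (class means μ1, μ0 and covariances Σ1, Σ0; Φ the standard normal CDF):

$$\mathrm{AUC}(\beta) = \Phi\!\left( \frac{\beta^{\top}(\mu_1 - \mu_0)}{\sqrt{\beta^{\top}(\Sigma_1 + \Sigma_0)\,\beta}} \right),$$

with AucPR, as described, recasting its maximization as a regression problem and adding a lasso or elastic-net penalty on β.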
Nardelli, M; Del Piccolo, L; Danzi, Op; Perlini, C; Tedeschi, F; Greco, A; Scilingo, Ep; Valenza, G
2017-07-01
Empathic doctor-patient communication has been associated with improved psycho-physiological well-being involving cardiovascular and neuroendocrine responses. Nevertheless, a comprehensive assessment of heartbeat linear and nonlinear/complex dynamics throughout the communication of a life-threatening disease has not yet been performed. To this extent, we here study heart rate variability (HRV) series gathered from 17 subjects while watching a video where an oncologist discloses the diagnosis of a cancer metastasis to a patient. A further 17 subjects watched the same video including additional affective empathic contents. For the assessment of the two groups, linear heartbeat dynamics was quantified through measures defined in the time and frequency domains, whereas nonlinear/complex dynamics referred to measures of entropy, and combined Lagged Poincaré Plots (LPP) and symbolic analyses. Considering differences between the beginning and the end of the video, results from non-parametric statistical tests demonstrated that the group watching empathic contents showed HRV changes in the LF/HF ratio exclusively. Conversely, the group watching the purely informative video showed changes in vagal activity (i.e., HF power), the LF/HF ratio, as well as LPP measures. Additionally, a Support Vector Machine algorithm including HRV nonlinear/complex information was able to automatically discern between groups with an accuracy of 76.47%. We therefore propose the use of heartbeat nonlinear/complex dynamics to objectively assess the empathy level of healthy women.
Reentry-Vehicle Shape Optimization Using a Cartesian Adjoint Method and CAD Geometry
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2006-01-01
Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (e.g., geometric parameters that control the shape). Classic aerodynamic applications of gradient-based optimization include the design of cruise configurations for transonic and supersonic flow, as well as the design of high-lift systems. Cartesian mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric computer-aided design (CAD). In previous work on Cartesian adjoint solvers, Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the two-dimensional Euler equations using a ghost-cell method to enforce the wall boundary conditions. In Refs. 18 and 19, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm were the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The accuracy of the gradient computation was verified using several three-dimensional test cases, which included design variables such as the free stream parameters and the planform shape of an isolated wing. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Factors under consideration include the computation of mesh sensitivities that provide a reliable approximation of the objective function gradient, as well as the computation of surface shape sensitivities based on a direct-CAD interface. We present detailed gradient verification studies and then focus on a shape optimization problem for an Apollo-like reentry vehicle. The goal of the optimization is to enhance the lift-to-drag ratio of the capsule by modifying the shape of its heat-shield in conjunction with a center-of-gravity (c.g.) offset. This multipoint and multi-objective optimization problem is used to demonstrate the overall effectiveness of the Cartesian adjoint method for addressing the issues of complex aerodynamic design.
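For readers new to the adjoint method, the cost independence from the number of design variables comes from solving one adjoint system per objective (schematic notation; R(Q, D) = 0 is the discrete flow residual, Q the flow state, D the design variables, and sign conventions vary):

$$\Big( \frac{\partial R}{\partial Q} \Big)^{\!\top} \psi = \frac{\partial J}{\partial Q}, \qquad \frac{dJ}{dD} = \frac{\partial J}{\partial D} - \psi^{\top} \frac{\partial R}{\partial D},$$

so each additional design variable costs only one extra partial-derivative product, not an extra flow solution.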
Simple heterogeneity parametrization for sea surface temperature and chlorophyll
NASA Astrophysics Data System (ADS)
Skákala, Jozef; Smyth, Timothy J.
2016-06-01
Using satellite maps, this paper offers a comprehensive analysis of chlorophyll and SST heterogeneity in the shelf seas around the southwest of the UK. The heterogeneity scaling follows a simple power law and is consequently parametrized by two parameters. It is shown that in most cases these two parameters vary only relatively little with time. The paper offers a detailed comparison of field heterogeneity between different regions. It is also determined how much heterogeneity in each region is preserved in the annual median data. The paper explicitly demonstrates how one can use these results to calculate a representative measurement area for in situ networks.
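On one plausible reading of the two-parameter form (H a heterogeneity measure evaluated at spatial scale L; c and β the fitted parameters):

$$H(L) = c\, L^{\beta},$$

so that reporting the prefactor c and the exponent β fully summarizes the heterogeneity scaling of a region.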
Generating Three-Dimensional Surface Models of Solid Objects from Multiple Projections.
1982-10-01
volume descriptions. The surface models are composed of curved, topologically rectangular, parametric patches. The data required to define these patches... geometry directly from image data. This method generates 3D surface descriptions of only those parts of the object that are illuminated by the projected... objects. Generation of such models inherently requires the acquisition and analysis of 3D surface data. In this context, acquisition refers to the...
NASA Astrophysics Data System (ADS)
Bekkouche, Toufik; Bouguezel, Saad
2018-03-01
We propose a real-to-real image encryption method. It is a double random amplitude encryption method based on the parametric discrete Fourier transform coupled with chaotic maps to perform the scrambling. The main idea behind this method is the introduction of a complex-to-real conversion by exploiting the inherent symmetry property of the transform in the case of real-valued sequences. This conversion allows the encrypted image to be real-valued instead of being a complex-valued image as in all existing double random phase encryption methods. The advantage is to store or transmit only one image instead of two images (real and imaginary parts). Computer simulation results and comparisons with the existing double random amplitude encryption methods are provided for peak signal-to-noise ratio, correlation coefficient, histogram analysis, and key sensitivity.
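The symmetry property being exploited is the Hermitian redundancy of the DFT of a real sequence: for real-valued x[n] of length N,

$$X[N-k] = X^{*}[k], \qquad k = 1, \ldots, N-1,$$

so the N complex coefficients carry only N independent real values. The complex-to-real conversion packs the encrypted data into exactly that many real samples, which is why a single real-valued image can be stored instead of separate real and imaginary parts.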
Gras, Laure-Lise; Mitton, David; Crevier-Denoix, Nathalie; Laporte, Sébastien
2012-01-01
Most recent finite element models that represent muscles are generic or subject-specific models that use complex constitutive laws. Identification of the parameters of such complex constitutive laws can be an important limitation for subject-specific approaches. The aim of this study was to assess the possibility of modelling muscle behaviour in compression with a parametric model and a simple constitutive law. A quasi-static compression test was performed on the muscles of dogs. A parametric finite element model was designed using a linear elastic constitutive law. A multi-variate analysis was performed to assess the effects of geometry on muscle response. An inverse method was used to identify Young's modulus. The non-linear response of the muscles was obtained using a subject-specific geometry and a linear elastic law. Thus, a simple muscle model can be used to obtain a bio-faithful biomechanical response.
Stress Recovery and Error Estimation for Shell Structures
NASA Technical Reports Server (NTRS)
Yazdani, A. A.; Riggs, H. R.; Tessler, A.
2000-01-01
The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two-dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.
Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.
2012-01-01
Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remain poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of every-day, real-world action sounds.
Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.
2016-01-01
In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration (High Freq) performed similarly to non-parametric methods, but had the highest recall values, suggesting that this method could be employed for automatic tremor detection.
Connectionist model-based stereo vision for telerobotics
NASA Technical Reports Server (NTRS)
Hoff, William; Mathis, Donald
1989-01-01
Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
Parametrization of DFTB3/3OB for Magnesium and Zinc for Chemical and Biological Applications
2015-01-01
We report the parametrization of the approximate density functional theory, DFTB3, for magnesium and zinc for chemical and biological applications. The parametrization strategy follows that established in previous work that parametrized several key main group elements (O, N, C, H, P, and S). This 3OB set of parameters can thus be used to study many chemical and biochemical systems. The parameters are benchmarked using both gas-phase and condensed-phase systems. The gas-phase results are compared to DFT (mostly B3LYP), ab initio (MP2 and G3B3), and PM6, as well as to a previous DFTB parametrization (MIO). The results indicate that DFTB3/3OB is particularly successful at predicting structures, including rather complex dinuclear metalloenzyme active sites, while being semiquantitative (with a typical mean absolute deviation (MAD) of ∼3–5 kcal/mol) for energetics. Single-point calculations with high-level quantum mechanics (QM) methods generally lead to very satisfying (a typical MAD of ∼1 kcal/mol) energetic properties. DFTB3/MM simulations for solution and two enzyme systems also lead to encouraging structural and energetic properties in comparison to available experimental data. The remaining limitations of DFTB3, such as the treatment of interaction between metal ions and highly charged/polarizable ligands, are also discussed.
Transfer pricing in hospitals and efficiency of physicians: the case of anesthesia services.
Kuntz, Ludwig; Vera, Antonio
2005-01-01
The objective is to investigate theoretically and empirically how the efficiency of the physicians involved in anesthesia and surgery can be optimized by the introduction of transfer pricing for anesthesia services. The anesthesiology data of approximately 57,000 operations carried out at the University Hospital Hamburg-Eppendorf (UKE) in Germany in the period from 2000 to 2002 are analyzed using parametric and non-parametric methods. The principal finding of the empirical analysis is that the efficiency of the physicians involved in anesthesia and surgery at the UKE improved after the introduction of transfer pricing.
Malyarenko, Dariya; Fedorov, Andriy; Bell, Laura; Prah, Melissa; Hectors, Stefanie; Arlinghaus, Lori; Muzi, Mark; Solaiyappan, Meiyappan; Jacobs, Michael; Fung, Maggie; Shukla-Dave, Amita; McManus, Kevin; Boss, Michael; Taouli, Bachir; Yankeelov, Thomas E; Quarles, Christopher Chad; Schmainda, Kathleen; Chenevert, Thomas L; Newitt, David C
2018-01-01
This paper reports on the results of a multisite collaborative project launched by the MRI subgroup of the Quantitative Imaging Network to assess current capability and provide future guidelines for generating standard parametric diffusion-map Digital Imaging and Communications in Medicine (DICOM) objects in clinical trials that utilize quantitative diffusion-weighted imaging (DWI). Participating sites used a multivendor DWI DICOM dataset of a single phantom to generate parametric maps (PMs) of the apparent diffusion coefficient (ADC) based on two models. The results were evaluated for numerical consistency among models and with the true phantom ADC values, as well as for consistency of metadata with attributes required by the DICOM standards. This analysis identified missing metadata descriptive of the sources for detected numerical discrepancies among ADC models. Instead of the DICOM PM object, all sites stored ADC maps as DICOM MR objects, generally lacking designated attributes and coded terms for quantitative DWI modeling. Source-image reference, model parameters, and ADC units and scale, deemed important for numerical consistency, were either missing or stored using nonstandard conventions. Guided by the identified limitations, the DICOM PM standard has been amended to include coded terms for the relevant diffusion models. Open-source software has been developed to support conversion of site-specific formats into the standard representation.
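A minimal sketch of the mono-exponential ADC model underlying such parametric maps (toy arrays stand in for the phantom images; not the QIN sites' pipelines): with signal S0 at b = 0 and Sb at b-value b, ADC = ln(S0/Sb)/b voxel-wise.

```python
# Sketch: voxel-wise ADC map from a two-point mono-exponential diffusion model.
import numpy as np

b = 800.0  # s/mm^2, illustrative b-value
S0 = np.array([[1200.0, 1100.0], [950.0, 1000.0]])  # toy b=0 image
Sb = np.array([[ 480.0,  500.0], [420.0,  430.0]])  # toy diffusion-weighted image

adc = np.log(S0 / Sb) / b  # ADC in mm^2/s, per voxel
print(adc)  # ~1e-3 mm^2/s, a typical tissue range
```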
NASA Astrophysics Data System (ADS)
Hoy, Carlton F. O.
The overall objective of this thesis was to control the fabrication technique and relevant material properties of phantom devices designated for computed tomography (CT) scanning. Fabrication techniques using polymeric composites and foams were detailed, together with parametric studies outlining the fundamentals behind the changes in material properties that affect the characteristic CT number. The composites were fabricated from polyvinylidene fluoride (PVDF), thermoplastic polyurethane (TPU) and polyethylene (PE) with hydroxylapatite (hA) as an additive, with different composites made using different weight percentages of additive. Polymeric foams were fabricated through a batch foaming technique, with the heating time controlled to create different levels of foaming. Finally, the effect of fabricated phantoms under varied scanning media was assessed to determine whether self-made phantoms can be scanned accurately in non-water or rigid environments, allowing for the future development of phantoms with complex shapes or fragile material types.
Transferable Reactive Force Fields: Extensions of ReaxFF-lg to Nitromethane.
Larentzos, James P; Rice, Betsy M
2017-03-09
Transferable ReaxFF-lg models of nitromethane that predict a variety of material properties over a wide range of thermodynamic states are obtained by screening a library of ∼6600 potentials that were previously optimized through the Multiple Objective Evolutionary Strategies (MOES) approach using a training set that included information for other energetic materials composed of carbon, hydrogen, nitrogen, and oxygen. Models that best match experimental nitromethane lattice constants at 4.2 K and 1 atm are evaluated for transferability to high-pressure states at room temperature and are shown to better predict various liquid- and solid-phase structural, thermodynamic, and transport properties as compared to the existing ReaxFF and ReaxFF-lg parametrizations. Although demonstrated for an energetic material, the library of ReaxFF-lg models is supplied to the scientific community to enable new research explorations of complex reactive phenomena in a variety of materials research applications.
The OMG Modelling Language (SYSML)
NASA Astrophysics Data System (ADS)
Hause, M.
2007-08-01
On July 6th 2006, the Object Management Group (OMG) announced the adoption of the OMG Systems Modeling Language (OMG SysML). The SysML specification was in response to the joint Request for Proposal issued by the OMG and INCOSE (the International Council on Systems Engineering) for a customized version of UML 2, designed to address the specific needs of system engineers. SysML is a visual modeling language that extends UML 2 in order to support the specification, analysis, design, verification and validation of complex systems. This paper will look at the background of SysML and summarize the SysML specification including the modifications to UML 2.0, along with the new requirement and parametric diagrams. It will also show how SysML artifacts can be used to specify the requirements for other solution spaces such as software and hardware to provide handover to other disciplines.
Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.
Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus
2017-01-01
Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
Estimation of railroad capacity using parametric methods.
DOT National Transportation Integrated Search
2013-12-01
This paper reviews different methodologies used for railroad capacity estimation and presents a user-friendly method to measure capacity. The objective of this paper is to use multivariate regression analysis to develop a continuous relation of the d...
NASA Astrophysics Data System (ADS)
Noroozian, Omid
2018-01-01
The current state of the art of some superconducting technologies will be reviewed in the context of a future single-dish submillimeter telescope called AtLAST. The technologies reviewed include: 1) Kinetic Inductance Detectors (KIDs), which have now been demonstrated in large-format kilo-pixel arrays with photon background-limited sensitivity suitable for large field-of-view cameras for wide-field imaging; 2) parametric amplifiers - specifically the Traveling-Wave Kinetic Inductance Parametric (TKIP) amplifier - which has enormous potential to increase the sensitivity, bandwidth, and mapping speed of heterodyne receivers; and 3) on-chip spectrometers, which combined with sensitive direct detectors such as KIDs or TESs could be used as Multi-Object Spectrometers on the AtLAST focal plane, and could provide low-to-medium resolution spectroscopy of 100 objects at a time in each field of view.
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.
The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software pack- age developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and con- ceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall- runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Mois- ture Accounting and Routing (SMAR) Model. Comprised of the above suite of mod- els, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts us- ing the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective func- tion evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.
ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, the inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image-space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suitable for model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist.
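For reference, variable flip angle T1 mapping conventionally rests on the spoiled gradient-echo signal model (α the flip angle, TR the repetition time), which p-CS notably avoids using explicitly:

$$S(\alpha) = M_0 \sin\alpha\, \frac{1 - E_1}{1 - E_1\cos\alpha}, \qquad E_1 = e^{-TR/T_1}.$$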
A physiology-based parametric imaging method for FDG-PET data
NASA Astrophysics Data System (ADS)
Scussolini, Mara; Garbarino, Sara; Sambuceti, Gianmario; Caviglia, Giacomo; Piana, Michele
2017-12-01
Parametric imaging is a compartmental approach that processes nuclear imaging data to estimate the spatial distribution of the kinetic parameters governing tracer flow. The present paper proposes a novel and efficient computational method for parametric imaging which is potentially applicable to several compartmental models of diverse complexity and which is effective in the determination of the parametric maps of all kinetic coefficients. We consider applications to [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) data and analyze the two-compartment catenary model describing the standard FDG metabolization by a homogeneous tissue and the three-compartment non-catenary model representing the renal physiology. We show uniqueness theorems for both models. The proposed imaging method starts from the reconstructed FDG-PET images of tracer concentration and preliminarily applies image processing algorithms for noise reduction and image segmentation. The optimization procedure solves pixel-wise the non-linear inverse problem of determining the kinetic parameters from dynamic concentration data through a regularized Gauss-Newton iterative algorithm. The reliability of the method is validated against synthetic data, for the two-compartment system, and experimental real data of murine models, for the renal three-compartment system.
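In the usual notation for a two-compartment catenary (Sokoloff-type) FDG model, with C_p the arterial plasma input and C_f, C_m the free and metabolized tracer concentrations, the kinetics being estimated take a standard form (the paper's notation may differ):

$$\dot{C}_f = k_1 C_p - (k_2 + k_3)\,C_f + k_4 C_m, \qquad \dot{C}_m = k_3 C_f - k_4 C_m,$$

and the parametric maps are the pixel-wise estimates of the rate constants k_1, ..., k_4.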
Sparkle/AM1 Parameters for the Modeling of Samarium(III) and Promethium(III) Complexes.
Freire, Ricardo O; da Costa, Nivan B; Rocha, Gerd B; Simas, Alfredo M
2006-01-01
The Sparkle/AM1 model is extended to samarium(III) and promethium(III) complexes. A set of 15 structures of high crystallographic quality (R factor < 0.05), with ligands chosen to be representative of all samarium complexes in the Cambridge Crystallographic Database 2004, CSD, with nitrogen or oxygen directly bonded to the samarium ion, was used as a training set. In the validation procedure, we used a set of 42 other complexes, also of high crystallographic quality. The results show that this parametrization for the Sm(III) ion is similar in accuracy to the previous parametrizations for Eu(III), Gd(III), and Tb(III). On the other hand, promethium is an artificial radioactive element with no stable isotope. So far, there are no promethium complex crystallographic structures in the CSD. To circumvent this, we confirmed our previous result that RHF/STO-3G/ECP, with the MWB effective core potential (ECP), appears to be the most efficient ab initio model chemistry in terms of coordination polyhedron crystallographic geometry predictions from isolated lanthanide complex ion calculations. We thus generated a set of 15 RHF/STO-3G/ECP promethium complex structures with ligands chosen to be representative of complexes available in the CSD for all other trivalent lanthanide cations, with nitrogen or oxygen directly bonded to the lanthanide ion. For the 42 samarium(III) complexes and 15 promethium(III) complexes considered, the Sparkle/AM1 unsigned mean error, for all interatomic distances between the Ln(III) ion and the ligand atoms of the first sphere of coordination, is 0.07 and 0.06 Å, respectively, a level of accuracy comparable to present-day ab initio/ECP geometries, while being hundreds of times faster.
The representation of object viewpoint in human visual cortex.
Andresen, David R; Vinberg, Joakim; Grill-Spector, Kalanit
2009-04-01
Understanding the nature of object representations in the human brain is critical for understanding the neural basis of invariant object recognition. However, the degree to which object representations are sensitive to object viewpoint is unknown. Using fMRI we employed a parametric approach to examine the sensitivity to object view as a function of rotation (0°-180°), category (animal/vehicle) and fMRI-adaptation paradigm (short or long-lagged). For both categories and fMRI-adaptation paradigms, object-selective regions recovered from adaptation when a rotated view of an object was shown after adaptation to a specific view of that object, suggesting that representations are sensitive to object rotation. However, we found evidence for differential representations across categories and ventral stream regions. Rotation cross-adaptation was larger for animals than vehicles, suggesting higher sensitivity to vehicle than animal rotation, and was largest in the left fusiform/occipito-temporal sulcus (pFUS/OTS), suggesting that this region has low sensitivity to rotation. Moreover, right pFUS/OTS and FFA responded more strongly to front than back views of animals (without adaptation) and rotation cross-adaptation depended both on the level of rotation and the adapting view. This result suggests a prevalence of neurons that prefer frontal views of animals in fusiform regions. Using a computational model of view-tuned neurons, we demonstrate that differential neural view tuning widths and relative distributions of neural-tuned populations in fMRI voxels can explain the fMRI results. Overall, our findings underscore the utility of parametric approaches for studying the neural basis of object invariance and suggest that there is no complete invariance to object view in the human ventral stream.
A Bayesian alternative for multi-objective ecohydrological model specification
NASA Astrophysics Data System (ADS)
Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori
2018-01-01
Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphasis defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. The methodology implemented here provides insight into the usefulness of multi-objective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior distributions in such approaches.
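To make the role of the error-parameter priors concrete, here is a minimal sketch of a joint Gaussian log-likelihood over the two calibration objectives; the names and the Gaussian error model are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def joint_log_likelihood(q_obs, q_sim, lai_obs, lai_sim, sigma_q, sigma_lai):
    """Gaussian log-likelihood for streamflow (q) and LAI residuals.

    Priors placed on the error parameters sigma_q and sigma_lai play the
    role of the ad-hoc weights used in Pareto-based multi-objective
    calibration (hypothetical minimal setup)."""
    def gauss_ll(obs, sim, sigma):
        r = np.asarray(obs) - np.asarray(sim)
        return -0.5 * r.size * np.log(2 * np.pi * sigma**2) \
               - 0.5 * np.sum(r**2) / sigma**2
    return gauss_ll(q_obs, q_sim, sigma_q) + gauss_ll(lai_obs, lai_sim, sigma_lai)

print(joint_log_likelihood([1.0, 2.0], [1.1, 1.9], [3.0], [2.8], 0.2, 0.3))
```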
NASA Astrophysics Data System (ADS)
Ma, Yulong; Liu, Heping
2017-12-01
Atmospheric flow over complex terrain, particularly recirculation flows, greatly influences wind-turbine siting, forest-fire behaviour, and trace-gas and pollutant dispersion. However, there is a large uncertainty in the simulation of flow over complex topography, which is attributable to the type of turbulence model, the subgrid-scale (SGS) turbulence parametrization, terrain-following coordinates, and numerical errors in finite-difference methods. Here, we upgrade the large-eddy simulation module within the Weather Research and Forecasting model by incorporating the immersed-boundary method into the module to improve simulations of the flow and recirculation over complex terrain. Simulations over the Bolund Hill indicate improved mean absolute speed-up errors with respect to previous studies, as well as an improved simulation of the recirculation zone behind the escarpment of the hill. With regard to the SGS parametrization, the Lagrangian-averaged scale-dependent Smagorinsky model performs better than the classic Smagorinsky model in reproducing both velocity and turbulent kinetic energy. A finer grid resolution also improves the strength of the recirculation in flow simulations, with a higher horizontal grid resolution improving simulations just behind the escarpment, and a higher vertical grid resolution improving results on the lee side of the hill. Our modelling approach has broad applications for the simulation of atmospheric flows over complex topography.
Kramer, Gerbrand Maria; Frings, Virginie; Heijtel, Dennis; Smit, E F; Hoekstra, Otto S; Boellaard, Ronald
2017-06-01
The objective of this study was to validate several parametric methods for quantification of 3'-deoxy-3'-¹⁸F-fluorothymidine (¹⁸F-FLT) PET in advanced-stage non-small cell lung carcinoma (NSCLC) patients with an activating epidermal growth factor receptor mutation who were treated with gefitinib or erlotinib. Furthermore, we evaluated the impact of noise on the accuracy and precision of the parametric analyses of dynamic ¹⁸F-FLT PET/CT to assess the robustness of these methods. Methods: Ten NSCLC patients underwent dynamic ¹⁸F-FLT PET/CT at baseline and 7 and 28 d after the start of treatment. Parametric images were generated using plasma input Logan graphic analysis and 2 basis-function methods: a 2-tissue-compartment basis function model (BFM) and spectral analysis (SA). Whole-tumor-averaged parametric pharmacokinetic parameters were compared with those obtained by nonlinear regression of the tumor time-activity curve using a reversible 2-tissue-compartment model with blood volume fraction. In addition, 2 statistically equivalent datasets were generated by countwise splitting the original list-mode data, each containing 50% of the total counts. Both new datasets were reconstructed, and parametric pharmacokinetic parameters were compared between the 2 replicates and the original data. Results: After the settings of each parametric method were optimized, distribution volumes (V_T) obtained with Logan graphic analysis, BFM, and SA all correlated well with those derived using nonlinear regression at baseline and during therapy (R² ≥ 0.94; intraclass correlation coefficient > 0.97). SA-based V_T images were most robust to increased noise at the voxel level (repeatability coefficient, 16% vs. >26%). Yet BFM generated the most accurate K_1 values (R² = 0.94; intraclass correlation coefficient, 0.96). Parametric K_1 data showed a larger variability in general; however, no differences were found in robustness between methods (repeatability coefficient, 80%-84%). Conclusion: Both BFM and SA can generate quantitatively accurate parametric ¹⁸F-FLT V_T images in NSCLC patients before and during therapy. SA was more robust to noise, yet BFM provided more accurate parametric K_1 data. We therefore recommend BFM as the preferred parametric method for analysis of dynamic ¹⁸F-FLT PET/CT studies; however, SA can also be used. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
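The plasma-input Logan graphic analysis referred to above estimates V_T as the asymptotic slope of the standard operational equation (generic form, not specific to this study):

```latex
% Logan plot: for times t > t*, the relation becomes linear and the
% slope estimates the distribution volume V_T (C_T: tissue activity,
% C_p: metabolite-corrected plasma input, b: intercept)
\frac{\int_0^{t} C_T(\tau)\, d\tau}{C_T(t)}
  = V_T\, \frac{\int_0^{t} C_p(\tau)\, d\tau}{C_T(t)} + b
```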
Multiple-object tracking as a tool for parametrically modulating memory reactivation
Poppenk, J.; Norman, K.A.
2017-01-01
Converging evidence supports the “non-monotonic plasticity” hypothesis that although complete retrieval may strengthen memories, partial retrieval weakens them. Yet, the classic experimental paradigms used to study effects of partial retrieval are not ideally suited to doing so, because they lack the parametric control needed to ensure that the memory is activated to the appropriate degree (i.e., that there is some retrieval, but not enough to cause memory strengthening). Here we present a novel procedure designed to accommodate this need. After participants learned a list of word-scene associates, they completed a cued mental visualization task that was combined with a multiple-object tracking (MOT) procedure, which we selected for its ability to interfere with mental visualization in a parametrically adjustable way (by varying the number of MOT targets). We also used fMRI data to successfully train an “associative recall” classifier for use in this task: this classifier revealed greater memory reactivation during trials in which associative memories were cued while participants tracked one, rather than five MOT targets. However, the classifier was insensitive to task difficulty when recall was not taking place, suggesting it had indeed tracked memory reactivation rather than task difficulty per se. Consistent with the classifier findings, participants’ introspective ratings of visualization vividness were modulated by MOT task difficulty. In addition, we observed reduced classifier output and slowing of responses in a post-reactivation memory test, consistent with the hypothesis that partial reactivation, induced by MOT, weakened memory. These results serve as a “proof of concept” that MOT can be used to parametrically modulate memory retrieval – a property that may prove useful in future investigation of partial retrieval effects, e.g., in closed-loop experiments.
Liu, Kui; Guo, Jun; Cai, Chunxiao; Zhang, Junxiang; Gao, Jiangrui
2016-11-15
Multipartite entanglement is used for quantum information applications, such as building multipartite quantum communications. Generally, the generation of multipartite entanglement is based on a complex beam-splitter network. Here, based on the spatial freedom of light, we experimentally demonstrated spatial quadripartite continuous-variable entanglement among first-order Hermite-Gaussian modes using a single type II optical parametric oscillator operating below threshold with an HG02 45° pump beam. The entanglement can be scaled to larger numbers of spatial modes by changing the spatial profile of the pump beam. In addition, spatial multipartite entanglement will be useful for future spatial multichannel quantum information applications.
Efficient solution of a multi-objective fuzzy transportation problem
NASA Astrophysics Data System (ADS)
Vidhya, V.; Ganesan, K.
2018-04-01
In this paper we present a methodology for the solution of multi-objective fuzzy transportation problems in which all the cost and time coefficients are trapezoidal fuzzy numbers and the supply and demand are crisp numbers. Using a new fuzzy arithmetic on the parametric form of trapezoidal fuzzy numbers and a new ranking method, all efficient solutions are obtained. The proposed method is illustrated with an example.
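As an illustration of the kind of arithmetic involved, a minimal sketch of trapezoidal fuzzy numbers with componentwise addition and a simple centroid-style ranking index follows; the paper's specific parametric arithmetic and ranking method are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class TrapFuzzy:
    # Trapezoid (a, b, c, d): support [a, d], core (plateau) [b, c]
    a: float
    b: float
    c: float
    d: float

    def __add__(self, other):
        # Standard componentwise addition of trapezoidal fuzzy numbers
        return TrapFuzzy(self.a + other.a, self.b + other.b,
                         self.c + other.c, self.d + other.d)

    def rank(self):
        # Centroid-style ranking index (illustrative choice only; the
        # paper defines its own ranking on the parametric form)
        return (self.a + self.b + self.c + self.d) / 4.0

cost = TrapFuzzy(1, 2, 3, 4) + TrapFuzzy(2, 3, 4, 5)
print(cost, cost.rank())  # TrapFuzzy(a=3, b=5, c=7, d=9) 6.0
```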
NASA Astrophysics Data System (ADS)
Messina, Luca; Castin, Nicolas; Domain, Christophe; Olsson, Pär
2017-02-01
The quality of kinetic Monte Carlo (KMC) simulations of microstructure evolution in alloys relies on the parametrization of point-defect migration rates, which are complex functions of the local chemical composition and can be calculated accurately with ab initio methods. However, constructing reliable models that ensure the best possible transfer of physical information from ab initio to KMC is a challenging task. This work presents an innovative approach, where the transition rates are predicted by artificial neural networks trained on a database of 2000 migration barriers, obtained with density functional theory (DFT) in place of interatomic potentials. The method is tested on copper precipitation in thermally aged iron alloys, by means of a hybrid atomistic-object KMC model. For the object part of the model, the stability and mobility properties of copper-vacancy clusters are analyzed by means of independent atomistic KMC simulations, driven by the same neural networks. The cluster diffusion coefficients and mean free paths are found to increase with size, confirming the dominant role of coarsening of medium- and large-sized clusters in the precipitation kinetics. The evolution under thermal aging is in better agreement with experiments with respect to a previous interatomic-potential model, especially concerning the experimental time scales. However, the model underestimates the solubility of copper in iron due to the excessively high solution energy predicted by the chosen DFT method. Nevertheless, this work proves the capability of neural networks to transfer complex ab initio physical properties to higher-scale models, and facilitates the extension to systems with increasing chemical complexity, setting the ground for reliable microstructure evolution simulations in a wide range of alloys and applications.
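Schematically, each KMC step needs a jump rate derived from a predicted barrier; a minimal sketch follows, where the barrier would come from the trained neural network and the attempt frequency is an assumed typical value, not taken from the paper.

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def migration_rate(barrier_eV, temperature_K, attempt_freq_Hz=6.0e12):
    """Arrhenius jump rate from a migration barrier.

    In the scheme above, barrier_eV would be predicted by the neural
    network from the local chemical environment; the attempt frequency
    is an assumed typical value."""
    return attempt_freq_Hz * np.exp(-barrier_eV / (K_B * temperature_K))

print(migration_rate(0.65, 500.0))  # e.g. a 0.65 eV vacancy jump at 500 K
```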
Application of selection and estimation regular vine copula on go public company share
NASA Astrophysics Data System (ADS)
Hasna Afifah, R.; Noviyanti, Lienda; Bachrudin, Achmad
2018-03-01
The accuracy of financial risk management involving a large number of assets is needed, but dependencies among assets cannot always be adequately analyzed. To analyze dependencies among a number of assets, several tools have been added to the standard multivariate copula. However, these tools have not been adequately used in applications with higher dimensions. Bivariate parametric copula families can address this: a multivariate copula can be built from bivariate parametric copulas connected by a graphical representation to form a Pair Copula Construction (PCC), or vine copula. C-vine and D-vine copulas have been applied in several studies, but their structures are more restrictive than that of the R-vine copula. Therefore, this study used the R-vine copula to provide flexibility for modeling complex dependencies in high dimensions. Since the copula is a static model while stock values change over time, the copula is combined with an ARMA-GARCH model for modeling the movement of shares (volatility). The objective of this paper is to select and estimate an R-vine copula used to analyze PT Jasa Marga (Persero) Tbk (JSMR), PT Waskita Karya (Persero) Tbk (WSKT), and PT Bank Mandiri (Persero) Tbk (BMRI) from August 31, 2014 to August 31, 2017. The selected copulas for the two edges of the first tree are survival Gumbel, and the copula for the edge of the second tree is Gaussian.
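For three assets the pair-copula construction reduces to the schematic density factorization below, written for the illustrative tree ordering 1-2-3 (not necessarily the structure selected in the paper):

```latex
% Pair-copula construction in 3 dimensions: marginals, two first-tree
% pair copulas, and one conditional pair copula in the second tree
f(x_1, x_2, x_3) = \prod_{i=1}^{3} f_i(x_i)\;
  c_{12}\!\big(F_1(x_1), F_2(x_2)\big)\,
  c_{23}\!\big(F_2(x_2), F_3(x_3)\big)\,
  c_{13|2}\!\big(F_{1|2}(x_1 \mid x_2), F_{3|2}(x_3 \mid x_2)\big)
```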
Service, Susan; Molina, Julio; Deyoung, Joseph; Jawaheer, Damini; Aldana, Ileana; Vu, Thuy; Araya, Carmen; Araya, Xinia; Bejarano, Julio; Fournier, Eduardo; Ramirez, Magui; Mathews, Carol A; Davanzo, Pablo; Macaya, Gabriel; Sandkuijl, Lodewijk; Sabatti, Chiara; Reus, Victor; Freimer, Nelson
2006-06-05
We have ascertained in the Central Valley of Costa Rica a new kindred (CR201) segregating for severe bipolar disorder (BP-I). The family was identified by tracing genealogical connections among eight persons initially independently ascertained for a genome-wide association study of BP-I. For the genome screen in CR201, we trimmed the family down to 168 persons (82 of whom are genotyped), containing 25 individuals with a best-estimate diagnosis of BP-I. A total of 4,690 SNP markers were genotyped. Analysis of the data was hampered by the size and complexity of the pedigree, which prohibited using exact multipoint methods on the entire kindred. Two-point parametric linkage analysis, using a conservative model of transmission, produced a maximum LOD score of 2.78 on chromosome 6, and a total of 39 loci with LOD scores >1.0. Multipoint parametric and non-parametric linkage analysis was performed separately on four sections of CR201, and interesting, although not statistically significant, regions (nominal P-value from either analysis <0.01) were highlighted on chromosomes 1, 2, 3, 12, 16, 19, and 22, in at least one section of the pedigree, or when considering all sections together. The difficulties of analyzing genome-wide SNP data for complex disorders in large, potentially informative, kindreds are discussed.
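The two-point parametric LOD score used here has, in its standard definition, the form of a base-10 likelihood ratio over the recombination fraction θ:

```latex
% Two-point LOD score: evidence for linkage at recombination fraction
% theta, relative to the unlinked case theta = 1/2
\mathrm{LOD}(\theta) = \log_{10}
  \frac{L(\theta)}{L\!\left(\theta = \tfrac{1}{2}\right)}
```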
Kernel-based whole-genome prediction of complex traits: a review.
Morota, Gota; Gianola, Daniel
2014-01-01
Prediction of genetic values has been a focus of applied quantitative genetics since the beginning of the 20th century, with renewed interest following the advent of the era of whole genome-enabled prediction. Opportunities offered by the emergence of high-dimensional genomic data fueled by post-Sanger sequencing technologies, especially molecular markers, have driven researchers to extend Ronald Fisher and Sewall Wright's models to confront new challenges. In particular, kernel methods are gaining consideration as a regression method of choice for genome-enabled prediction. Complex traits are presumably influenced by many genomic regions working in concert with others (clearly so when considering pathways), thus generating interactions. Motivated by this view, a growing number of statistical approaches based on kernels attempt to capture non-additive effects, either parametrically or non-parametrically. This review centers on whole-genome regression using kernel methods applied to a wide range of quantitative traits of agricultural importance in animals and plants. We discuss various kernel-based approaches tailored to capturing total genetic variation, with the aim of arriving at an enhanced predictive performance in the light of available genome annotation information. Connections between prediction machines born in animal breeding, statistics, and machine learning are revisited, and their empirical prediction performance is discussed. Overall, while some encouraging results have been obtained with non-parametric kernels, recovering non-additive genetic variation in a validation dataset remains a challenge in quantitative genetics.
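As a concrete illustration of the kernel regression machinery reviewed here, a minimal Gaussian-kernel (RBF) prediction sketch on simulated marker data follows; the data, bandwidth, and penalty are arbitrary assumptions, and kernel ridge regression stands in for the broader RKHS family discussed.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 500)).astype(float)  # SNP genotypes coded 0/1/2
y = X[:, :10] @ rng.normal(size=10) + rng.normal(size=200)  # toy phenotype

# Gaussian (RBF) kernel regression; bandwidth and penalty are illustrative
model = KernelRidge(kernel="rbf", gamma=1.0 / X.shape[1], alpha=1.0)
model.fit(X[:150], y[:150])
print("predictive correlation:",
      np.corrcoef(model.predict(X[150:]), y[150:])[0, 1])
```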
Algorithm for parametric community detection in networks.
Bettinelli, Andrea; Hansen, Pierre; Liberti, Leo
2012-07-01
Modularity maximization is extensively used to detect communities in complex networks. It has been shown, however, that this method suffers from a resolution limit: Small communities may be undetectable in the presence of larger ones even if they are very dense. To alleviate this defect, various modifications of the modularity function have been proposed as well as multiresolution methods. In this paper we systematically study a simple model (proposed by Pons and Latapy [Theor. Comput. Sci. 412, 892 (2011)] and similar to the parametric model of Reichardt and Bornholdt [Phys. Rev. E 74, 016110 (2006)]) with a single parameter α that balances the fraction of within community edges and the expected fraction of edges according to the configuration model. An exact algorithm is proposed to find optimal solutions for all values of α as well as the corresponding successive intervals of α values for which they are optimal. This algorithm relies upon a routine for exact modularity maximization and is limited to moderate size instances. An agglomerative hierarchical heuristic is therefore proposed to address parametric modularity detection in large networks. At each iteration the smallest value of α for which it is worthwhile to merge two communities of the current partition is found. Then merging is performed and the data are updated accordingly. An implementation is proposed with the same time and space complexity as the well-known Clauset-Newman-Moore (CNM) heuristic [Phys. Rev. E 70, 066111 (2004)]. Experimental results on artificial and real world problems show that (i) communities are detected by both exact and heuristic methods for all values of the parameter α; (ii) the dendrogram summarizing the results of the heuristic method provides a useful tool for substantive analysis, as illustrated particularly on a Les Misérables data set; (iii) the difference between the parametric modularity values given by the exact method and those given by the heuristic is moderate; (iv) the heuristic version of the proposed parametric method, viewed as a modularity maximization tool, gives better results than the CNM heuristic for large instances.
Developing Software for Pharmacodynamics and Bioassay Studies
The objective of the project is to develop a software system to process general pharmacologic, toxicological, or other biomedical research data that...exhibit a non-monotonic dose-response relationship - for which the current parametric models fail. The software will analyze dose-response
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
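In schematic form (with notation assumed for illustration), the hybrid estimator combines conditional Gaussians in the high-dimensional subspace u_I with a kernel density estimate in the low-dimensional subspace u_II:

```latex
% Hybrid estimator: N ensemble members, each contributing a conditional
% Gaussian in u_I (mean and covariance from the closed formulae) times
% a kernel K_H centered on the sampled low-dimensional state u_II^(i)
p(u_I, u_{II}) \;\approx\; \frac{1}{N} \sum_{i=1}^{N}
  \mathcal{N}\!\big(u_I;\, \bar{u}_I^{(i)},\, R_I^{(i)}\big)\,
  K_H\!\big(u_{II} - u_{II}^{(i)}\big)
```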
Can you trust the parametric standard errors in nonlinear least squares? Yes, with provisos.
Tellinghuisen, Joel
2018-04-01
Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^A, ln A, and 1/A for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn require knowledge of the data variance. Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small - a condition favored by the abundant, precise data routinely collected in many modern instrumental methods. Copyright © 2018 Elsevier B.V. All rights reserved.
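A Monte Carlo check of the kind described is easy to sketch; the model below (linear in a = e^A but nonlinear in A) and all constants are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
x = np.linspace(0.1, 1.0, 20)
A_true, noise_sd = 1.5, 0.05
model = lambda x, A: np.exp(A) * x  # linear in a = e^A, nonlinear in A

estimates, reported_se = [], []
for _ in range(2000):
    y = model(x, A_true) + rng.normal(0.0, noise_sd, x.size)
    popt, pcov = curve_fit(model, x, y, p0=[1.0])
    estimates.append(popt[0])
    reported_se.append(np.sqrt(pcov[0, 0]))

# If the parametric SE is trustworthy, the Monte Carlo spread of the
# estimates should match the average reported SE (true here: SE << |A|)
print(np.std(estimates), np.mean(reported_se))
```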
Minimum distance classification in remote sensing
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1972-01-01
The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier, with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
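A minimal sketch of the parametric (Euclidean, class-mean) variant of a minimum distance classifier is given below; the names and the choice of distance are hypothetical, for illustration only.

```python
import numpy as np

def fit_class_means(X, y):
    """Store the mean spectrum of each class (training step of a
    Euclidean minimum distance classifier; hypothetical minimal setup)."""
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, means

def predict_min_distance(X, classes, means):
    # Assign each pixel to the class whose mean spectrum is nearest
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[d2.argmin(axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.repeat([0, 1], 50)
classes, means = fit_class_means(X, y)
print((predict_min_distance(X, classes, means) == y).mean())
```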
The Bayesian Cramér-Rao lower bound in Astrometry
NASA Astrophysics Data System (ADS)
Mendez, R. A.; Echeverria, A.; Silva, J.; Orchard, M.
2018-01-01
A determination of the highest precision that can be achieved in the measurement of the location of a stellar-like object has been a topic of permanent interest to the astrometric community. The so-called (parametric, or non-Bayesian) Cramér-Rao (CR hereafter) bound provides a lower bound for the variance with which one could estimate the position of a point source. This has been studied recently by Mendez et al. (2013, 2014, 2015). In this work we present a different approach to the same problem (Echeverria et al. 2016), using a Bayesian CR setting which has a number of advantages over the parametric scenario.
The Bayesian Cramér-Rao lower bound in Astrometry
NASA Astrophysics Data System (ADS)
Mendez, R. A.; Echeverria, A.; Silva, J.; Orchard, M.
2017-07-01
A determination of the highest precision that can be achieved in the measurement of the location of a stellar-like object has been a topic of permanent interest to the astrometric community. The so-called (parametric, or non-Bayesian) Cramér-Rao (CR hereafter) bound provides a lower bound for the variance with which one could estimate the position of a point source. This has been studied recently by Mendez and collaborators (2014, 2015). In this work we present a different approach to the same problem (Echeverria et al. 2016), using a Bayesian CR setting which has a number of advantages over the parametric scenario.
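For a scalar source position x_c with prior density p(x_c), the Bayesian (Van Trees) version of the bound has the schematic form below, where the prior contributes an additional information term:

```latex
% Bayesian Cramer-Rao (Van Trees) bound: the prior adds information I_p
% to the average Fisher information of the data
\mathbb{E}\big[(\hat{x}_c - x_c)^2\big] \;\ge\;
  \Big( \mathbb{E}\big[\, I(x_c) \,\big] + I_p \Big)^{-1},
\qquad
I_p = \mathbb{E}\!\left[ \left( \frac{d \ln p(x_c)}{d x_c} \right)^{\!2} \right]
```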
Development of Corrections for Biomass Burning Effects in Version 2 of GEWEX/SRB Algorithm
NASA Technical Reports Server (NTRS)
Pinker, Rachel T.; Laszlo, I.; Dicus, Dennis L. (Technical Monitor)
1999-01-01
The objectives of this project were: (1) To incorporate into an existing version of the University of Maryland Surface Radiation Budget (SRB) model, optical parameters of forest fire aerosols, using the best available information, as well as optical properties of other aerosols identified as significant. (2) To run the model on regional scales with the new parametrization and information on forest fire occurrence and plume advection, as available from NASA LARC, and test improvements in inferring surface fluxes against daily values of measured fluxes. (3) To develop a strategy for incorporating the new parametrization on a global scale and for transferring the modified model to NASA LARC.
NASA Astrophysics Data System (ADS)
Varghese, Julian
This research work has contributed in various ways to help develop a better understanding of textile composites and materials with complex microstructures in general. An instrumental part of this work was the development of an object-oriented framework that made it convenient to perform multiscale/multiphysics analyses of advanced materials with complex microstructures such as textile composites. In addition to the studies conducted in this work, this framework lays the groundwork for continued research of these materials. This framework enabled a detailed multiscale stress analysis of a woven DCB specimen that revealed the effect of the complex microstructure on the stress and strain energy release rate distribution along the crack front. In addition to implementing an oxidation model, the framework was also used to implement strategies that expedited the simulation of oxidation in textile composites so that it would take only a few hours. The simulation showed that the tow architecture played a significant role in the oxidation behavior in textile composites. Finally, a coupled diffusion/oxidation and damage progression analysis was implemented that was used to study the mechanical behavior of textile composites under mechanical loading as well as oxidation. A parametric study was performed to determine the effect of material properties and the number of plies in the laminate on its mechanical behavior. The analyses indicated a significant effect of the tow architecture and other parameters on the damage progression in the laminates.
Enhanced detection and visualization of anomalies in spectral imagery
NASA Astrophysics Data System (ADS)
Basener, William F.; Messinger, David W.
2009-05-01
Anomaly detection algorithms applied to hyperspectral imagery are able to reliably identify man-made objects from a natural environment based on statistical/geometric likelihood. The process is more robust than target identification, which requires precise prior knowledge of the object of interest, but has an inherently higher false alarm rate. Standard anomaly detection algorithms measure deviation of pixel spectra from a parametric model (either statistical or linear mixing) estimating the image background. The topological anomaly detector (TAD) creates a fully non-parametric, graph theory-based, topological model of the image background and measures deviation from this background using codensity. In this paper we present a large-scale comparative test of TAD against 80+ targets in four full HYDICE images using the entire canonical target set for generation of ROC curves. TAD will be compared against several statistics-based detectors including local RX and subspace RX. Even a perfect anomaly detection algorithm would have a high practical false alarm rate in most scenes simply because the user/analyst is not interested in every anomalous object. To assist the analyst in identifying and sorting objects of interest, we investigate coloring of the anomalies with principal components projections using statistics computed from the anomalies. This gives a very useful colorization of anomalies in which objects of similar material tend to have the same color, enabling an analyst to quickly sort and identify anomalies of highest interest.
NASA Astrophysics Data System (ADS)
Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu
2018-06-01
Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter trade-off, arising from the simultaneous variations of different physical parameters, which increases the nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parametrization and acquisition arrangement. An appropriate choice of model parametrization is important to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parametrizations in isotropic-elastic FWI with walk-away vertical seismic profile (W-VSP) data for unconventional heavy oil reservoir characterization. Six model parametrizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I′_P) and velocity-impedance-II (α″, β″ and I′_S). We begin analysing the interparameter trade-off by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. We discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter trade-offs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter trade-offs for various model parametrizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parametrization, the inverted density profile can be overestimated, underestimated or spatially distorted. Among the six cases, only the velocity-density parametrization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. The heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.
Zadpoor, Amir A
2017-07-25
Recent advances in additive manufacturing (AM) techniques in terms of accuracy, reliability, the range of processable materials, and commercial availability have made them promising candidates for production of functional parts including those used in the biomedical industry. The complexity-for-free feature offered by AM means that very complex designs become feasible to manufacture, while batch-size-indifference enables fabrication of fully patient-specific medical devices. Design for AM (DfAM) approaches aim to fully utilize those features for development of medical devices with substantially enhanced performance and biomaterials with unprecedented combinations of favorable properties that originate from complex geometrical designs at the micro-scale. This paper reviews the most important approaches in DfAM particularly those applicable to additive bio-manufacturing including image-based design pipelines, parametric and non-parametric designs, metamaterials, rational and computationally enabled design, topology optimization, and bio-inspired design. Areas with limited research have been identified and suggestions have been made for future research. The paper concludes with a brief discussion on the practical aspects of DfAM and the potential of combining AM with subtractive and formative manufacturing processes in so-called hybrid manufacturing processes.
Statistical methods used in articles published by the Journal of Periodontal and Implant Science.
Choi, Eunsil; Lyu, Jiyoung; Park, Jinyoung; Kim, Hae-Young
2014-12-01
The purposes of this study were to assess the trend of use of statistical methods including parametric and nonparametric methods and to evaluate the use of complex statistical methodology in recent periodontal studies. This study analyzed 123 articles published in the Journal of Periodontal & Implant Science (JPIS) between 2010 and 2014. Frequencies and percentages were calculated according to the number of statistical methods used, the type of statistical method applied, and the type of statistical software used. Most of the published articles considered (64.4%) used statistical methods. Since 2011, the percentage of JPIS articles using statistics has increased. On the basis of multiple counting, we found that the percentage of studies in JPIS using parametric methods was 61.1%. Further, complex statistical methods were applied in only 6 of the published studies (5.0%), and nonparametric statistical methods were applied in 77 of the published studies (38.9% of a total of 198 studies considered). We found an increasing trend towards the application of statistical methods and nonparametric methods in recent periodontal studies and thus, concluded that increased use of complex statistical methodology might be preferred by the researchers in the fields of study covered by JPIS.
NASA Astrophysics Data System (ADS)
Traversaro, Francisco; O. Redelico, Francisco
2018-04-01
In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the Permutation Entropy. But there is still no known method to determine the accuracy of this measure. There has been little research on the statistical properties of this quantity that characterize time series. The literature describes some resampling methods for quantities used in nonlinear dynamics - such as the largest Lyapunov exponent - but these seem to fail. In this contribution, we propose a parametric bootstrap methodology using a symbolic representation of the time series to obtain the distribution of the Permutation Entropy estimator. We perform several time series simulations given by well-known stochastic processes: the 1/f^α noise family, and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the Permutation Entropy, has been extensively used in epilepsy research for detection of dynamical changes in the electroencephalogram (EEG) signal with no consideration of the variability of this complexity measure. An application of the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.
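A minimal sketch of the Permutation Entropy estimator, with the parametric-bootstrap idea summarized in a comment, is given below (Bandt-Pompe definition assumed; parameter choices are illustrative).

```python
import math
from itertools import permutations
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized Permutation Entropy of a 1-D series (Bandt-Pompe)."""
    patterns = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        patterns[tuple(np.argsort(window))] += 1  # count ordinal patterns
    probs = np.array([c for c in patterns.values() if c > 0]) / n
    return -np.sum(probs * np.log(probs)) / math.log(math.factorial(order))

# Parametric bootstrap idea: fit a parametric model to the series,
# simulate many surrogate series from it, and take the spread of their
# permutation entropies as the sampling distribution of the estimator.
x = np.random.default_rng(2).standard_normal(1000)
print(permutation_entropy(x, order=3))
```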
NASA's Orbital Debris Conjunction Assessment and Collision Avoidance Strategy
NASA Technical Reports Server (NTRS)
Gavin, Richard T.
2010-01-01
NASA has successfully used debris avoidance maneuvers to protect our spacecraft for more than 20 years. This process, which started out using parametric data and maneuver boxes, has seen considerable evolution and now allows us to continue nominal operations for all but the most threatening objects. This has greatly reduced the interruptions to the critical mission objectives being pursued by NASA's Space Station, Space Shuttle, and robotic satellites.
NASA Astrophysics Data System (ADS)
Li, Xiuming; Sun, Mei; Gao, Cuixia; Han, Dun; Wang, Minggang
2018-02-01
This paper presents the parametric modified limited penetrable visibility graph (PMLPVG) algorithm for constructing complex networks from time series. We modify the penetrable visibility criterion of the limited penetrable visibility graph (LPVG) in order to improve the rationality of the original penetrable visibility and preserve the dynamic characteristics of the time series. The addition of a view angle provides a new approach to characterizing the dynamic structure of the time series that is invisible to the previous algorithm. The reliability of the PMLPVG algorithm is verified by applying it to three types of artificial data as well as actual data on natural gas prices in different regions. The empirical results indicate that the PMLPVG algorithm can distinguish the different time series from each other. Meanwhile, the analysis results of the natural gas price data using PMLPVG are consistent with detrended fluctuation analysis (DFA). The results imply that the PMLPVG algorithm may be a reasonable and significant tool for identifying various time series in different fields.
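For orientation, a minimal sketch of the basic limited penetrable visibility criterion is given below; the PMLPVG view-angle modification itself is not reproduced, and the on-the-line blocking convention is an assumption.

```python
import numpy as np

def limited_penetrable_visible(x, i, j, limit=1):
    """Natural-visibility test between samples i and j that tolerates up
    to `limit` intermediate points blocking the line of sight (basic
    LPVG criterion; PMLPVG's view-angle modification is not shown)."""
    blocked = 0
    for k in range(i + 1, j):
        line = x[i] + (x[j] - x[i]) * (k - i) / (j - i)
        if x[k] >= line:  # on-or-above counts as blocking (a convention)
            blocked += 1
    return blocked <= limit

x = np.array([1.0, 3.0, 0.5, 2.5, 1.5, 4.0])
edges = [(i, j) for i in range(len(x)) for j in range(i + 1, len(x))
         if limited_penetrable_visible(x, i, j)]
print(edges)
```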
Complex mapping of aerofoils - a different perspective
NASA Astrophysics Data System (ADS)
Matthews, Miccal T.
2012-01-01
In this article an application of conformal mapping to aerofoil theory is studied from a geometric and calculus point of view. The problem is suitable for undergraduate teaching in terms of a project or extended piece of work, and brings together the concepts of geometric mapping, parametric equations, complex numbers and calculus. The Joukowski and Karman-Trefftz aerofoils are studied, and it is shown that the Karman-Trefftz aerofoil is an improvement over the Joukowski aerofoil from a practical point of view. For the most part only a spreadsheet program and pen and paper are required; only for the last portion of the study of the Karman-Trefftz aerofoils is a symbolic computer package employed. Ignoring the concept of a conformal mapping and instead viewing the problem from a parametric point of view, some interesting mappings are obtained. By considering the derivative of the composed mapping via the chain rule, some new and interesting analytical results are obtained for the Joukowski aerofoil, and numerical results for the Karman-Trefftz aerofoil.
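A short parametric sketch of the Joukowski mapping applied to an offset circle, of the kind a spreadsheet or simple script can reproduce, is given below; the circle parameters are arbitrary illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

# Joukowski map w = z + 1/z applied to a circle that passes through the
# singular point z = 1; the offset center produces camber and thickness
t = np.linspace(0, 2 * np.pi, 400)
center = -0.1 + 0.1j
radius = abs(1 - center)          # circle through z = 1
z = center + radius * np.exp(1j * t)
w = z + 1.0 / z                   # parametric aerofoil curve

plt.plot(w.real, w.imag)
plt.axis("equal")
plt.title("Joukowski aerofoil")
plt.show()
```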
NASA Astrophysics Data System (ADS)
He, G.; Zhu, H.; Xu, J.; Gao, K.; Zhu, D.
2017-09-01
Shape bionics is an important aspect of research on bionic robots, and it cannot be separated from shape modeling and numerical simulation of the bionic object, which are tedious and time-consuming. In order to improve the efficiency of shape bionic design, the feet of animals living in soft soil and swamp environments are taken as bionic objects, and characteristic skeleton curves, section curves, joint rotation variables, position and other parameters are used to describe the shape and position information of the bionic object's sole, toes and flipper. The geometric model of the bionic object is established by parameterizing the characteristic curves and variables. Based on this, an integration framework for parametric modeling and finite element modeling, dynamic analysis and post-processing of the sinking process in soil is proposed in this paper. Examples of a bionic ostrich foot and a bionic duck foot are also given. The parametric modeling and integration technique enables rapid improved design based on the bionic object, can greatly improve the efficiency and quality of robot foot bionic design, and has important practical significance for improving the bionic design of the robot foot's shape and structure.
Dynamic Identification for Control of Large Space Structures
NASA Technical Reports Server (NTRS)
Ibrahim, S. R.
1985-01-01
This is a compilation of reports by one author on one subject. It consists of the following five journal articles: (1) A Parametric Study of the Ibrahim Time Domain Modal Identification Algorithm; (2) Large Modal Survey Testing Using the Ibrahim Time Domain Identification Technique; (3) Computation of Normal Modes from Identified Complex Modes; (4) Dynamic Modeling of Structures from Measured Complex Modes; and (5) Time Domain Quasi-Linear Identification of Nonlinear Dynamic Systems.
Nishiura, Hiroshi
2009-01-01
Determination of the most appropriate quarantine period for those exposed to smallpox is crucial to the construction of an effective preparedness program against a potential bioterrorist attack. This study reanalyzed data on the incubation period distribution of smallpox to allow the optimal quarantine period to be objectively calculated. In total, 131 cases of smallpox were examined; incubation periods were extracted from four different sets of historical data and only cases arising from exposure for a single day were considered. The mean (median and standard deviation (SD)) incubation period was 12.5 (12.0, 2.2) days. Assuming lognormal and gamma distributions for the incubation period, maximum likelihood estimates (and corresponding 95% confidence interval (CI)) of the 95th percentile were 16.4 (95% CI: 15.6, 17.9) and 16.2 (95% CI: 15.5, 17.4) days, respectively. Using a non-parametric method, the 95th percentile point was estimated as 16 (95% CI: 15, 17) days. The upper 95% CIs of the incubation periods at the 90th, 95th and 99th percentiles were shorter than 17, 18 and 23 days, respectively, using both parametric and non-parametric methods. These results suggest that quarantine measures can ensure non-infection among those exposed to smallpox with probabilities higher than 95-99%, if the exposed individuals are quarantined for 18-23 days after the date of contact tracing.
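A toy version of the percentile calculation is sketched below on simulated data; the lognormal fit and the simulated sample stand in for the historical incubation-period data used in the study.

```python
import numpy as np
from scipy import stats

# Fit a lognormal to incubation periods (days) and read off the 95th
# percentile, as in the analysis above (the data here are simulated)
rng = np.random.default_rng(3)
days = rng.lognormal(mean=np.log(12.0), sigma=0.18, size=131)

shape, loc, scale = stats.lognorm.fit(days, floc=0)  # MLE, loc fixed at 0
print("95th percentile:", stats.lognorm.ppf(0.95, shape, loc, scale))
```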
Bláha, M; Hoch, J; Ferko, A; Ryška, A; Hovorková, E
Improvement in any human activity is preconditioned by inspection of results and by feedback used to modify the processes applied. Comparison of experts' experience in a given field is another indispensable part, leading to optimisation and improvement of processes and, optimally, to implementation of standards. For the purpose of objective comparison and assessment of processes, it is always necessary to describe the processes in a parametric way, to obtain representative data, to assess the achieved results, and to provide unquestionable, data-driven feedback based on such analysis. This may lead to a consensus on the definition of standards in the given area of health care. Total mesorectal excision (TME) is a standard procedure in rectal cancer (C20) surgical treatment. However, the quality of the performed procedures varies among health care facilities, which is given, among other factors, by internal processes and surgeons' experience. Assessment of surgical treatment results is therefore of key importance. A pathologist who assesses the resected tissue can provide valuable feedback in this respect. An information system for the parametric assessment of TME performance is described in our article, including the technical background in the form of a multicentre clinical registry and the structure of observed parameters. We consider the proposed system of TME parametric assessment significant for improving TME performance, aimed at reducing local recurrences and improving the overall prognosis of patients. Keywords: rectal cancer, total mesorectal excision, parametric data, clinical registries, TME registry.
Potential Use of High Frequency Data Transmission for Oceanic Air Traffic Control Improvement
DOT National Transportation Integrated Search
1979-09-01
This report is concerned with the transatlantic Air Traffic Control (ATC) data links in the high frequency (HF) band. The report tries to broaden the appropriate communication system concepts by fortifying them with general parametric objectives. Whi...
Bahrami, Sheyda; Shamsi, Mousa
2017-01-01
Functional magnetic resonance imaging (fMRI) is a popular method to probe the functional organization of the brain using hemodynamic responses. In this method, volume images of the entire brain are obtained with very good spatial resolution but low temporal resolution. However, such data always suffer from high dimensionality in the face of classification algorithms. In this work, we combine a support vector machine (SVM) with a self-organizing map (SOM) to obtain a feature-based classification: SOM is used for feature extraction and labeling of the datasets, and a linear-kernel SVM is then used for detecting the active areas. SOM has two major advantages: (i) it reduces the dimensionality of the data sets, lowering computational complexity, and (ii) it is useful for identifying brain regions with small onset differences in hemodynamic responses. Our non-parametric model is compared with parametric and non-parametric methods. We use simulated fMRI data sets with block design inputs and consider a contrast-to-noise ratio (CNR) of 0.6 for the simulated datasets. The simulated fMRI dataset has 1-4% contrast in active areas. The accuracy of our proposed method is 93.63% and the error rate is 6.37%.
Subharmonics, Chaos, and Beyond
NASA Technical Reports Server (NTRS)
Adler, Laszlo; Yost, William T.; Cantrell, John H.
2011-01-01
While studying finite amplitude ultrasonic wave resonance in a one dimensional liquid-filled cavity, which is formed by a narrow band transducer and a plane reflector, subharmonics of the driver's frequency were observed in addition to the expected harmonic structure. Subsequently it was realized that the system was one of the many examples where parametric resonance takes place and in which the observed subharmonics are parametrically generated. Parametric resonance occurs in any physical system which has a periodically modulated natural frequency. The generation mechanism also requires a sufficiently high threshold value of the driving amplitude so that the system becomes increasingly nonlinear in response. The nonlinear features were recently investigated and are the objective of this presentation. An ultrasonic interferometer with optical precision was built. The transducers were compressional undamped quartz and Lithium Niobate crystals ranging from 1 to 10 MHz, and driven by a high power amplifier. Both an optical diffraction system and a receive transducer attached to an aligned reflector with lapped flat and parallel surfaces were used to observe the generated frequency components in the cavity.
A novel SURE-based criterion for parametric PSF estimation.
Xue, Feng; Blu, Thierry
2015-02-01
We propose an unbiased estimate of a filtered version of the mean squared error, the blur-SURE (Stein's unbiased risk estimate), as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs, involving a scaling factor that controls the blur size. A typical example of such parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.
Advanced Imaging Methods for Long-Baseline Optical Interferometry
NASA Astrophysics Data System (ADS)
Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.
2008-11-01
We address the data processing methods needed for imaging with a long baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired from radio-astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called "soft support constraint" that favors the object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.
Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.
2000-01-01
The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.
BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.
2013-03-20
Blind-source separation techniques are used to extract the transmission spectrum of the hot-Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than parametric methods can be obtained for 11 spectral bins with bin sizes of ≈0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric de-trending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.
Shape sensing using multi-core fiber optic cable and parametric curve solutions.
Moore, Jason P; Rogge, Matthew D
2012-01-30
The shape of a multi-core optical fiber is calculated by numerically solving a set of Frenet-Serret equations describing the path of the fiber in three dimensions. Included in the Frenet-Serret equations are curvature and bending direction functions derived from distributed fiber Bragg grating strain measurements in each core. The method offers advantages over prior art in that it determines complex three-dimensional fiber shape as a continuous parametric solution rather than an integrated series of discrete planar bends. Results and error analysis of the method using a tri-core optical fiber is presented. Maximum error expressed as a percentage of fiber length was found to be 7.2%.
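A minimal sketch of the numerical solution is given below: the Frenet-Serret system is integrated along arclength, with constant curvature and torsion standing in for the FBG-derived curvature and bending-direction functions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def frenet_serret(s, state, kappa, tau):
    """Frenet-Serret ODEs; state = [r, T, N, B] flattened (12 values).

    In the method above, kappa(s) and tau(s) would be built from the
    distributed FBG strain measurements in the cores; constant stand-in
    functions are used here for illustration."""
    r, T, N, B = state.reshape(4, 3)
    return np.concatenate([T,                       # dr/ds = T
                           kappa(s) * N,            # dT/ds = kappa N
                           -kappa(s) * T + tau(s) * B,  # dN/ds
                           -tau(s) * N])            # dB/ds = -tau N

y0 = np.concatenate([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
sol = solve_ivp(frenet_serret, (0.0, 1.0), y0,
                args=(lambda s: 2.0, lambda s: 0.5), dense_output=True)
shape = sol.y[:3]  # continuous parametric fiber centreline r(s)
print(shape[:, -1])
```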
Organizing Space Shuttle parametric data for maintainability
NASA Technical Reports Server (NTRS)
Angier, R. C.
1983-01-01
A model of organization and management of Space Shuttle data is proposed. Shuttle avionics software is parametrically altered by a reconfiguration process for each flight. As the flight rate approaches an operational level, current methods of data management would become increasingly complex. An alternative method is introduced, using modularized standard data, and its implications for data collection, integration, validation, and reconfiguration processes are explored. Information modules are cataloged for later use, and may be combined in several levels for maintenance. For each flight, information modules can then be selected from the catalog at a high level. These concepts take advantage of the reusability of Space Shuttle information to reduce the cost of reconfiguration as flight experience increases.
Hot-film wall shear instrumentation for boundary layer transition research
NASA Technical Reports Server (NTRS)
Schneider, Steven P.
1994-01-01
Measurements of the performance of hot-film wall-shear sensors were performed to aid development of improved sensors. The effect of film size and substrate properties on the sensor performance was quantified through parametric studies carried out both electronically and in a shock tube. The results show that sensor frequency response increases with decreasing sensor size, while at the same time sensitivity decreases. Substrate effects were also studied, through parametric variation of thermal conductivity and heat capacity. Early studies used complex dual-layer substrates, while later studies were designed for both single-layer and dual-layer substrates. Sensor failures and funding limitations have precluded completion of the substrate thermal-property tests.
The benefits of adaptive parametrization in multi-objective Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John
2010-10-01
In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Component Analysis of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective: higher quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).
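A minimal sketch of the re-alignment idea, assuming the approximation set is available as a plain array; this illustrates PCA-based direction re-alignment in general, not the authors' Tabu Search code.

```python
# Illustrative sketch: re-align pattern-search move directions with the
# principal axes of the current approximation set, and drop low-variance
# directions to shrink the effective design space.
import numpy as np

def realigned_directions(approx_set, keep_fraction=0.95):
    """approx_set: (n_points, n_vars) array of non-dominated designs."""
    X = approx_set - approx_set.mean(axis=0)
    # Principal axes of the approximation set via SVD.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2 / np.sum(s**2)
    # Informed reduction: keep only directions explaining most of the variance.
    k = int(np.searchsorted(np.cumsum(var), keep_fraction)) + 1
    return Vt[:k]   # rows are the new (reduced) search directions
```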
NASA Astrophysics Data System (ADS)
Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien
2006-03-01
Subdivision surfaces and parameterization are desirable for many algorithms that are commonly used in Medical Image Analysis. However, extracting an accurate surface and parameterization can be difficult for many anatomical objects of interest, due to noisy segmentations and the inherent variability of the object. The thin cartilages of the knee are an example of this, especially after damage is incurred from injuries or conditions like osteoarthritis. As a result, the cartilages can have different topologies or exist in multiple pieces. In this paper we present a topology preserving (genus 0) subdivision-based parametric deformable model that is used to extract the surfaces of the patella and tibial cartilages in the knee. These surfaces have minimal thickness in areas without cartilage. The algorithm inherently incorporates several desirable properties, including: shape based interpolation, sub-division remeshing and parameterization. To illustrate the usefulness of this approach, the surfaces and parameterizations of the patella cartilage are used to generate a 3D statistical shape model.
A parametric ribcage geometry model accounting for variations among the adult population.
Wang, Yulong; Cao, Libo; Bai, Zhonghao; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen
2016-09-06
The objective of this study is to develop a parametric ribcage model that can account for morphological variations among the adult population. Ribcage geometries, including 12 pairs of ribs, sternum, and thoracic spine, were collected from CT scans of 101 adult subjects through image segmentation, landmark identification (1016 for each subject), symmetry adjustment, and template mesh mapping (26,180 elements for each subject). Generalized Procrustes analysis (GPA), principal component analysis (PCA), and regression analysis were used to develop a parametric ribcage model, which can predict nodal locations of the template mesh according to age, sex, height, and body mass index (BMI). Two regression models, a quadratic model for estimating the ribcage size and a linear model for estimating the ribcage shape, were developed. The results showed that the ribcage size was dominated by height (p=0.000) and the age-sex interaction (p=0.007), and the ribcage shape was significantly affected by age (p=0.0005), sex (p=0.0002), height (p=0.0064) and BMI (p=0.0000). Along with proper assignment of cortical bone thickness, material properties and failure properties, this parametric ribcage model can directly serve as the mesh of finite element ribcage models for quantifying effects of human characteristics on thoracic injury risks.
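The following sketch illustrates the general statistical-shape-model recipe the abstract describes (PCA on aligned template-mesh geometries plus a regression from age, sex, height, and BMI to PC scores); the array shapes and the purely linear form are assumptions for illustration, not the paper's calibrated quadratic/linear models.

```python
# Conceptual sketch, assuming landmark/mesh matrices are already extracted
# and GPA-aligned: PCA plus a linear regression from covariates to PC scores.
import numpy as np

def fit_shape_regression(geoms, covariates, n_pcs=10):
    """geoms: (n_subjects, n_nodes*3) flattened template-mesh nodes (post-GPA);
       covariates: (n_subjects, 4) columns = age, sex, height, BMI."""
    mean = geoms.mean(axis=0)
    U, s, Vt = np.linalg.svd(geoms - mean, full_matrices=False)
    scores = U[:, :n_pcs] * s[:n_pcs]                  # PC scores per subject
    X = np.column_stack([np.ones(len(covariates)), covariates])
    beta, *_ = np.linalg.lstsq(X, scores, rcond=None)  # linear shape model
    return mean, Vt[:n_pcs], beta

def predict_ribcage(mean, pcs, beta, age, sex, height, bmi):
    x = np.array([1.0, age, sex, height, bmi])
    return mean + (x @ beta) @ pcs                     # predicted nodal locations
```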
Testing the cosmic conservation of photon number with type Ia supernovae and ages of old objects
NASA Astrophysics Data System (ADS)
Jesus, J. F.; Holanda, R. F. L.; Dantas, M. A.
2017-12-01
In this paper, we obtain luminosity distances by using ages of 32 old passive galaxies distributed over the redshift interval 0.11 < z < 1.84 and test the cosmic conservation of photon number by comparing them with 580 distance moduli of type Ia supernovae (SNe Ia) from the so-called Union 2.1 compilation. Our analyses are based on the fact that the method of obtaining ages of galaxies relies on the detailed shape of galaxy spectra but not on galaxy luminosity. Possible departures from cosmic conservation of photon number are parametrized by τ(z) = 2εz and τ(z) = εz/(1+z) (for ε = 0 the conservation of photon number is recovered). We find ε = 0.016^{+0.078}_{-0.075} from the first parametrization and ε = -0.18^{+0.25}_{-0.24} from the second parametrization, both limits at 95% c.l. In this way, no significant departure from cosmic conservation of photon number is verified. In addition, by considering the total age as inferred from the Planck (2015) analysis, we find the incubation time t_{inc} = 1.66 ± 0.29 Gyr and t_{inc} = 1.23 ± 0.27 Gyr at 68% c.l. for each parametrization, respectively.
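A hedged sketch of how such an opacity test can be set up: cosmic opacity τ(z) dims SNe Ia fluxes by e^{-τ}, shifting the distance modulus by Δμ = 2.5 log₁₀(e) τ(z), and ε is then fitted by comparing opacity-corrected SN moduli against the opacity-free, age-based moduli. The data arrays and the simple grid scan are placeholders, not the paper's statistical machinery.

```python
# Minimal sketch of the opacity test; mu_sn, mu_age, sigma, z are placeholders.
import numpy as np

def tau1(z, eps): return 2.0 * eps * z           # first parametrization
def tau2(z, eps): return eps * z / (1.0 + z)     # second parametrization

def chi2(eps, z, mu_sn, mu_age, sigma, tau=tau1):
    # Correct the observed SN moduli for opacity before comparing with the
    # age-based (opacity-free) distance moduli.
    mu_corrected = mu_sn - 2.5 * np.log10(np.e) * tau(z, eps)
    return np.sum(((mu_corrected - mu_age) / sigma) ** 2)

# eps = 0 recovers photon-number conservation; a crude grid scan would be:
# eps_grid = np.linspace(-0.5, 0.5, 1001)
# best = min(eps_grid, key=lambda e: chi2(e, z, mu_sn, mu_age, sigma))
```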
NASA Astrophysics Data System (ADS)
Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.
2018-01-01
This paper presents a variance parameterization method. Variance, or dimensional, parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. The method is fully integrated in a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for machine tooling used in manufacturing parts on multiaxial CNC machining centers in a real manufacturing process. The developed method significantly reduces tooling design time when a part's geometric parameters change. It can also shorten the design and engineering of preproduction, in particular the development of control programs for CNC equipment and coordinate measuring machines, and it automates the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.
Parametric Architecture in the Urban Space
NASA Astrophysics Data System (ADS)
Januszkiewicz, Krystyna; Kowalski, Karol G.
2017-10-01
The paper deals with parametric architecture, which attempts to introduce a new spatial language into the urban tissue, corresponding to the artistic consciousness and attitudes of the information and digital technologies era. The first part of the paper defines the main features of parametric architecture (such as folding, continuity and curvilinearity) which are characteristic of the new style named "parametricism". This architecture places a strong emphasis on geometry, materiality, feasibility and sustainability; what emerges is an explicit agenda promoting material ornamentation, spatial spectacle and formal theatricality. The second part presents the results of case studies, especially of parametric public-use buildings within the tissue of the city. The analyzed objects are: The Sage Gateshead (1998-2004) in Gateshead, the Kunsthaus in Graz (2000-2003), the Weltstadthaus (2003-2005) in Cologne, The Golden Terraces in Warsaw (2000-2007), the Metropol Parasol in Seville (2005-2011), King's Cross Station (2005-2012) in London, and the headquarters of the Pathé Foundation (2006-2014) in Paris. Each of the enumerated examples shows a diverse approach to designing in the urban space, reflecting the age of digital technologies and the information society. The conclusion emphasizes that the new concept of the spatialization of architecture is the equivalent of, among other examples, the democratization of the political system and the liberalization of the economy.
Lectures in Complex Systems, (1992). Volume 5
1993-05-01
[Table-of-contents fragment; recoverable entries:] Lattice Gas Methods for Partial Differential Equations (1989); P. W. Anderson, K. Arrow, and D. Pines (eds.), The Economy as an Evolving Complex System (1988); Cathleen Barczys, Laura Bloom, and Leslie Kay, "...to Improve EEG Classification and to Explore GA Parametrization"; "Symbiosis in Society and Monopoly in..."; and introductory genetic-algorithm sections (The Appeal of Evolution; Elements of Genetic Algorithms; A Simple GA; Overview of Some Applications of Genetic Algorithms; A Brief Example).
Robust simulation of buckled structures using reduced order modeling
NASA Astrophysics Data System (ADS)
Wiebe, R.; Perez, R. A.; Spottswood, S. M.
2016-09-01
Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever-present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use, but require no time stepping of, a (computationally expensive) truth model are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties, would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.
Sun, Chao; Feng, Wenquan; Du, Songlin
2018-01-01
As multipath is one of the dominating error sources for high-accuracy Global Navigation Satellite System (GNSS) applications, multipath mitigation approaches are employed to minimize this hazardous error in receivers. Binary offset carrier (BOC) modulation, as a modernized signal structure, is adopted to achieve significant enhancement. However, because of its multi-peak autocorrelation function, conventional multipath mitigation techniques for binary phase shift keying (BPSK) signals are not optimal. Currently, non-parametric and parametric approaches have been studied specifically aiming at multipath mitigation for BOC signals. Non-parametric techniques, such as Code Correlation Reference Waveforms (CCRW), usually have good feasibility with simple structures, but suffer from low universal applicability across different BOC signals. Parametric approaches can thoroughly eliminate multipath error by estimating multipath parameters. The problems with this category are the high computational complexity and the vulnerability to noise. To tackle the problem, we present a practical parametric multipath estimation method in the frequency domain for BOC signals. The received signal is transferred to the frequency domain to separate out the multipath channel transfer function for multipath parameter estimation. During this process, we apply segmentation and averaging to reduce both the noise effect and the computational load. The performance of the proposed method is evaluated and compared with previous work in three scenarios. Results indicate that the proposed averaging-Fast Fourier Transform (averaging-FFT) method achieves good robustness in severe multipath environments with lower computational load for both low-order and high-order BOC signals. PMID:29495589
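A schematic numpy rendering of the segmentation-and-averaging step described above; the per-segment division by the reference spectrum is a generic frequency-domain channel estimate, and all names are assumptions rather than the authors' implementation.

```python
# Sketch: split the received baseband signal into segments, estimate the
# channel transfer function per segment via FFT, and average the estimates
# to suppress noise before multipath parameter estimation.
import numpy as np

def averaged_channel_estimate(received, reference, n_segments):
    seg_len = len(received) // n_segments
    H = np.zeros(seg_len, dtype=complex)
    for k in range(n_segments):
        r = received[k * seg_len:(k + 1) * seg_len]
        s = reference[k * seg_len:(k + 1) * seg_len]
        R, S = np.fft.fft(r), np.fft.fft(s)
        H += R / (S + 1e-12)       # per-segment channel estimate
    return H / n_segments           # averaging reduces the noise variance
```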
NASA Technical Reports Server (NTRS)
Kamhawi, Hilmi N.
2012-01-01
This report documents the work performed from March 2010 to March 2012. The Integrated Design and Engineering Analysis (IDEA) environment is a collaborative environment based on an object-oriented, multidisciplinary, distributed framework built on the Adaptive Modeling Language (AML), supporting configuration design and parametric CFD grid generation. This report focuses on describing the work in the area of parametric CFD grid generation, using novel concepts for defining the interaction between the mesh topology and the geometry in such a way as to separate the mesh topology from the geometric topology while maintaining the link between the mesh topology and the actual geometry.
SHIPS: Spectral Hierarchical Clustering for the Inference of Population Structure in Genetic Studies
Bouaziz, Matthieu; Paccard, Caroline; Guedj, Mickael; Ambroise, Christophe
2012-01-01
Inferring the structure of populations has many applications for genetic research. In addition to providing information for evolutionary studies, it can be used to account for the bias induced by population stratification in association studies. To this end, many algorithms have been proposed to cluster individuals into genetically homogeneous sub-populations. The parametric algorithms, such as Structure, are very popular, but their underlying complexity and their high computational cost led to the development of faster parametric alternatives such as Admixture. Alternatives to these methods are the non-parametric approaches. Among this category, AWclust has proven efficient but fails to properly identify population structure for complex datasets. We present in this article a new clustering algorithm called Spectral Hierarchical clustering for the Inference of Population Structure (SHIPS), based on a divisive hierarchical clustering strategy, allowing a progressive investigation of population structure. This method takes genetic data as input to cluster individuals into homogeneous sub-populations and, with the use of the gap statistic, estimates the optimal number of such sub-populations. SHIPS was applied to a set of simulated discrete and admixed datasets and to real SNP datasets, namely data from the HapMap and Pan-Asian SNP consortia. The programs Structure, Admixture, AWclust and PCAclust were also investigated in a comparison study. SHIPS and the parametric approach Structure were the most accurate when applied to simulated datasets, both in terms of individual assignments and estimation of the correct number of clusters. The analysis of the results on the real datasets highlighted that the clusterings of SHIPS were the most consistent with the population labels or with those produced by the Admixture program. The performance of SHIPS when applied to SNP data, along with its relatively low computational cost and its ease of use, makes this method a promising solution to infer fine-scale genetic patterns. PMID:23077494
Revisiting dark energy models using differential ages of galaxies
NASA Astrophysics Data System (ADS)
Rani, Nisha; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha; Biesiada, Marek
2017-03-01
In this work, we use a test based on the differential ages of galaxies for distinguishing dark energy models. As proposed by Jimenez and Loeb in [1], relative ages of galaxies can be used to put constraints on various cosmological parameters. In the same vein, we reconstruct H0 dt/dz and its derivative (H0 d²t/dz²) using a model independent technique called non-parametric smoothing. Basically, dt/dz is the change in the age of the object as a function of redshift, which is directly linked with the Hubble parameter. Hence for reconstruction of this quantity, we use the most recent H(z) data. Further, we calculate H0 dt/dz and its derivative for several models like Phantom, Einstein de Sitter (EdS), ΛCDM, Chevallier-Polarski-Linder (CPL) parametrization, Jassal-Bagla-Padmanabhan (JBP) parametrization and Feng-Shen-Li-Li (FSLL) parametrization. We check the consistency of these models with the results of reconstruction obtained in a model independent way from the data. It is observed that H0 dt/dz as a tool is not able to distinguish between the ΛCDM, CPL, JBP and FSLL parametrizations but, as expected, the EdS and Phantom models show noticeable deviation from the reconstructed results. Further, the derivative of H0 dt/dz for various dark energy models is more sensitive at low redshift. It is found that the FSLL model is not consistent with the reconstructed results; however, the ΛCDM model is in concordance with the 3σ region of the reconstruction at redshift z ≥ 0.3.
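For a concrete sense of the diagnostic, the sketch below evaluates H0 dt/dz = -1/((1+z)E(z)), with E(z) = H(z)/H0, for flat ΛCDM and EdS; parameter values are illustrative only.

```python
# Worked sketch of the age-redshift diagnostic for two example models.
import numpy as np

def E_lcdm(z, om=0.3): return np.sqrt(om * (1 + z) ** 3 + 1 - om)
def E_eds(z):          return (1 + z) ** 1.5

def H0_dtdz(z, E):
    # dt/dz = -1 / ((1+z) H(z)), so in dimensionless form:
    return -1.0 / ((1 + z) * E(z))

z = np.linspace(0.0, 2.0, 201)
curves = {"LCDM": H0_dtdz(z, E_lcdm), "EdS": H0_dtdz(z, E_eds)}
# The derivative d(H0 dt/dz)/dz can then be taken numerically, e.g. np.gradient.
```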
NASA Astrophysics Data System (ADS)
Gosselin, Jeremy M.; Dosso, Stan E.; Cassidy, John F.; Quijano, Jorge E.; Molnar, Sheri; Dettmer, Jan
2017-10-01
This paper develops and applies a Bernstein-polynomial parametrization to efficiently represent general, gradient-based profiles in nonlinear geophysical inversion, with application to ambient-noise Rayleigh-wave dispersion data. Bernstein polynomials provide a stable parametrization in that small perturbations to the model parameters (basis-function coefficients) result in only small perturbations to the geophysical parameter profile. A fully nonlinear Bayesian inversion methodology is applied to estimate shear wave velocity (VS) profiles and uncertainties from surface wave dispersion data extracted from ambient seismic noise. The Bayesian information criterion is used to determine the appropriate polynomial order consistent with the resolving power of the data. Data error correlations are accounted for in the inversion using a parametric autoregressive model. The inversion solution is defined in terms of marginal posterior probability profiles for VS as a function of depth, estimated using Metropolis-Hastings sampling with parallel tempering. This methodology is applied to synthetic dispersion data as well as data processed from passive array recordings collected on the Fraser River Delta in British Columbia, Canada. Results from this work are in good agreement with previous studies, as well as with co-located invasive measurements. The approach considered here is better suited than `layered' modelling approaches in applications where smooth gradients in geophysical parameters are expected, such as soil/sediment profiles. Further, the Bernstein polynomial representation is more general than smooth models based on a fixed choice of gradient type (e.g. power-law gradient) because the form of the gradient is determined objectively by the data, rather than by a subjective parametrization choice.
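A small sketch of the Bernstein-polynomial representation itself, assuming illustrative coefficient values: the coefficients are the inversion parameters, and small perturbations to them produce only small perturbations of the VS profile, which is the stability property the paper exploits.

```python
# Sketch of a Bernstein-polynomial Vs(z) profile; coefficients and depths
# below are illustrative, not values from the inversion.
import numpy as np
from scipy.special import comb

def bernstein_profile(coeffs, depth, max_depth):
    n = len(coeffs) - 1
    x = np.clip(depth / max_depth, 0.0, 1.0)          # map depth to [0, 1]
    basis = np.array([comb(n, k) * x**k * (1 - x)**(n - k)
                      for k in range(n + 1)])
    return coeffs @ basis                              # Vs at requested depths

z = np.linspace(0.0, 50.0, 101)                        # depth in metres (assumed)
vs = bernstein_profile(np.array([150.0, 220.0, 400.0, 600.0]), z, 50.0)
```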
Martín H., José Antonio
2013-01-01
Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In the life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems, there is the need to prove a hypothesis that a certain property of an object is present, which holds if and only if the object adopts some particular admissible structure (an NP-certificate), or absent (no admissible structure); however, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), though parametric. The only requirement is sufficient computational power, which is controlled by the parameter. Nevertheless, it is proved here that the probability of requiring a large value of the parameter to obtain a solution for a random graph decreases exponentially, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretically expected results. PMID:23349711
Parametric Studies for Scenario Earthquakes: Site Effects and Differential Motion
NASA Astrophysics Data System (ADS)
Panza, G. F.; Panza, G. F.; Romanelli, F.
2001-12-01
In the presence of strong lateral heterogeneities, the generation of local surface waves and local resonance can give rise to a complicated pattern in the spatial ground-shaking scenario. For any object of the built environment with dimensions greater than the characteristic length of the ground motion, different parts of its foundations can experience severe non-synchronous seismic input. In order to perform an accurate estimate of the site effects, and of differential motion, in realistic geometries, it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models, allows the construction of damage scenarios that are out of reach of stochastic models. Synthetic signals, to be used as seismic input in a subsequent engineering analysis, e.g. for the design of earthquake-resistant structures or for the estimation of differential motion, can be produced at a very low cost/benefit ratio. We illustrate the work done in the framework of a large international cooperation following the guidelines of the UNESCO IUGS IGCP Project 414 "Realistic Modeling of Seismic Input for Megacities and Large Urban Areas" and show the very recent numerical experiments carried out within the EC project "Advanced methods for assessing the seismic vulnerability of existing motorway bridges" (VAB) to assess the importance of non-synchronous seismic excitation of long structures. http://www.ictp.trieste.it/www_users/sand/projects.html
Arregui-Dalmases, Carlos; Del Pozo, Eduardo; Duprey, Sonia; Lopez-Valdes, Francisco J; Lau, Anthony; Subit, Damien; Kent, Richard
2010-06-01
The objectives of this study were to examine the axial response of the clavicle under quasistatic compressions replicating the body boundary conditions and to quantify the sensitivity of finite element-predicted fracture in the clavicle to several parameters. Clavicles were harvested from 14 donors (age range 14-56 years). Quasistatic axial compression tests were performed using a custom rig designed to replicate in situ boundary conditions. Prior to testing, high-resolution computed tomography (CT) scans were taken of each clavicle. From those images, finite element models were constructed. Factors varied parametrically included the density used to threshold cortical bone in the CT scans, the presence of trabecular bone, the mesh density, Young's modulus, the maximum stress, and the element type (shell vs. solid, triangular vs. quadrilateral surface elements). The experiments revealed significant variability in the peak force (2.41 +/- 0.72 kN) and displacement to peak force (4.9 +/- 1.1 mm), with age (p < .05) and with some geometrical traits of the specimens. In the finite element models, the failure force and location were moderately dependent upon the Young's modulus. The fracture force was highly sensitive to the yield stress (80-110 MPa). Neither fracture location nor force was strongly dependent on mesh density as long as the element size was less than 5 x 5 mm(2). Both the fracture location and force were strongly dependent upon the threshold density used to define the thickness of the cortical shell.
Development of a parametric kinematic model of the human hand and a novel robotic exoskeleton.
Burton, T M W; Vaidyanathan, R; Burgess, S C; Turton, A J; Melhuish, C
2011-01-01
This paper reports the integration of a kinematic model of the human hand during cylindrical grasping, with specific focus on the accurate mapping of thumb movement during grasping motions, and a novel, multi-degree-of-freedom assistive exoskeleton mechanism based on this model. The model includes thumb maximum hyper-extension for grasping large objects (greater than about 50 mm). The exoskeleton includes a novel four-bar mechanism designed to reproduce natural thumb opposition and a novel synchro-motion pulley mechanism for coordinated finger motion. A computer-aided design environment is used to allow the exoskeleton to be rapidly customized to the hand dimensions of a specific patient. Trials comparing the kinematic model to observed data of hand movement show the model to be capable of mapping thumb and finger joint flexion angles during grasping motions. Simulations show the exoskeleton to be capable of reproducing the complex motion of the thumb to oppose the fingers during cylindrical and pinch grip motions.
A Computational Model of Multidimensional Shape
Liu, Xiuwen; Shi, Yonggang; Dinov, Ivo
2010-01-01
We develop a computational model of shape that extends existing Riemannian models of curves to multidimensional objects of general topological type. We construct shape spaces equipped with geodesic metrics that measure how costly it is to interpolate two shapes through elastic deformations. The model employs a representation of shape based on the discrete exterior derivative of parametrizations over a finite simplicial complex. We develop algorithms to calculate geodesics and geodesic distances, as well as tools to quantify local shape similarities and contrasts, thus obtaining a formulation that accounts for regional differences and integrates them into a global measure of dissimilarity. The Riemannian shape spaces provide a common framework to treat numerous problems such as the statistical modeling of shapes, the comparison of shapes associated with different individuals or groups, and modeling and simulation of shape dynamics. We give multiple examples of geodesic interpolations and illustrations of the use of the models in brain mapping, particularly, the analysis of anatomical variation based on neuroimaging data. PMID:21057668
On the Use of CAD and Cartesian Methods for Aerodynamic Optimization
NASA Technical Reports Server (NTRS)
Nemec, M.; Aftosmis, M. J.; Pulliam, T. H.
2004-01-01
The objective of this paper is to present the development of an optimization capability for Cart3D, a Cartesian inviscid-flow analysis package. We present the construction of a new optimization framework and we focus on the following issues: 1) a component-based geometry parameterization approach using parametric-CAD models and CAPRI. A novel geometry server is introduced that addresses the issue of parallel efficiency while only sparingly consuming CAD resources; 2) the use of genetic and gradient-based algorithms for three-dimensional aerodynamic design problems. The influence of noise on the optimization methods is studied. Our goal is to create a responsive and automated framework that efficiently identifies design modifications that result in substantial performance improvements. In addition, we examine the architectural issues associated with the deployment of a CAD-based approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines. We demonstrate the effectiveness of the framework for a design problem that features topology changes and complex geometry.
NASA Technical Reports Server (NTRS)
Stone, N. H.
1981-01-01
The objectives are to provide a parametric description of the electrostatic interaction of a mesosonic, collisionless plasma with conducting bodies on the order of 1 to 10 Debye lengths in size, and to extend this description to the satellite-ionospheric interaction, where possible. Experimental findings include: the wake of a geometrically complex body appears to be a linear superposition of the wakes of its simple geometric components; and vector ion flux measurements show converging ion streams at the wake axis and direct evidence of ion streams deflected from the wake axis by the positive space charge potential associated with the axial ion peak. The extension to the satellite-ionospheric interaction utilizes qualitative scaling and indicates that similar, but smaller amplitude, wake structures may be expected for small or highly charged bodies. However, for large bodies at small potentials, the structure may be diffused by the thermal ion motion and the dispersion resulting from space charge potentials.
Introduction and Highlights of the Workshop
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Venneri, Samuel L.
1997-01-01
Four generations of CAD/CAM systems can be identified, corresponding to changes in both modeling functionality and software architecture. The systems evolved from 2D and wireframes to solid modeling, to parametric/variational modelers to the current simulation-embedded systems. Recent developments have enabled design engineers to perform many of the complex analysis tasks, typically performed by analysis experts. Some of the characteristics of the current and emerging CAD/CAM/CAE systems are described in subsequent presentations. The focus of the workshop is on the potential of CAD/CAM/CAE systems for use in simulating the entire mission and life-cycle of future aerospace systems, and the needed development to realize this potential. First, the major features of the emerging computing, communication and networking environment are outlined; second, the characteristics and design drivers of future aerospace systems are identified; third, the concept of intelligent synthesis environment being planned by NASA, the UVA ACT Center and JPL is presented; and fourth, the objectives and format of the workshop are outlined.
Design and performance of an analysis-by-synthesis class of predictive speech coders
NASA Technical Reports Server (NTRS)
Rose, Richard C.; Barnwell, Thomas P., III
1990-01-01
The performance of a broad class of analysis-by-synthesis linear predictive speech coders is quantified experimentally. The class of coders includes a number of well-known techniques as well as a very large number of speech coders which have not been named or studied. A general formulation for deriving the parametric representation used in all of the coders in the class is presented. A new coder, named the self-excited vocoder, is discussed because of its good performance with low complexity, and because of the insight this coder gives to analysis-by-synthesis coders in general. The results of a study comparing the performances of different members of this class are presented. The study takes the form of a series of formal subjective and objective speech quality tests performed on selected coders. The results of this study lead to some interesting and important observations concerning the controlling parameters for analysis-by-synthesis speech coders.
Integrating Multibody Simulation and CFD: toward Complex Multidisciplinary Design Optimization
NASA Astrophysics Data System (ADS)
Pieri, Stefano; Poloni, Carlo; Mühlmeier, Martin
This paper describes the use of integrated multidisciplinary analysis and optimization of a race car model on a predefined circuit. The objective is the definition of the most efficient geometric configuration that can guarantee the lowest lap time. In order to carry out this study it has been necessary to interface the design optimization software modeFRONTIER with the following software packages: CATIA v5, a three-dimensional CAD package, used for the definition of the parametric geometry; A.D.A.M.S./Motorsport, a multi-body dynamics simulation package; IcemCFD, a mesh generator, for the automatic generation of the CFD grid; and CFX, a Navier-Stokes code, for the prediction of fluid-dynamic forces. The process integration makes it possible to compute, for each geometric configuration, a set of aerodynamic coefficients that are then used in the multibody simulation for the computation of the lap time. Finally, an automatic optimization procedure is started and the lap time is minimized. The whole process is executed on a Linux cluster running CFD simulations in parallel.
Single-stage-to-orbit versus two-stage-to-orbit: A cost perspective
NASA Astrophysics Data System (ADS)
Hamaker, Joseph W.
1996-03-01
This paper considers the possible life-cycle costs of single-stage-to-orbit (SSTO) and two-stage-to-orbit (TSTO) reusable launch vehicles (RLV's). The analysis parametrically addresses the issue such that the preferred economic choice comes down to the relative complexity of the TSTO compared to the SSTO. The analysis defines the boundary complexity conditions at which the two configurations have equal life-cycle costs, and finally, makes a case for the economic preference of SSTO over TSTO.
2009-02-01
[Garbled extraction fragment; recoverable content:] The experiments considered topology changes, using a subset of the TOSCA shape database [10] consisting of four different objects: cat, dog, male, and female. Holes and noise of the kind often encountered as acquisition imperfections when shapes are acquired using a 3D scanner were also tested on a subset of the TOSCA shape database. Cited works include a paper on object recognition (Point Based Graphics, Prague, 2007) and A. Spira and R. Kimmel, "An efficient solution to the eikonal equation on parametric manifolds."
EMISSION TEST REPORT, OMSS FIELD TEST ON CARBON INJECTION FOR MERCURY CONTROL
The report discusses results of a parametric evaluation of powdered activated carbon for control of mercury (Hg) emission from a municipal waste combustor (MWC) equipped with a lime spray dryer absorber/fabric filter (SD/FF). The primary test objectives were to evaluate the effe...
Application of Transformations in Parametric Inference
ERIC Educational Resources Information Center
Brownstein, Naomi; Pensky, Marianna
2008-01-01
The objective of the present paper is to provide a simple approach to statistical inference using the method of transformations of variables. We demonstrate performance of this powerful tool on examples of constructions of various estimation procedures, hypothesis testing, Bayes analysis and statistical inference for the stress-strength systems.…
The report describes the use of a pilot-scale catalytic incineration unit/solvent generation system to investigate the effectiveness of catalytic incineration as a way to destroy volatile organic compounds (VOCs) and hazardous/toxic air pollutants (HAPs). Objectives of the study ...
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2002-01-01
A process is described which enables the generation of 35 time-dependent viscous solutions for a YAV-8B Harrier in ground effect in one week. Overset grids are used to model the complex geometry of the Harrier aircraft and the interaction of its jets with the ground plane and low-speed ambient flow. The time required to complete this parametric study is drastically reduced through the use of process automation, modern computational platforms, and parallel computing. Moreover, a dual-time-stepping algorithm is described which improves solution robustness. Unsteady flow visualization and a frequency domain analysis are also used to identify and correlate key flow structures with the time variation of lift.
Joon Kim, Kyoung; Bar-Cohen, Avram; Han, Bongtae
2012-02-20
This study reports both analytical and numerical thermal-structural models of polymer Bragg grating (PBG) waveguides illuminated by a light emitting diode (LED). A polymethyl methacrylate (PMMA) Bragg grating (BG) waveguide is chosen as an analysis vehicle to explore parametric effects of incident optical powers and substrate materials on the thermal-structural behavior of the BG. Analytical models are verified by comparing analytically predicted average excess temperatures, and thermally induced axial strains and stresses with numerical predictions. A parametric study demonstrates that the PMMA substrate induces more adverse effects, such as higher excess temperatures, complex axial temperature profiles, and greater and more complicated thermally induced strains in the BG compared with the Si substrate.
NASA Astrophysics Data System (ADS)
Vasilkin, Andrey
2018-03-01
The more design solutions an engineer can synthesize at the search stage when designing high-rise buildings, the more likely it is that the finally adopted version will be the most efficient and economical. However, in modern market conditions, given the complexity and responsibility of high-rise buildings, the designer does not have the time needed to develop, analyze and compare any significant number of options. To solve this problem, it is expedient to use the high potential of computer-aided design. To implement an automated search for design solutions, it is proposed to develop computing facilities whose application will significantly increase the productivity of the designer and reduce the complexity of designing. Methods of structural and parametric optimization have been adopted as the basis of these computing facilities. Their efficiency in the synthesis of design solutions is shown, and schemes are constructed that illustrate and explain the introduction of structural optimization into the traditional design of steel frames. To solve the problem of synthesis and comparison of design solutions for steel frames, it is proposed to develop computing facilities that significantly reduce the complexity of search designing, based on the use of methods of structural and parametric optimization.
Three-Dimensional Modeling of Aircraft High-Lift Components with Vehicle Sketch Pad
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2016-01-01
Vehicle Sketch Pad (OpenVSP) is a parametric geometry modeler that has been used extensively for conceptual design studies of aircraft, including studies using higher-order analysis. OpenVSP can model flap and slat surfaces using simple shearing of the airfoil coordinates, which is an appropriate level of complexity for lower-order aerodynamic analysis methods. For three-dimensional analysis, however, there is not a built-in method for defining the high-lift components in OpenVSP in a realistic manner, or for controlling their complex motions in a parametric manner that is intuitive to the designer. This paper seeks instead to utilize OpenVSP's existing capabilities, and establish a set of best practices for modeling high-lift components at a level of complexity suitable for higher-order analysis methods. Techniques are described for modeling the flap and slat components as separate three-dimensional surfaces, and for controlling their motion using simple parameters defined in the local hinge-axis frame of reference. To demonstrate the methodology, an OpenVSP model for the Energy-Efficient Transport (EET) AR12 wind-tunnel model has been created, taking advantage of OpenVSP's Advanced Parameter Linking capability to translate the motions of the high-lift components from the hinge-axis coordinate system to a set of transformations in OpenVSP's frame of reference.
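As a hedged sketch of the hinge-axis bookkeeping described above, the function below deflects a flap point set about an arbitrary hinge line using Rodrigues' rotation formula; this is the kind of transformation that would then be expressed as parameter links in the modeler's frame of reference. The function name and example call are hypothetical.

```python
# Rotate points about a hinge line (point p0, direction `axis`) by angle_deg.
import numpy as np

def rotate_about_hinge(points, p0, axis, angle_deg):
    a = axis / np.linalg.norm(axis)             # unit hinge axis
    th = np.radians(angle_deg)
    v = points - p0                              # work in hinge-local coordinates
    # Rodrigues' formula: v cos(th) + (a x v) sin(th) + a (a.v)(1 - cos(th))
    v_rot = (v * np.cos(th)
             + np.cross(a, v) * np.sin(th)
             + np.outer(v @ a, a) * (1 - np.cos(th)))
    return v_rot + p0

# e.g. deflect a flap surface 30 degrees about its hinge line:
# new_pts = rotate_about_hinge(flap_pts, hinge_origin, hinge_axis, 30.0)
```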
Stability of uncertain impulsive complex-variable chaotic systems with time-varying delays.
Zheng, Song
2015-09-01
In this paper, the robust exponential stabilization of uncertain impulsive complex-variable chaotic delayed systems is considered, with parameter perturbations and delayed impulses. It is assumed that the considered complex-variable chaotic systems have bounded parametric uncertainties together with state variables on the impulses related to the time-varying delays. Based on the theories of adaptive control and impulsive control, some less conservative and easily verified stability criteria are established for a class of complex-variable chaotic delayed systems with delayed impulses. Some numerical simulations are given to validate the effectiveness of the proposed criteria of impulsive stabilization for uncertain complex-variable chaotic delayed systems.
NASA Astrophysics Data System (ADS)
Viswanath, Satish; Bloch, B. Nicholas; Chappelow, Jonathan; Patel, Pratik; Rofsky, Neil; Lenkinski, Robert; Genega, Elizabeth; Madabhushi, Anant
2011-03-01
Currently, there is significant interest in developing methods for quantitative integration of multi-parametric (structural, functional) imaging data with the objective of building automated meta-classifiers to improve disease detection, diagnosis, and prognosis. Such techniques are required to address the differences in dimensionalities and scales of individual protocols, while deriving an integrated multi-parametric data representation which best captures all disease-pertinent information available. In this paper, we present a scheme called Enhanced Multi-Protocol Analysis via Intelligent Supervised Embedding (EMPrAvISE); a powerful, generalizable framework applicable to a variety of domains for multi-parametric data representation and fusion. Our scheme utilizes an ensemble of embeddings (via dimensionality reduction, DR); thereby exploiting the variance amongst multiple uncorrelated embeddings in a manner similar to ensemble classifier schemes (e.g. Bagging, Boosting). We apply this framework to the problem of prostate cancer (CaP) detection on twelve 3-Tesla pre-operative in vivo multi-parametric (T2-weighted, Dynamic Contrast Enhanced, and Diffusion-weighted) magnetic resonance imaging (MRI) studies, in turn comprising a total of 39 2D planar MR images. We first align the different imaging protocols via automated image registration, followed by quantification of image attributes from individual protocols. Multiple embeddings are generated from the resultant high-dimensional feature space, which are then combined intelligently to yield a single stable solution. Our scheme is employed in conjunction with graph embedding (for DR) and probabilistic boosting trees (PBTs) to detect CaP on multi-parametric MRI. Finally, a probabilistic pairwise Markov Random Field algorithm is used to apply spatial constraints to the result of the PBT classifier, yielding a per-voxel classification of CaP presence. Per-voxel evaluation of detection results against ground truth for CaP extent on MRI (obtained by spatially registering pre-operative MRI with available whole-mount histological specimens) reveals that EMPrAvISE yields a statistically significant improvement (AUC=0.77) over classifiers constructed from individual protocols (AUC=0.62, 0.62, 0.65, for T2w, DCE, DWI respectively) as well as one trained using multi-parametric feature concatenation (AUC=0.67).
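The sketch below conveys the ensemble-of-embeddings idea in a generic form (weak embeddings from random feature subsets, fused through their normalized pairwise-distance structure); it is not the authors' EMPrAvISE pipeline, and every function and constant is an assumption for illustration.

```python
# Generic ensemble-of-embeddings sketch: fuse several weak PCA embeddings
# via their normalized pairwise distances, then recover a single stable
# embedding with classical multidimensional scaling.
import numpy as np

def ensemble_embedding(X, n_embeddings=10, n_dims=3, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.zeros((n, n))
    for _ in range(n_embeddings):
        cols = rng.choice(X.shape[1], size=max(2, X.shape[1] // 2), replace=False)
        Xs = X[:, cols] - X[:, cols].mean(axis=0)
        _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
        Y = Xs @ Vt[:n_dims].T                      # one weak embedding
        d = np.linalg.norm(Y[:, None] - Y[None], axis=-1)
        D += d / d.max()                            # normalize and accumulate
    # Classical MDS on the fused distance matrix gives the final embedding.
    J = np.eye(n) - 1.0 / n
    B = -0.5 * J @ (D / n_embeddings) ** 2 @ J
    w, V = np.linalg.eigh(B)
    return V[:, -n_dims:] * np.sqrt(np.maximum(w[-n_dims:], 0))
```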
A Backward-Lagrangian-Stochastic Footprint Model for the Urban Environment
NASA Astrophysics Data System (ADS)
Wang, Chenghao; Wang, Zhi-Hua; Yang, Jiachuan; Li, Qi
2018-02-01
Built terrains, with their complexity in morphology, high heterogeneity, and anthropogenic impact, impose substantial challenges in Earth-system modelling. In particular, estimation of the source areas and footprints of atmospheric measurements in cities requires realistic representation of the landscape characteristics and flow physics in urban areas, but has hitherto been heavily reliant on large-eddy simulations. In this study, we developed physical parametrization schemes for estimating urban footprints based on the backward-Lagrangian-stochastic algorithm, with the built environment represented by street canyons. The vertical profile of mean streamwise velocity is parametrized for the urban canopy and boundary layer. Flux footprints estimated by the proposed model show reasonable agreement with analytical predictions over flat surfaces without roughness elements, and with experimental observations over sparse plant canopies. Furthermore, comparisons of canyon flow and turbulence profiles and the subsequent footprints were made between the proposed model and large-eddy simulation data. The results suggest that the parametrized canyon wind and turbulence statistics, based on the simple similarity theory used, need to be further improved to yield more realistic urban footprint modelling.
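A drastically simplified one-dimensional backward-Lagrangian-stochastic sketch, assuming a bare logarithmic wind profile and constant vertical-velocity variance instead of the paper's urban-canopy parametrizations; it shows only the mechanics of releasing particles at the sensor and accumulating touchdown positions into a footprint. All constants are invented.

```python
# Release particles at the sensor height and step them backward in time with
# a Langevin model; the histogram of touchdown positions approximates the
# (crosswind-integrated) flux footprint.
import numpy as np

rng = np.random.default_rng(0)
zm, u_star, kappa, z0 = 10.0, 0.4, 0.4, 0.5   # sensor height, u*, von Karman, roughness
n, dt, T_L = 5000, 0.05, 5.0                  # particles, time step, Lagrangian time
sigma_w = 1.3 * u_star                         # assumed constant vertical velocity std

x = np.zeros(n); z = np.full(n, zm); w = rng.normal(0, sigma_w, n)
touchdown = []
for _ in range(4000):
    alive = z > z0
    if not alive.any():
        break
    u = (u_star / kappa) * np.log(np.maximum(z, z0) / z0)   # log wind profile
    w += (-w / T_L) * dt + np.sqrt(2 * sigma_w**2 / T_L * dt) * rng.normal(size=n)
    x -= u * dt * alive        # backward in time: displace particles upwind
    z += w * dt * alive
    hit = alive & (z <= z0)    # record first touchdown of each particle
    touchdown.extend(x[hit])
# np.histogram(touchdown) then approximates the footprint weight function.
```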
Pluripotency gene network dynamics: System views from parametric analysis.
Akberdin, Ilya R; Omelyanchuk, Nadezda A; Fadeev, Stanislav I; Leskova, Natalya E; Oschepkova, Evgeniya A; Kazantsev, Fedor V; Matushkin, Yury G; Afonnikov, Dmitry A; Kolchanov, Nikolay A
2018-01-01
Multiple experimental data demonstrated that the core gene network orchestrating self-renewal and differentiation of mouse embryonic stem cells involves activity of Oct4, Sox2 and Nanog genes by means of a number of positive feedback loops among them. However, recent studies indicated that the architecture of the core gene network should also incorporate negative Nanog autoregulation and might not include positive feedbacks from Nanog to Oct4 and Sox2. Thorough parametric analysis of the mathematical model based on this revisited core regulatory circuit identified that there are substantial changes in model dynamics occurred depending on the strength of Oct4 and Sox2 activation and molecular complexity of Nanog autorepression. The analysis showed the existence of four dynamical domains with different numbers of stable and unstable steady states. We hypothesize that these domains can constitute the checkpoints in a developmental progression from naïve to primed pluripotency and vice versa. During this transition, parametric conditions exist, which generate an oscillatory behavior of the system explaining heterogeneity in expression of pluripotent and differentiation factors in serum ESC cultures. Eventually, simulations showed that addition of positive feedbacks from Nanog to Oct4 and Sox2 leads mainly to increase of the parametric space for the naïve ESC state, in which pluripotency factors are strongly expressed while differentiation ones are repressed.
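A toy ODE rendering of the revisited circuit, assuming simple Hill-type kinetics: Oct4 and Sox2 are activated by their joint complex, while Nanog is activated by the complex but represses itself; the functional forms and parameters are illustrative, not the paper's calibrated model.

```python
# Toy dynamical sketch of the revisited Oct4-Sox2-Nanog circuit with
# negative Nanog autoregulation; all rates and Hill constants are invented.
import numpy as np
from scipy.integrate import solve_ivp

def core_circuit(t, y, a=1.0, b=1.0, K=0.5, n=2):
    oct4, sox2, nanog = y
    os = oct4 * sox2                               # Oct4-Sox2 complex proxy
    act = a * os**n / (K**n + os**n)               # activation by the complex
    d_oct4 = act - oct4                            # production minus decay
    d_sox2 = act - sox2
    d_nanog = b * act / (1 + (nanog / K) ** n) - nanog   # negative autoregulation
    return [d_oct4, d_sox2, d_nanog]

sol = solve_ivp(core_circuit, (0.0, 50.0), [0.6, 0.6, 0.2], dense_output=True)
# Scanning a and b over a grid is one way to probe the dynamical domains
# (numbers of stable/unstable steady states) the abstract describes.
```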
Villanueva, Pia; Newbury, Dianne F; Jara, Lilian; De Barbieri, Zulema; Mirza, Ghazala; Palomino, Hernán M; Fernández, María Angélica; Cazier, Jean-Baptiste; Monaco, Anthony P; Palomino, Hernán
2011-01-01
Specific language impairment (SLI) is an unexpected deficit in the acquisition of language skills and affects between 5 and 8% of pre-school children. Despite its prevalence and high heritability, our understanding of the aetiology of this disorder is only emerging. In this paper, we apply genome-wide techniques to investigate an isolated Chilean population who exhibit an increased frequency of SLI. Loss of heterozygosity (LOH) mapping and parametric and non-parametric linkage analyses indicate that complex genetic factors are likely to underlie susceptibility to SLI in this population. Across all analyses performed, the most consistently implicated locus was on chromosome 7q. This locus achieved highly significant linkage under all three non-parametric models (max NPL=6.73, P=4.0 × 10−11). In addition, it yielded a HLOD of 1.24 in the recessive parametric linkage analyses and contained a segment that was homozygous in two affected individuals. Further, investigation of this region identified a two-SNP haplotype that occurs at an increased frequency in language-impaired individuals (P=0.008). We hypothesise that the linkage regions identified here, in particular that on chromosome 7, may contain variants that underlie the high prevalence of SLI observed in this isolated population and may be of relevance to other populations affected by language impairments. PMID:21248734
Evolution of complexity following a quantum quench in free field theory
NASA Astrophysics Data System (ADS)
Alves, Daniel W. F.; Camilo, Giancarlo
2018-06-01
Using a recent proposal of circuit complexity in quantum field theories introduced by Jefferson and Myers, we compute the time evolution of the complexity following a smooth mass quench characterized by a time scale δt in a free scalar field theory. We show that the dynamics has two distinct phases, namely an early regime of approximately linear evolution followed by a saturation phase characterized by oscillations around a mean value. The behavior is similar to previous conjectures for the complexity growth in chaotic and holographic systems, although here we have found that the complexity may grow or decrease depending on whether the quench increases or decreases the mass, and also that the time scale for saturation of the complexity is of order δt (not parametrically larger).
A Simplified, General Approach to Simulating from Multivariate Copula Functions
Barry Goodwin
2012-01-01
Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses probability…
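For contrast with the copula-specific steps the abstract mentions, here is the standard Gaussian-copula baseline: draw correlated normals, map them to uniforms through the normal CDF, and push the uniforms through arbitrary inverse marginals. The correlation matrix and marginals are assumed for illustration; this is not the paper's proposed approach.

```python
# Standard Gaussian-copula simulation baseline.
import numpy as np
from scipy.stats import norm, gamma, lognorm

rho = np.array([[1.0, 0.6],
                [0.6, 1.0]])                       # assumed dependence structure
L = np.linalg.cholesky(rho)
z = np.random.default_rng(0).standard_normal((10_000, 2)) @ L.T
u = norm.cdf(z)                                    # uniforms carrying the dependence
x1 = gamma.ppf(u[:, 0], a=2.0)                     # arbitrary marginal 1
x2 = lognorm.ppf(u[:, 1], s=0.5)                   # arbitrary marginal 2
```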
An Empirical Study of Eight Nonparametric Tests in Hierarchical Regression.
ERIC Educational Resources Information Center
Harwell, Michael; Serlin, Ronald C.
When normality does not hold, nonparametric tests represent an important data-analytic alternative to parametric tests. However, the use of nonparametric tests in educational research has been limited by the absence of easily performed tests for complex experimental designs and analyses, such as factorial designs and multiple regression analyses,…
NASA Astrophysics Data System (ADS)
Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim
2018-01-01
The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
Multi-Node Thermal System Model for Lithium-Ion Battery Packs: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Ying; Smith, Kandler; Wood, Eric
Temperature is one of the main factors that controls degradation in lithium-ion batteries. Accurate knowledge and control of cell temperatures in a pack helps the battery management system (BMS) maximize cell utilization and ensure pack safety and service life. In a pack with arrays of cells, a cell's temperature is affected not only by its own thermal characteristics but also by its neighbors, the cooling system, and the pack configuration, which increase the noise level and the complexity of cell-temperature prediction. This work proposes to model lithium-ion pack thermal behavior using a multi-node thermal network model, which predicts cell temperatures by zones. The model was parametrized and validated using commercial lithium-ion battery packs.
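A generic lumped-parameter sketch of a multi-node thermal network, assuming three zones with invented capacities, conductances, and coolant coupling; it illustrates the model class, not the validated parametrization from the preprint.

```python
# Lumped RC thermal network: each zone temperature evolves from its own heat
# generation, conduction to neighbouring zones, and convection to the coolant.
import numpy as np
from scipy.integrate import solve_ivp

C = np.array([50.0, 50.0, 50.0])                          # zone heat capacities [J/K]
G = np.array([[0.0, 0.8, 0.0],                            # zone-to-zone conductances [W/K]
              [0.8, 0.0, 0.8],
              [0.0, 0.8, 0.0]])
h = np.array([0.5, 0.3, 0.5])                             # convection to coolant [W/K]
T_cool, Q = 25.0, np.array([1.0, 1.5, 1.0])               # coolant temp [C], heat gen [W]

def node_temps(t, T):
    cond = G @ T - G.sum(axis=1) * T    # net conduction into each zone
    conv = h * (T_cool - T)             # convective exchange with the coolant
    return (Q + cond + conv) / C

sol = solve_ivp(node_temps, (0.0, 3600.0), np.full(3, 25.0))  # one hour of operation
```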
Nanoscopic Dynamic Mechanical Properties of Intertubular and Peritubular Dentin
Ryou, Heon; Romberg, Elaine; Pashley, David H.; Tay, Franklin R.; Arola, Dwayne
2011-01-01
An experimental evaluation of intertubular and peritubular dentin was performed using nanoindentation and Dynamic Mechanical Analysis (DMA). The objective of the investigation was to evaluate the differences in dynamic mechanical behavior of these two constituents and to assess if their response is frequency dependent. Specimens of hydrated coronal dentin were evaluated by DMA using single indents over a range in parametric conditions and using scanning probe microscopy. The complex (E*), storage (E’) and loss moduli (E”) of the intertubular and peritubular dentin were evaluated as a function of the dynamic loading frequency and static load in the fully hydrated condition. The mean complex E* (19.6 GPa) and storage E’ (19.2 GPa) moduli of the intertubular dentin were significantly lower than those quantities of peritubular dentin (E* = 31.1 GPa, p< 0.05; E’ = 30.3 GPa, p< 0.05). There was no significant influence of dynamic loading frequency on these measures. Though there was no significant difference in the loss modulus (E”) between the two materials (p> 0.05), both constituents exhibited a significant increase in E” with dynamic load frequency and reduction in the quasi-static component of indentation load. The largest difference in dynamic behavior of the two tissues was noted at small quasi-static indentation loads and the highest frequency. PMID:22340680
Classification of asterisms using astrometric data
NASA Astrophysics Data System (ADS)
de Biasi, M. S.; Orellana, R. B.
Based on accurate positions and proper motion data down to faint magnitudes, we have studied the regions of twenty-three objects known in the literature as asterisms. A parametric method was applied to confirm the nature of these objects. The following objects have been classified: Alessi 11, Alessi 17, Brosch 1, Collinder 21, Dol-Dzim 1, Dolidze 31, Dolidze 43, Dolidze 50, Dolidze 51, NGC 272, NGC 2063, NGC 2413, NGC 2664, NGC 5155, NGC 5284, NGC 6222, NGC 6360, NGC 6447, NGC 6476, NGC 6480, NGC 6605, NGC 6659, NGC 6728. FULL TEXT IN SPANISH
Parametrization of local CR automorphisms by finite jets and applications
NASA Astrophysics Data System (ADS)
Lamel, Bernhard; Mir, Nordine
2007-04-01
For any real-analytic hypersurface M ⊂ ℂ^N which does not contain any complex-analytic subvariety of positive dimension, we show that for every point p ∈ M the local real-analytic CR automorphisms of M fixing p can be parametrized real-analytically by their ℓ_p jets at p. As a direct application, we derive a Lie group structure for the topological group Aut(M,p). Furthermore, we also show that the order ℓ_p of the jet space in which the group Aut(M,p) embeds can be chosen to depend upper-semicontinuously on p. As a first consequence, it follows that given any compact real-analytic hypersurface M in ℂ^N, there exists an integer k depending only on M such that for every point p ∈ M, germs at p of CR diffeomorphisms mapping M into another real-analytic hypersurface in ℂ^N are uniquely determined by their k-jet at that point. Another consequence is the following boundary version of H. Cartan's uniqueness theorem: given any bounded domain Ω with smooth real-analytic boundary, there exists an integer k depending only on ∂Ω such that if H: Ω → Ω is a proper holomorphic mapping extending smoothly up to ∂Ω near some point p ∈ ∂Ω with the same k-jet at p as that of the identity mapping, then necessarily H = Id. Our parametrization theorem also holds for the stability group of any essentially finite minimal real-analytic CR manifold of arbitrary codimension. One of the new main tools developed in the paper, which may be of independent interest, is a parametrization theorem for invertible solutions of a certain kind of singular analytic equations, which roughly speaking consists of inverting certain families of parametrized maps with singularities.
Oks, E; Dalimier, E; Faenov, A Ya; Angelo, P; Pikuz, S A; Tubman, E; Butler, N M H; Dance, R J; Pikuz, T A; Skobelev, I Yu; Alkhimova, M A; Booth, N; Green, J; Gregory, C; Andreev, A; Zhidkov, A; Kodama, R; McKenna, P; Woolsey, N
2017-02-06
By analyzing the profiles of experimental x-ray spectral lines of Si XIV and Al XIII, we found that both Langmuir and ion acoustic waves developed in plasmas produced via irradiation of thin Si foils by relativistic laser pulses (intensities ~10²¹ W/cm²). We prove that these waves are due to the parametric decay instability (PDI). This is the first time that PDI-induced ion acoustic turbulence has been detected by x-ray spectroscopy in laser-produced plasmas. These conclusions are also supported by PIC simulations. Our results can be used for laboratory modeling of physical processes in astrophysical objects and for a better understanding of intense laser-plasma interactions.
Dlouhý, Martin
2018-01-01
The existence of geographic differences in health resources, health expenditures, the utilization of health services, and health outcomes has been documented by numerous studies from various countries of the world. In a publicly financed health system, equal access is one of the main objectives of national health policy. That is why inequalities in the geographic allocation of health resources are an important health policy issue. Measures of inequality express the complexity of variation in the observed variable by a single number, and a variety of inequality measures is available. The objective of this study is to develop a measure of geographic inequality for the case of multiple health resources. The measure uses data envelopment analysis (DEA), a non-parametric method of production function estimation, to transform multiple resources into a single virtual health resource. The study shows that DEA, originally developed for measuring efficiency, can be used successfully to measure inequality. For illustrative purposes, the inequality measure is calculated for the Czech Republic. The values of the separate Robin Hood Indexes (RHIs) are 6.64% for physicians and 3.96% for nurses. In the next step, we use a combined RHI for both health resources. Its value of 5.06% takes into account that combinations of the two health resources serve regional populations.
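As an illustration of the inequality measure discussed above, the following sketch computes a Robin Hood Index (RHI) from regional resource counts and populations; the RHI is the share of a resource that would have to be redistributed from over-served to under-served regions to equalize per-capita provision. The regional figures below are hypothetical, not the Czech data from the study.

```python
import numpy as np

def robin_hood_index(resource, population):
    """Robin Hood Index: fraction of the total resource that must be moved
    from over-served to under-served regions to equalize per-capita supply."""
    resource = np.asarray(resource, dtype=float)
    population = np.asarray(population, dtype=float)
    national_rate = resource.sum() / population.sum()
    expected = national_rate * population            # each region's share under perfect equality
    surplus = np.clip(resource - expected, 0, None)  # excess held by over-served regions
    return surplus.sum() / resource.sum()

# Hypothetical regional data (illustrative, not the Czech figures)
physicians = [1200, 800, 300, 450]
nurses = [2500, 1900, 700, 1100]
population = [1.2e6, 1.0e6, 0.4e6, 0.6e6]

print(f"RHI physicians: {robin_hood_index(physicians, population):.2%}")
print(f"RHI nurses:     {robin_hood_index(nurses, population):.2%}")
```

The DEA step of the study would first aggregate the two resources into a single virtual resource per region before applying the same index.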
Voxel- and Graph-Based Point Cloud Segmentation of 3D Scenes Using Perceptual Grouping Laws
NASA Astrophysics Data System (ADS)
Xu, Y.; Hoegner, L.; Tuttas, S.; Stilla, U.
2017-05-01
Segmentation is the fundamental step for recognizing and extracting objects from point clouds of 3D scenes. In this paper, we present a strategy for point cloud segmentation using a voxel structure and graph-based clustering with perceptual grouping laws, which allows a learning-free and completely automatic but parametric solution for segmenting 3D point clouds. More precisely, two segmentation methods utilizing voxel and supervoxel structures are reported and tested. The voxel-based data structure increases the efficiency and robustness of the segmentation process, suppressing the negative effects of noise, outliers, and uneven point densities. The clustering of voxels and supervoxels is carried out using graph theory on the basis of local contextual information, whereas conventional clustering algorithms commonly use merely pairwise information. By the use of perceptual laws, our method conducts the segmentation in a purely geometric way, avoiding the use of RGB color and intensity information, so that it can be applied to more general applications. Experiments using different datasets have demonstrated that our proposed methods achieve good results, especially for complex scenes and nonplanar object surfaces. Quantitative comparisons between our methods and other representative segmentation methods also confirm the effectiveness and efficiency of our proposals.
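A minimal sketch of the voxel-based stage described above, assuming a plain occupancy grid: points are binned into voxels, occupied voxels become graph nodes, 26-neighborhood adjacency defines edges (a similarity test on local normals or other cues, standing in for the perceptual grouping laws, could gate each edge), and connected components yield the segments. This illustrates the general idea only, not the authors' implementation.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def voxel_segment(points, voxel_size=0.1):
    # Bin points into voxels
    keys = np.floor(points / voxel_size).astype(int)
    voxels, inverse = np.unique(keys, axis=0, return_inverse=True)
    index = {tuple(v): i for i, v in enumerate(voxels)}

    # Build edges between 26-adjacent occupied voxels
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    rows, cols = [], []
    for i, v in enumerate(voxels):
        for off in offsets:
            j = index.get(tuple(v + np.array(off)))
            if j is not None:   # a normals/color similarity test could gate this edge
                rows.append(i); cols.append(j)

    adj = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(len(voxels),) * 2)
    n_seg, voxel_label = connected_components(adj, directed=False)
    return voxel_label[inverse]   # per-point segment labels

pts = np.vstack([np.random.rand(200, 3), np.random.rand(200, 3) + 5.0])
labels = voxel_segment(pts, voxel_size=0.5)
print("segments found:", len(set(labels)))   # expected: 2 well-separated clusters
```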
Comparison of Survival Models for Analyzing Prognostic Factors in Gastric Cancer Patients
Habibi, Danial; Rafiei, Mohammad; Chehrei, Ali; Shayan, Zahra; Tafaqodi, Soheil
2018-03-27
Objective: There are a number of models for determining risk factors for survival of patients with gastric cancer. This study was conducted to select the model showing the best fit to the available data. Methods: Cox regression and parametric models (Exponential, Weibull, Gompertz, Log normal, Log logistic and Generalized Gamma) were utilized in unadjusted and adjusted forms to detect factors influencing mortality of patients. Comparisons were made with the Akaike Information Criterion (AIC) using STATA 13 and R 3.1.3 software. Results: The results of this study indicated that all parametric models outperform the Cox regression model. The Log normal, Log logistic and Generalized Gamma models provided the best performance in terms of AIC values (179.2, 179.4 and 181.1, respectively). On unadjusted analysis, the results of the Cox regression and parametric models indicated stage, grade, largest diameter of metastatic nest, largest diameter of LM, number of involved lymph nodes and the largest ratio of metastatic nests to lymph nodes to be variables influencing the survival of patients with gastric cancer. On adjusted analysis, according to the best model (log normal), grade was found to be the significant variable. Conclusion: The results suggested that all parametric models outperform the Cox model. The log normal model provides the best fit and is a good substitute for Cox regression.
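As a sketch of the model-selection step, the snippet below fits several parametric survival models with the Python lifelines library and ranks them by AIC, mirroring the comparison reported above. The synthetic data, and the use of lifelines rather than STATA/R, are assumptions for illustration.

```python
import numpy as np
from lifelines import WeibullFitter, ExponentialFitter, LogNormalFitter, LogLogisticFitter

rng = np.random.default_rng(0)
durations = rng.lognormal(mean=3.0, sigma=0.6, size=200)   # synthetic survival times (months)
events = rng.random(200) < 0.7                             # ~30% right-censored

fitters = {
    "Exponential": ExponentialFitter(),
    "Weibull": WeibullFitter(),
    "Log normal": LogNormalFitter(),
    "Log logistic": LogLogisticFitter(),
}

# Lower AIC indicates better fit; the study found the log normal model best.
for name, f in fitters.items():
    f.fit(durations, event_observed=events)
    print(f"{name:13s} AIC = {f.AIC_:.1f}")
```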
Parametric Coding of the Size and Clutter of Natural Scenes in the Human Brain
Park, Soojin; Konkle, Talia; Oliva, Aude
2015-01-01
Estimating the size of a space and its degree of clutter are effortless and ubiquitous tasks of moving agents in a natural environment. Here, we examine how regions along the occipital–temporal lobe respond to pictures of indoor real-world scenes that parametrically vary in their physical “size” (the spatial extent of a space bounded by walls) and functional “clutter” (the organization and quantity of objects that fill up the space). Using a linear regression model on multivoxel pattern activity across regions of interest, we find evidence that both properties of size and clutter are represented in the patterns of parahippocampal cortex, while the retrosplenial cortex activity patterns are predominantly sensitive to the size of a space, rather than the degree of clutter. Parametric whole-brain analyses confirmed these results. Importantly, this size and clutter information was represented in a way that generalized across different semantic categories. These data provide support for a property-based representation of spaces, distributed across multiple scene-selective regions of the cerebral cortex. PMID:24436318
Estimation of option-implied risk-neutral into real-world density by using calibration function
NASA Astrophysics Data System (ADS)
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-04-01
Option prices contain crucial information that can be used as a reflection of the future development of an underlying asset's price. The main objective of this study is to extract the risk-neutral density (RND) and the real-world density (RWD) from option prices. A volatility function technique using a fourth-order polynomial interpolation is applied to obtain the RNDs. Then, a calibration function is used to convert the RNDs into RWDs. There are two types of calibration function, parametric and non-parametric. The densities are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity from January 2009 until December 2015. The performance of the extracted RNDs and RWDs is evaluated using a density forecasting test. This study found that the RWDs obtained provide more accurate information regarding the future price of the underlying asset than the RNDs. In addition, empirical evidence suggests that RWDs from a non-parametric calibration have better accuracy than the other densities.
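The RND-extraction step can be sketched via the Breeden-Litzenberger relation: fit a fourth-order polynomial to the implied-volatility smile, price calls on a dense strike grid, and differentiate twice with respect to strike. The Black-Scholes pricer, the synthetic smile and the flat 1% rate below are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    # Black-Scholes call price (used here only to rebuild prices from the vol function)
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S0, T, r = 100.0, 1 / 12, 0.01                      # spot, one-month maturity, flat rate
K_obs = np.linspace(80, 120, 9)                     # observed strikes
iv_obs = 0.20 + 0.002 * (K_obs - S0) ** 2 / 100     # synthetic smile (illustrative)

coeffs = np.polyfit(K_obs, iv_obs, 4)               # fourth-order polynomial vol function
K = np.linspace(80, 120, 801)
calls = bs_call(S0, K, T, r, np.polyval(coeffs, K))

# Breeden-Litzenberger: RND(K) = exp(rT) * d^2 C / dK^2
dK = K[1] - K[0]
rnd = np.exp(r * T) * np.gradient(np.gradient(calls, dK), dK)
print("RND mass on the strike grid (~1 up to truncated tails):",
      round(float((rnd * dK).sum()), 3))
```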
Advanced transportation system studies. Alternate propulsion subsystem concepts: Propulsion database
NASA Technical Reports Server (NTRS)
Levack, Daniel
1993-01-01
The Advanced Transportation System Studies alternate propulsion subsystem concepts propulsion database interim report is presented. The objective of the database development task is to produce a propulsion database which is easy to use and modify while also being comprehensive in the level of detail available. The database is to be available on the Macintosh computer system. The task is to extend across all three years of the contract. Consequently, a significant fraction of the effort in this first year of the task was devoted to the development of the database structure to ensure a robust base for the following years' efforts. Nonetheless, significant point design propulsion system descriptions and parametric models were also produced. Each of the two propulsion databases, parametric propulsion database and propulsion system database, are described. The descriptions include a user's guide to each code, write-ups for models used, and sample output. The parametric database has models for LOX/H2 and LOX/RP liquid engines, solid rocket boosters using three different propellants, a hybrid rocket booster, and a NERVA derived nuclear thermal rocket engine.
Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions
NASA Astrophysics Data System (ADS)
Chen, N.; Majda, A.
2017-12-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace, obtained via an extremely efficient parametric method, is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. It is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Unlike traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF, so a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only on the order of 100 ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
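A one-dimensional schematic of the parametric half of this hybrid strategy: the full PDF is assembled as an equal-weight mixture of conditional Gaussians contributed by a small ensemble. The conditional means and variances below are random stand-ins for the closed-form statistics that the data assimilation framework would supply.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Stand-ins for the analytically available conditional Gaussian statistics:
# each ensemble member supplies a conditional mean and variance.
N = 100                                    # small ensemble, the method's selling point
cond_means = rng.standard_t(df=3, size=N)  # heavy-tailed spread of means (illustrative)
cond_vars = 0.5 + 0.2 * rng.random(N)

u = np.linspace(-10, 10, 1001)
# Full PDF as an equal-weight conditional Gaussian mixture
pdf = np.mean([norm.pdf(u, m, np.sqrt(v)) for m, v in zip(cond_means, cond_vars)], axis=0)
print("mixture PDF integrates to ~1:", round(float(pdf.sum() * (u[1] - u[0])), 3))
```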
NASA Astrophysics Data System (ADS)
Enescu (Balaş), M. L.; Alexandru, C.
2016-08-01
The paper deals with the optimal design of the control system for a 6-DOF robot used in thin-layer deposition. The optimization is based on a parametric technique, modelling the design objective as a numerical function and then establishing the optimal values of the design variables so as to minimize the objective function. The robotic system is a mechatronic product, which integrates the mechanical device and the controlled operating device. The mechanical device of the robot was designed in the CAD (Computer Aided Design) software CATIA, the 3D model being then transferred to the MBS (Multi-Body Systems) environment ADAMS/View. The control system was developed in the concurrent engineering concept, through integration with the MBS mechanical model, by using the DFC (Design for Control) software solution EASY5. The necessary angular motions in the six joints of the robot, in order to obtain the imposed trajectory of the end-effector, were established by performing inverse kinematic analysis. The positioning error in each joint of the robot is used as the design objective, the optimization goal being to minimize its root mean square during simulation, which is a measure of the magnitude of the varying positioning error.
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even for simple (non)linear systems. Our algorithm provides a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
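The local analysis that the paper argues is insufficient, yet which is the usual starting point, can be sketched as follows: build the Fisher information matrix (FIM) from the sensitivity Jacobian of the model outputs with respect to the parameters and inspect its eigenvalue spectrum; near-zero eigenvalues flag sloppy or unidentifiable parameter combinations. The toy exponential model below, in which only the product of the two rates matters, is an assumption for illustration.

```python
import numpy as np

def model(theta, t):
    # Only the product k1*k2 affects the output: a structural unidentifiability.
    k1, k2 = theta
    return np.exp(-k1 * k2 * t)

def fisher_information(theta, t, eps=1e-6):
    # Finite-difference sensitivity Jacobian J[i, j] = d y(t_i) / d theta_j
    J = np.column_stack([
        (model(theta + eps * e, t) - model(theta - eps * e, t)) / (2 * eps)
        for e in np.eye(len(theta))
    ])
    return J.T @ J   # FIM for unit-variance Gaussian measurement noise

t = np.linspace(0, 5, 50)
F = fisher_information(np.array([2.0, 0.5]), t)
eigvals, eigvecs = np.linalg.eigh(F)
print("FIM eigenvalues:", eigvals)      # one eigenvalue ~0: the unidentifiable direction
print("sloppy direction:", eigvecs[:, 0])
```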
May-Concha, Irving; Guerenstein, Pablo G; Ramsey, Janine M; Rojas, Julio C; Catalá, Silvia
2016-06-01
Triatoma dimidiata (Latreille) is a species complex that spans North, Central, and South America and is a key vector of all known discrete typing units (DTUs) of Trypanosoma cruzi, the etiologic agent of Chagas disease. Morphological and genetic studies indicate that T. dimidiata is a species complex with three principal haplogroups (hg) in Mexico. Different markers and traits are still inconclusive regarding whether other morphological differentiation may indicate probable behavioral and vectorial divergences within this complex. In this paper we compared the antennae of the three Mexican haplogroups (previously verified by the molecular markers ND4 and ITS-2) and discuss possible relationships with their capacity to disperse and colonize new habitats. The abundance of each type of sensillum (bristles, basiconics, thick- and thin-walled trichoids) on the antennae of the three haplogroups was measured under light microscopy and compared using Kruskal-Wallis non-parametric and multivariate non-parametric analyses. Discriminant analyses indicate significant differences among the antennal phenotypes of the haplogroups for both adults and some nymphal stages, indicating the consistency of the character for analyzing intraspecific variability within the complex. The present study shows that the adult antennal pedicel of the T. dimidiata complex has abundant chemosensory sensilla, consistent with a good capacity for dispersal and invasion of different habitats, also related to a high capacity to adapt to conserved as well as modified habitats. However, the numerical differences among the haplogroups suggest variations in that capacity. The results presented here support the evidence of T. dimidiata as a species complex, but with differences between females and males. Given the close link between the bug's sensory system and its habitat- and host-seeking behavior, AP characterization could be useful to complement genetic, neurological and ethological studies of the closely related Dimidiata Complex haplogroups for better knowledge of their vectorial capacity and more robust species differentiation.
Zou, Kelly H; Resnic, Frederic S; Talos, Ion-Florin; Goldberg-Zimring, Daniel; Bhagwat, Jui G; Haker, Steven J; Kikinis, Ron; Jolesz, Ferenc A; Ohno-Machado, Lucila
2005-10-01
Medical classification accuracy studies often yield continuous data based on predictive models for treatment outcomes. A popular method for evaluating the performance of diagnostic tests is the receiver operating characteristic (ROC) curve analysis. The main objective was to develop a global statistical hypothesis test for assessing the goodness-of-fit (GOF) for parametric ROC curves via the bootstrap. A simple log (or logit) and a more flexible Box-Cox normality transformations were applied to untransformed or transformed data from two clinical studies to predict complications following percutaneous coronary interventions (PCIs) and for image-guided neurosurgical resection results predicted by tumor volume, respectively. We compared a non-parametric with a parametric binormal estimate of the underlying ROC curve. To construct such a GOF test, we used the non-parametric and parametric areas under the curve (AUCs) as the metrics, with a resulting p value reported. In the interventional cardiology example, logit and Box-Cox transformations of the predictive probabilities led to satisfactory AUCs (AUC=0.888; p=0.78, and AUC=0.888; p=0.73, respectively), while in the brain tumor resection example, log and Box-Cox transformations of the tumor size also led to satisfactory AUCs (AUC=0.898; p=0.61, and AUC=0.899; p=0.42, respectively). In contrast, significant departures from GOF were observed without applying any transformation prior to assuming a binormal model (AUC=0.766; p=0.004, and AUC=0.831; p=0.03), respectively. In both studies the p values suggested that transformations were important to consider before applying any binormal model to estimate the AUC. Our analyses also demonstrated and confirmed the predictive values of different classifiers for determining the interventional complications following PCIs and resection outcomes in image-guided neurosurgery.
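A schematic of such a bootstrap GOF test: compare the non-parametric (Mann-Whitney) AUC with the AUC implied by a binormal fit after a normality transformation, then bootstrap the discrepancy to obtain a p value. The synthetic data, the logit transform and the simple centered-percentile p value below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(2)

def nonparametric_auc(x0, x1):
    # Mann-Whitney estimate of P(X1 > X0)
    ranks = rankdata(np.concatenate([x0, x1]))
    return (ranks[len(x0):].sum() - len(x1) * (len(x1) + 1) / 2) / (len(x0) * len(x1))

def binormal_auc(x0, x1):
    # AUC implied by fitting a normal distribution to each class
    return norm.cdf((x1.mean() - x0.mean()) / np.hypot(x0.std(), x1.std()))

# Synthetic predictive probabilities, logit-transformed before the binormal fit
p0, p1 = rng.beta(2, 5, 300), rng.beta(5, 2, 200)
x0, x1 = np.log(p0 / (1 - p0)), np.log(p1 / (1 - p1))

observed = nonparametric_auc(x0, x1) - binormal_auc(x0, x1)
boot = []
for _ in range(2000):
    b0, b1 = rng.choice(x0, len(x0)), rng.choice(x1, len(x1))
    boot.append(nonparametric_auc(b0, b1) - binormal_auc(b0, b1))
boot = np.array(boot) - np.mean(boot)      # center under the null of no discrepancy
p_value = np.mean(np.abs(boot) >= abs(observed))
print(f"AUC gap = {observed:.4f}, bootstrap GOF p = {p_value:.3f}")
```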
Revisiting dark energy models using differential ages of galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rani, Nisha; Mahajan, Shobhit; Mukherjee, Amitabha
In this work, we use a test based on the differential ages of galaxies for distinguishing dark energy models. As proposed by Jimenez and Loeb in [1], relative ages of galaxies can be used to put constraints on various cosmological parameters. In the same vein, we reconstruct H₀ dt/dz and its derivative (H₀ d²t/dz²) using a model-independent technique called non-parametric smoothing. Basically, dt/dz is the change in the age of the object as a function of redshift, which is directly linked with the Hubble parameter. Hence, for the reconstruction of this quantity, we use the most recent H(z) data. Further, we calculate H₀ dt/dz and its derivative for several models like Phantom, Einstein-de Sitter (EdS), ΛCDM, the Chevallier-Polarski-Linder (CPL) parametrization, the Jassal-Bagla-Padmanabhan (JBP) parametrization and the Feng-Shen-Li-Li (FSLL) parametrization. We check the consistency of these models with the results of the reconstruction obtained in a model-independent way from the data. It is observed that H₀ dt/dz as a tool is not able to distinguish between the ΛCDM, CPL, JBP and FSLL parametrizations but, as expected, the EdS and Phantom models show noticeable deviation from the reconstructed results. Further, the derivative of H₀ dt/dz for the various dark energy models is more sensitive at low redshift. It is found that the FSLL model is not consistent with the reconstructed results; however, the ΛCDM model is in concordance with the 3σ region of the reconstruction at redshift z ≥ 0.3.
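The model curves compared against the reconstruction follow from the identity dt/dz = -1/[(1+z)H(z)], so that H₀ dt/dz = -1/[(1+z)E(z)] with E(z) = H(z)/H₀. The sketch below evaluates this quantity for flat ΛCDM and EdS; Ωm = 0.3 is an assumed value.

```python
import numpy as np

def E_lcdm(z, omega_m=0.3):
    # Dimensionless Hubble rate E(z) = H(z)/H0 for flat LambdaCDM
    return np.sqrt(omega_m * (1 + z) ** 3 + (1 - omega_m))

def E_eds(z):
    # Einstein-de Sitter: matter only
    return (1 + z) ** 1.5

def H0_dt_dz(E, z):
    # dt/dz = -1 / [(1+z) H(z)]  =>  H0 * dt/dz = -1 / [(1+z) E(z)]
    return -1.0 / ((1 + z) * E(z))

z = np.linspace(0.0, 2.0, 201)
for name, E in [("LCDM", E_lcdm), ("EdS", E_eds)]:
    y = H0_dt_dz(E, z)
    dy_dz = np.gradient(y, z)   # numerical derivative H0 d^2 t / dz^2
    print(f"{name}: H0 dt/dz at z=0 -> {y[0]:.3f}, derivative at z=0 -> {dy_dz[0]:.3f}")
```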
CAD-Based Aerodynamic Design of Complex Configurations using a Cartesian Method
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.
2003-01-01
A modular framework for aerodynamic optimization of complex geometries is developed. By working directly with a parametric CAD system, complex-geometry models are modified and tessellated in an automatic fashion. The use of a component-based Cartesian method significantly reduces the demands on the CAD system, and also provides for robust and efficient flowfield analysis. The optimization is controlled using either a genetic or quasi-Newton algorithm. Parallel efficiency of the framework is maintained even when subject to limited CAD resources by dynamically re-allocating the processors of the flow solver. Overall, the resulting framework can explore designs incorporating large shape modifications and changes in topology.
The Parameter of Preposition Stranding: A View from Child English
ERIC Educational Resources Information Center
Sugisaki, Koji; Snyder, William
2006-01-01
In this squib we examine the time course of children's acquisition of English to evaluate the basic insights of Kayne's (1981; 1984) proposals on preposition stranding. Kayne argued that the availability of preposition stranding (P-stranding) in English is parametrically linked to the availability of double object datives and the prepositional…
2005-10-06
The objective of this study was to perform a parametric evaluation of the performance and interface characteristics of dense plasma focus (DPF) fusion power and propulsion technology, combined with advanced waverider-like airframe configurations utilizing air-breathing MHD.
Noncontact measurement of vibration using airborne ultrasound.
Mater, O B; Remenieras, J P; Bruneel, C; Roncin, A; Patat, F
1998-01-01
A noncontact ultrasonic method for measuring the surface normal vibration of objects was studied. The instrument consists of a pair of 420 kHz ultrasonic air transducers. One is used to emit ultrasound toward the moving surface, and the other receives the ultrasound reflected from the object under test. Two effects induce a phase modulation on the received signal. The first results from the variation of the round-trip time interval τ required for the wavefront to go from the emitter to the moving surface and back to the receiver. This is the Doppler effect, directly proportional to the surface displacement. The second results from the nonlinear parametric interactions of the ultrasonic beams (forward and backward) with the low-frequency sound field emitted in the air by the vibrating surface. This latter phenomenon, which is a volume effect, is proportional to the velocity of the vibrating surface and increases with the distance between the transducers and the surface under test. The relative contributions of the Doppler and parametric effects are evaluated, and both have to be taken into account for ultrasonic interferometry in air.
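A numerical sketch of the Doppler contribution described above, under assumed parameters: a surface vibrating as d(t) = d₀ sin(2π f_v t) phase-modulates the 420 kHz carrier by Δφ(t) = 4π d(t)/λ over the round trip, and the displacement can be read back by I/Q demodulation of the received phase.

```python
import numpy as np

c_air = 343.0                  # speed of sound in air (m/s)
f_c = 420e3                    # ultrasonic carrier (Hz)
f_v, d0 = 200.0, 1e-6          # vibration: 200 Hz, 1 micrometre amplitude (illustrative)
fs = 4 * f_c                   # sampling rate
t = np.arange(0, 0.02, 1 / fs)

lam = c_air / f_c
displacement = d0 * np.sin(2 * np.pi * f_v * t)
phase_mod = 4 * np.pi * displacement / lam          # round-trip Doppler phase shift
received = np.cos(2 * np.pi * f_c * t + phase_mod)  # phase-modulated echo

# I/Q demodulation to recover the phase, hence the displacement
i = received * np.cos(2 * np.pi * f_c * t)
q = -received * np.sin(2 * np.pi * f_c * t)
k = int(fs / f_c)                                   # crude low-pass over one carrier period
kernel = np.ones(k) / k
phase_est = np.arctan2(np.convolve(q, kernel, "same"), np.convolve(i, kernel, "same"))
d_est = phase_est * lam / (4 * np.pi)
print(f"peak displacement recovered: {d_est.max() * 1e6:.2f} um (true {d0 * 1e6:.2f} um)")
```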
Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives
Zhong, Junpei; Cangelosi, Angelo; Wermter, Stefan
2014-01-01
The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of the higher-level information from visual stimuli to the development of ventral/dorsal visual streams. This model employs neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model through a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., observing a set of trajectories of arm movements and its oriented object features, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent learning context. PMID:24550798
Thermofluid Analysis of Magnetocaloric Refrigeration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdelaziz, Omar; Gluesenkamp, Kyle R; Vineyard, Edward Allan
While there have been extensive studies on the thermofluid characteristics of different magnetocaloric refrigeration systems, a conclusive optimization study using non-dimensional parameters that can be applied to a generic system has not been reported yet. In this study, a numerical model has been developed for the optimization of an active magnetic refrigerator (AMR). This model is computationally efficient and robust, making it appropriate for running the thousands of simulations required for parametric study and optimization. The governing equations have been non-dimensionalized and numerically solved using the finite difference method. A parametric study on a wide range of non-dimensional numbers has been performed. While the goal of AMR systems is to improve competing performance parameters including COP, cooling capacity and temperature span, a new parameter called AMR performance index-1 has been introduced in order to perform multi-objective optimization and simultaneously exploit all of these parameters. The multi-objective optimization is carried out for a wide range of the non-dimensional parameters. The results of this study will provide general guidelines for designing high-performance AMR systems.
Marginal Shape Deep Learning: Applications to Pediatric Lung Field Segmentation
Mansoor, Awais; Cerrolaza, Juan J.; Perez, Geovanny; Biggs, Elijah; Nino, Gustavo; Linguraru, Marius George
2017-01-01
Representation learning through deep learning (DL) architecture has shown tremendous potential for identification, localization, and texture classification in various medical imaging modalities. However, DL applications to segmentation of objects especially to deformable objects are rather limited and mostly restricted to pixel classification. In this work, we propose marginal shape deep learning (MaShDL), a framework that extends the application of DL to deformable shape segmentation by using deep classifiers to estimate the shape parameters. MaShDL combines the strength of statistical shape models with the automated feature learning architecture of DL. Unlike the iterative shape parameters estimation approach of classical shape models that often leads to a local minima, the proposed framework is robust to local minima optimization and illumination changes. Furthermore, since the direct application of DL framework to a multi-parameter estimation problem results in a very high complexity, our framework provides an excellent run-time performance solution by independently learning shape parameter classifiers in marginal eigenspaces in the decreasing order of variation. We evaluated MaShDL for segmenting the lung field from 314 normal and abnormal pediatric chest radiographs and obtained a mean Dice similarity coefficient of 0.927 using only the four highest modes of variation (compared to 0.888 with classical ASM1 (p-value=0.01) using same configuration). To the best of our knowledge this is the first demonstration of using DL framework for parametrized shape learning for the delineation of deformable objects. PMID:28592911
Azorin-Lopez, Jorge; Fuster-Guillo, Andres; Saval-Calvo, Marcelo; Mora-Mora, Higinio; Garcia-Chamizo, Juan Manuel
2017-01-01
Visual information is a very well known input from different kinds of sensors. However, most perception problems are modeled and tackled individually. It is necessary to provide a general imaging model that allows us to parametrize different input systems as well as their problems and possible solutions. In this paper, we present an active vision model that considers the imaging system as a whole (including camera, lighting system and the object to be perceived) in order to propose solutions for automated visual systems that present perception problems. As a concrete case study, we instantiate the model in a real and still challenging application: automated visual inspection. It is one of the most widely used quality control systems for detecting defects on manufactured objects. However, it presents problems for specular products. We model these perception problems, taking into account the environmental conditions and camera parameters that allow a system to properly perceive the specific object characteristics needed to determine defects on surfaces. The validation of the model has been carried out using simulations, which provide an efficient way to perform a large set of tests (different environment conditions and camera parameters) as a preliminary step before experimentation in real manufacturing environments, which are more complex in terms of instrumentation and more expensive. Results prove the success of the model application, adjusting scale, viewpoint and lighting conditions to detect structural and color defects on specular surfaces. PMID:28640211
Numerical study on 3D composite morphing actuators
NASA Astrophysics Data System (ADS)
Oishi, Kazuma; Saito, Makoto; Anandan, Nishita; Kadooka, Kevin; Taya, Minoru
2015-04-01
There are a number of actuators using the deformation of electroactive polymers (EAPs), but fewer papers have focused on the performance of 3D morphing actuators based on an analytical approach, due mainly to their complexity. The present paper introduces a numerical analysis approach to the large-scale deformation and motion of a 3D half-dome-shaped actuator composed of a thin soft membrane (passive material) and EAP strip actuators (EAP active coupons with electrodes on both surfaces), where the locations of the active EAP strips are a key parameter. The Simulia/Abaqus Static and Implicit analysis codes, whose main feature is high-precision contact analysis capability among structures, are used, focusing on the whole process of the membrane touching and wrapping around the object. The unidirectional properties of the EAP coupon actuator are used as the input data set for the material properties in the simulation and for the verification of our numerical model, where the verification is made by comparison with the existing 2D solution. The numerical results demonstrate the whole deformation process of the membrane wrapping around not only smooth objects like a sphere or an egg, but also irregularly shaped objects. A parametric study reveals the proper placement of the EAP coupon actuators, with modification of the dome shape to induce the relevant large-scale deformation. The numerical simulation for the 3D soft actuators shown in this paper could be applied to a wider range of soft 3D morphing actuators.
Cortical Circuit for Binding Object Identity and Location During Multiple-Object Tracking
Nummenmaa, Lauri; Oksama, Lauri; Glerean, Erico; Hyönä, Jukka
2017-01-01
Sustained multifocal attention for moving targets requires binding object identities with their locations. The brain mechanisms of identity-location binding during attentive tracking have remained unresolved. In 2 functional magnetic resonance imaging experiments, we measured participants' hemodynamic activity during attentive tracking of multiple objects with equivalent (multiple-object tracking) versus distinct (multiple identity tracking, MIT) identities. Task load was manipulated parametrically. Both tasks activated large frontoparietal circuits. MIT led to significantly increased activity in frontoparietal and temporal systems subserving object recognition and working memory. These effects were replicated when eye movements were prohibited. MIT was associated with significantly increased functional connectivity between lateral temporal and frontal and parietal regions. We propose that coordinated activity of this network subserves identity-location binding during attentive tracking. PMID:27913430
Towards a Next-Generation Catalogue Cross-Match Service
NASA Astrophysics Data System (ADS)
Pineau, F.; Boch, T.; Derriere, S.; Arches Consortium
2015-09-01
We have developed several catalogue cross-match tools in the past. On one hand, the CDS XMatch service (Pineau et al. 2011) is able to perform basic but very efficient cross-matches, scalable to the largest catalogues on a single regular server. On the other hand, as part of the European project ARCHES, we have been developing a generic and flexible tool which performs potentially complex multi-catalogue cross-matches and which computes probabilities of association based on a novel statistical framework. Although the two approaches have been managed so far as different tracks, the need for next-generation cross-match services dealing with both efficiency and complexity is becoming pressing with forthcoming projects which will produce huge high-quality catalogues. We are addressing this challenge, which is both theoretical and technical. In ARCHES we generalize to N catalogues the candidate selection criteria, based on the chi-square distribution, described in Pineau et al. (2011). We formulate and test a number of Bayesian hypotheses, which necessarily increases dramatically with the number of catalogues. To assign a probability to each hypothesis, we rely on estimated priors which account for local densities of sources. We validated our developments by comparing the theoretical curves we derived with the results of Monte-Carlo simulations. The current prototype is able to take into account heterogeneous positional errors, object extension and proper motion. The technical complexity is managed by OO programming design patterns and SQL-like functionalities. Large tasks are split into smaller independent pieces for scalability. Performance is achieved by resorting to multi-threading, sequential reads and several tree data structures. In addition to kd-trees, we account for heterogeneous positional errors and object extension using M-trees. Proper motions are supported using a modified M-tree we developed, inspired by Time Parametrized R-trees (TPR-trees). Quantitative tests in comparison with the basic cross-match will be presented.
Statistical Analysis of Complexity Generators for Cost Estimation
NASA Technical Reports Server (NTRS)
Rowell, Ginger Holmes
1999-01-01
Predicting the cost of cutting-edge new technologies involved with spacecraft hardware can be quite complicated. A new feature of the NASA Air Force Cost Model (NAFCOM), called the Complexity Generator, is being developed to model the complexity factors that drive the cost of space hardware. This parametric approach is also designed to account for the differences in cost based on factors that are unique to each system and subsystem. The cost driver categories included in this model are weight, inheritance from previous missions, technical complexity, and management factors. This paper explains the Complexity Generator framework, the statistical methods used to select the best model within this framework, and the procedures used to find the region of predictability and the prediction intervals for the cost of a mission.
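A complexity-generator-style estimate can be sketched as a baseline weight-driven cost-estimating relationship scaled by multiplicative drivers for the categories named above. The exponent, multiplier values and subsystem numbers below are invented for illustration and are not NAFCOM values.

```python
# Hypothetical complexity-driver cost sketch (illustrative values, not NAFCOM's)
def subsystem_cost(weight_kg, inheritance, technical_complexity, management_factor,
                   a=0.15, b=0.7):
    """Baseline CER cost = a * weight^b, scaled by multiplicative complexity drivers.

    inheritance < 1.0 rewards heritage from previous missions;
    technical_complexity and management_factor > 1.0 penalize novelty/overhead.
    """
    baseline = a * weight_kg ** b   # cost in arbitrary units (e.g., $M)
    return baseline * inheritance * technical_complexity * management_factor

subsystems = {
    "structure":  dict(weight_kg=450, inheritance=0.8, technical_complexity=1.1, management_factor=1.0),
    "avionics":   dict(weight_kg=120, inheritance=1.0, technical_complexity=1.6, management_factor=1.2),
    "propulsion": dict(weight_kg=300, inheritance=0.9, technical_complexity=1.3, management_factor=1.1),
}

for name, p in subsystems.items():
    print(f"{name:10s} {subsystem_cost(**p):7.2f}")
print(f"{'total':10s} {sum(subsystem_cost(**p) for p in subsystems.values()):7.2f}")
```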
Nonreciprocal Gain in Non-Hermitian Time-Floquet Systems
NASA Astrophysics Data System (ADS)
Koutserimpas, Theodoros T.; Fleury, Romain
2018-02-01
We explore the unconventional wave scattering properties of non-Hermitian systems in which amplification or damping are induced by time-periodic modulation. These non-Hermitian time-Floquet systems are capable of nonreciprocal operations in the frequency domain, which can be exploited to induce novel physical phenomena such as unidirectional wave amplification and perfect nonreciprocal response with zero or even negative insertion losses. This unique behavior is obtained by imparting a specific low-frequency time-periodic modulation to the complex coupling between lossless resonators, promoting only upward frequency conversion, and leading to nonreciprocal parametric gain. We provide a full-wave demonstration of our findings in a one-way microwave amplifier, and establish the potential of non-Hermitian time-Floquet devices for insertion-loss free microwave isolation and unidirectional parametric amplification.
Parametric optimal control of uncertain systems under an optimistic value criterion
NASA Astrophysics Data System (ADS)
Li, Bo; Zhu, Yuanguo
2018-01-01
It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly such that the optimal feedback control may be a complex time-oriented function. In this article, a parametric optimal control problem of an uncertain linear quadratic model under an optimistic value criterion is considered for simplifying the expression of optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.
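For the classical, non-uncertain linear quadratic problem referenced above, the optimal state feedback u = -Kx follows from the algebraic Riccati equation, which SciPy solves directly in the infinite-horizon case. The double-integrator plant and weights below are arbitrary illustrations; the paper's uncertain, optimistic-value variant requires the authors' approximation method instead.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator plant (illustrative): x1' = x2, x2' = u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 0.1])   # state weighting
R = np.array([[0.5]])     # control weighting

# Infinite-horizon LQR: solve A'P + PA - P B R^{-1} B' P + Q = 0, then K = R^{-1} B' P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("feedback gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```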
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana
2018-01-01
A problem of mathematical modeling of complex stochastic processes in macroeconomics is discussed. For the description of the dynamics of income and capital stock, the well-known Kaldor model of business cycles is used as a basic example. The aim of the paper is to give an overview of the variety of stochastic phenomena which occur in the Kaldor model forced by additive and parametric random noise. We study the generation of small- and large-amplitude stochastic oscillations, and their mixed-mode intermittency. To analyze these phenomena, we suggest a constructive approach combining the study of the peculiarities of the deterministic phase portrait and the stochastic sensitivity of attractors. We show how parametric noise can stabilize the unstable equilibrium and transform the dynamics of the Kaldor system from order to chaos.
Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan
2014-11-01
This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.
Maity, Arnab; Carroll, Raymond J; Mammen, Enno; Chatterjee, Nilanjan
2009-01-01
Motivated by the problem of testing for genetic effects on complex traits in the presence of gene-environment interaction, we develop score tests in general semiparametric regression problems that involve a Tukey-style 1-degree-of-freedom form of interaction between parametrically and non-parametrically modelled covariates. We find that the score test in this type of model, as recently developed by Chatterjee and co-workers in the fully parametric setting, is biased and requires undersmoothing to be valid in the presence of non-parametric components. Moreover, in the presence of repeated outcomes, the asymptotic distribution of the score test depends on the estimation of functions which are defined as solutions of integral equations, making implementation difficult and computationally taxing. We develop profiled score statistics which are unbiased and asymptotically efficient and can be computed using standard bandwidth selection methods. In addition, to overcome the difficulty of solving functional equations, we give easy interpretations of the target functions, which in turn allow us to develop estimation procedures that can be easily implemented using standard computational methods. We present simulation studies to evaluate the type I error and power of the proposed method compared with a naive test that does not consider interaction. Finally, we illustrate our methodology by analysing data from a case-control study of colorectal adenoma that was designed to investigate the association between colorectal adenoma and the candidate gene NAT2 in relation to smoking history.
NASA Astrophysics Data System (ADS)
Sabater, A. B.; Rhoads, J. F.
2017-02-01
The parametric system identification of macroscale resonators operating in a nonlinear response regime can be a challenging research problem, but at the micro- and nanoscales, experimental constraints add additional complexities. For example, due to the small and noisy signals micro/nanoresonators produce, a lock-in amplifier is commonly used to characterize the amplitude and phase responses of the systems. While the lock-in enables detection, it also prohibits the use of established time-domain, multi-harmonic, and frequency-domain methods, which rely upon time-domain measurements. As such, the only methods that can be used for parametric system identification are those based on fitting experimental data to an approximate solution, typically derived via perturbation methods and/or Galerkin methods, of a reduced-order model. Thus, one could view the parametric system identification of micro/nanosystems operating in a nonlinear response regime as the amalgamation of four coupled sub-problems: nonparametric system identification, or proper experimental design and data acquisition; the generation of physically consistent reduced-order models; the calculation of accurate approximate responses; and the application of nonlinear least-squares parameter estimation. This work is focused on the theoretical foundations that underpin each of these sub-problems, as the methods used to address one sub-problem can strongly influence the results of another. To provide context, an electromagnetically transduced microresonator is used as an example. This example provides a concrete reference for the presented findings and conclusions.
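The final sub-problem, nonlinear least-squares estimation from lock-in data, can be sketched as fitting a resonance amplitude model to magnitude-versus-frequency measurements. A linear (Lorentzian) response and synthetic noisy data are assumed here; a nonlinear response regime would substitute an approximate solution obtained by perturbation methods.

```python
import numpy as np
from scipy.optimize import least_squares

def amplitude(f, f0, Q, a):
    # Magnitude response of a driven linear resonator (Lorentzian form)
    return a / np.sqrt((1 - (f / f0) ** 2) ** 2 + (f / (f0 * Q)) ** 2)

rng = np.random.default_rng(3)
f = np.linspace(9e3, 11e3, 200)           # sweep near an assumed 10 kHz mode
true = dict(f0=10e3, Q=150.0, a=1.0)
data = amplitude(f, **true) * (1 + 0.02 * rng.standard_normal(f.size))  # lock-in magnitudes

res = least_squares(
    lambda p: amplitude(f, *p) - data,    # residuals against the model
    x0=[9.8e3, 100.0, 0.8],               # rough initial guess
)
print("estimated f0, Q, a:", res.x)
```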
NASA Astrophysics Data System (ADS)
Mai, J.; Cuntz, M.; Zink, M.; Schaefer, D.; Thober, S.; Samaniego, L. E.; Shafii, M.; Tolson, B.
2015-12-01
Hydrologic models are traditionally calibrated against discharge. Recent studies have shown, however, that only a few global model parameters are constrained using integral discharge measurements. It is therefore advisable to use additional information to calibrate these models. Snow pack data, for example, could improve the parametrization of snow-related processes, which might be underrepresented when using only discharge. One common approach is to combine the multiple objectives into a single objective function, allowing the use of a single-objective algorithm. Another strategy is to consider the different objectives separately and apply a Pareto-optimizing algorithm. Both methods are challenging in the choice of appropriate multiple objectives with either conflicting interests or a focus on different model processes. The first aim of this study is to compare the two approaches, employing the mesoscale Hydrologic Model mHM at several distinct river basins over Europe and North America. This comparison allows the identification of the single-objective solution on the Pareto front. We examine whether this position is determined by the weighting and scaling of the multiple objectives when combining them into the single objective. The principal second aim is to guide the selection of proper objectives employing sensitivity analyses. These analyses are used to determine whether additional information would help to constrain additional model parameters. The additional information is either multiple data sources or multiple signatures of one measurement. We evaluate whether specific discharge signatures can inform different parts of the hydrologic model. The results show that an appropriate selection of discharge signatures increased the number of constrained parameters by more than 50% compared to using only the NSE of the discharge time series. It is further assessed whether the use of these signatures imposes conflicting objectives on the hydrologic model. The usage of signatures is furthermore contrasted with the use of additional observations such as soil moisture or snow height. The gain of using an auxiliary dataset is determined using the parametric sensitivity of the respective modeled variable.
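The single-objective strategy described above amounts to a weighted, scaled combination of per-variable skill scores, for example Nash-Sutcliffe efficiencies (NSE) for discharge and snow, as sketched below; the weights and synthetic series are illustrative assumptions.

```python
import numpy as np

def nse(sim, obs):
    # Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the observed mean
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def combined_objective(sim_q, obs_q, sim_swe, obs_swe, w_q=0.7, w_swe=0.3):
    """Scalarized multi-objective: weighted sum of (1 - NSE) terms to minimize.

    The weights fix a position on the Pareto front a priori, which is
    exactly the choice the comparison in the study investigates.
    """
    return w_q * (1 - nse(sim_q, obs_q)) + w_swe * (1 - nse(sim_swe, obs_swe))

rng = np.random.default_rng(4)
obs_q = 5 + np.abs(rng.standard_normal(365).cumsum())                 # synthetic discharge
obs_swe = np.clip(50 * np.sin(np.linspace(0, np.pi, 365)), 0, None)   # synthetic snow pack
sim_q = obs_q * (1 + 0.1 * rng.standard_normal(365))
sim_swe = obs_swe + 5 * rng.standard_normal(365)

print("combined objective:", round(combined_objective(sim_q, obs_q, sim_swe, obs_swe), 3))
```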
An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring
NASA Astrophysics Data System (ADS)
Chen, R.; Sun, Y. Y.; Lei, Y.
2017-12-01
With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have been successfully equipped with this mature technology, among which is environmental monitoring. One difficult task is how to acquire accurate positions of ground objects in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One impressive advantage of this method compared with traditional methods is that a 3-dimensional point in space is represented using three angles (elevation angle, azimuth angle and parallax angle) rather than XYZ values. As the basis for APCP, bundle adjustment can be used to optimize the UAS sensor poses accurately and reconstruct 3D models of the environment, thus serving as the criterion of accurate position for monitoring. To verify the effectiveness of the proposed method, we test on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves one to two times better efficiency than traditional ones with no loss of accuracy. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates suffers from serious dependence on the initial values, making it unable to converge fast or converge to a stable state. On the contrary, the APCP method can deal with quite complex UAS conditions when conducting monitoring, as it represents points in space with angles, including the condition that sequential images focusing on one object have zero parallax angle. In brief, this paper presents the parametrization of 3D feature points based on APCP, and derives a full bundle adjustment model and the corresponding nonlinear optimization problems based on this method. In addition, we analyze the influence on convergence and the dependence on initial values through mathematical formulas. Finally, this paper conducts experiments using real aviation data, and shows that the new model can effectively overcome bottlenecks of the classical method to a certain degree; that is, it provides a new idea and solution for faster and more efficient environmental monitoring.
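The angular parametrization can be sketched as follows: with a main anchor camera at t_m, an associated anchor at t_a, a unit ray u(azimuth, elevation) from the main anchor, and the parallax angle ω subtended at the feature by the two anchors, the sine rule in the triangle (t_m, t_a, X) yields the depth along u. This is a generic parallax-angle construction in the spirit of the paper, not necessarily the authors' exact formulation.

```python
import numpy as np

def point_from_parallax(t_main, t_assoc, azimuth, elevation, omega):
    """Recover a 3D point from (elevation, azimuth, parallax) and two anchors."""
    # Unit observation ray from the main anchor
    u = np.array([np.cos(elevation) * np.cos(azimuth),
                  np.cos(elevation) * np.sin(azimuth),
                  np.sin(elevation)])
    b = t_assoc - t_main                            # baseline between the two anchors
    alpha = np.arccos(u @ b / np.linalg.norm(b))    # angle between ray and baseline
    # Sine rule in triangle (t_main, t_assoc, X): angles are alpha, pi-alpha-omega, omega
    depth = np.linalg.norm(b) * np.sin(alpha + omega) / np.sin(omega)
    return t_main + depth * u

# Round-trip check with a known point (all values illustrative)
X_true = np.array([10.0, 4.0, 2.0])
t_m, t_a = np.zeros(3), np.array([1.0, 0.0, 0.0])
d_m = X_true - t_m
az, el = np.arctan2(d_m[1], d_m[0]), np.arcsin(d_m[2] / np.linalg.norm(d_m))
omega = np.arccos((d_m / np.linalg.norm(d_m)) @ ((X_true - t_a) / np.linalg.norm(X_true - t_a)))
print(point_from_parallax(t_m, t_a, az, el, omega))   # ~ [10. 4. 2.]
```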
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas
2017-12-01
Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
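The inversion-friendly property highlighted above, namely that draws from an uncorrelated standard normal map to plausible binary media, can be sketched with a minimal decoder. The architecture, latent size and threshold below are assumptions, and a real application would load decoder weights trained on the geostatistical training set.

```python
import torch
import torch.nn as nn

LATENT_DIM = 20   # low-dimensional base parameterization (assumed size)
GRID = (64, 64)   # output facies grid

# Minimal decoder: latent vector -> probability map over the grid (untrained sketch)
decoder = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, GRID[0] * GRID[1]),
    nn.Sigmoid(),   # per-cell probability of the high-permeability facies
)

# In probabilistic inversion, the sampler proposes z ~ N(0, I); the decoder
# turns each proposal into a geologically plausible binary model realization.
z = torch.randn(8, LATENT_DIM)            # eight prior draws
with torch.no_grad():
    prob_maps = decoder(z).reshape(-1, *GRID)
realizations = (prob_maps > 0.5).float()  # threshold to binary facies
print(realizations.shape, realizations.mean().item())
```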
New technologies for advanced three-dimensional optimum shape design in aeronautics
NASA Astrophysics Data System (ADS)
Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno
1999-05-01
The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. To obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes into a shape optimization loop: most previous work in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are gradient-based, the more complex the flow solver, the more difficult it is to produce the sensitivity code. However, emerging technologies are helping to make such an ambitious project, including a state-of-the-art flow analysis code in an optimization loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrization, automated differentiation and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.
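To illustrate the idea behind automated differentiation of an analysis code, here is a minimal forward-mode sketch using dual numbers. This is a didactic stand-in, not the source-transformation tools used in the paper: every operation propagates a derivative alongside its value, yielding exact sensitivities with no finite-difference step.

```python
import math

class Dual:
    """Minimal forward-mode AD: carries a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):  # differentiation rule for an intrinsic function
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [x*sin(x) + 3x] at x = 2, computed exactly alongside the value.
x = Dual(2.0, 1.0)   # seed the derivative of the design variable
y = x * sin(x) + 3 * x
print(y.val, y.dot)  # value and sensitivity
```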
GridTool: A surface modeling and grid generation tool
NASA Technical Reports Server (NTRS)
Samareh-Abolhassani, Jamshid
1995-01-01
GridTool is designed around the concept that surface grids are generated on a set of bi-linear patches. This type of grid generation is quite easy to implement, and it avoids the problems associated with complex CAD surface representations and the associated surface parameterizations. However, the resulting surface grids are close to, but not on, the original CAD surfaces. This problem can be alleviated by projecting the resulting surface grids onto the original CAD surfaces. GridTool is designed primarily for unstructured grid generation systems. Currently, GridTool supports the VGRID and FELISA systems, and it can be easily extended to support other unstructured grid generation systems. The data in GridTool are stored parametrically, so that once the problem is set up, one can modify the surfaces and the entire set of points, curves and patches will be updated automatically. This is very useful in a multidisciplinary design and optimization process. GridTool is written entirely in ANSI 'C', the interface is based on the FORMS library, and the graphics are based on the GL library. The code has been tested successfully on IRIS workstations running IRIX4.0 and above. Memory is allocated dynamically; therefore, memory size will depend on the complexity of the geometry/grid. The GridTool data structure is based on a linked list, which allows the required memory to expand and contract dynamically according to the user's data size and actions. The data structure contains several types of objects, such as points, curves, patches, sources and surfaces. At any given time there is always an active object, which is drawn in magenta or in its highlighted color as defined by the resource file, which will be discussed later.
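For concreteness, evaluating a point on one of the bi-linear patches the grids are generated on reduces to the standard bilinear blend of the four corner points; a minimal sketch follows (corner coordinates illustrative, projection onto the CAD surface omitted):

```python
import numpy as np

def bilinear_patch(p00, p10, p01, p11, u, v):
    """Point on a bi-linear patch at parametric coordinates (u, v) in [0,1]^2."""
    return ((1 - u) * (1 - v) * p00 + u * (1 - v) * p10
            + (1 - u) * v * p01 + u * v * p11)

# Surface grid: evaluate the patch on a structured (u, v) lattice.
corners = [np.array(c, float) for c in
           [(0, 0, 0), (1, 0, 0.2), (0, 1, 0.1), (1, 1, 0.4)]]
grid = [[bilinear_patch(*corners, u, v)
         for u in np.linspace(0, 1, 5)]
        for v in np.linspace(0, 1, 5)]
print(grid[2][3])  # one grid node; projection to the CAD surface would follow
```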
Overgaard, Martin; Pedersen, Susanne Møller
2017-10-26
Hyperprolactinemia diagnosis and treatment is often compromised by the presence of biologically inactive and clinically irrelevant higher-molecular-weight complexes of prolactin, macroprolactin. The objective of this study was to evaluate the performance of two macroprolactin screening regimes across commonly used automated immunoassay platforms. Parametric total and monomeric gender-specific reference intervals were determined for six immunoassay methods using female (n=96) and male sera (n=127) from healthy donors. The reference intervals were validated using 27 hyperprolactinemic and macroprolactinemic sera, whose presence of monomeric and macroforms of prolactin were determined using gel filtration chromatography (GFC). Normative data for six prolactin assays included the range of values (2.5th-97.5th percentiles). Validation sera (hyperprolactinemic and macroprolactinemic; n=27) showed higher discordant classification [mean=2.8; 95% confidence interval (CI) 1.2-4.4] for the monomer reference interval method compared to the post-polyethylene glycol (PEG) recovery cutoff method (mean=1.8; 95% CI 0.8-2.8). The two monomer/macroprolactin discrimination methods did not differ significantly (p=0.089). Among macroprolactinemic sera evaluated by both discrimination methods, the Cobas and Architect/Kryptor prolactin assays showed the lowest and the highest number of misclassifications, respectively. Current automated immunoassays for prolactin testing require macroprolactin screening methods based on PEG precipitation in order to discriminate truly from falsely elevated serum prolactin. While the recovery cutoff and monomeric reference interval macroprolactin screening methods demonstrate similar discriminative ability, the latter method also provides the clinician with an easy interpretable monomeric prolactin concentration along with a monomeric reference interval.
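The post-PEG recovery cutoff screen described above is arithmetically simple; a hedged sketch is below. The 40%/60% thresholds are commonly quoted decision limits assumed here for illustration, not values taken from this study.

```python
def peg_recovery(prolactin_pre, prolactin_post):
    """Percent monomeric prolactin recovered after PEG precipitation."""
    return 100.0 * prolactin_post / prolactin_pre

def classify(recovery, low=40.0, high=60.0):
    """Commonly used cutoffs (assumed here): <40% suggests macroprolactinemia,
    >60% suggests predominantly monomeric prolactin; in between is equivocal."""
    if recovery < low:
        return "macroprolactinemia suspected"
    if recovery > high:
        return "monomeric hyperprolactinemia"
    return "equivocal"

# Illustrative serum prolactin (mIU/L) before and after PEG precipitation.
print(classify(peg_recovery(1250.0, 310.0)))  # ~25% recovery -> macroprolactin
```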
Quantitative Imaging Biomarkers of NAFLD
Kinner, Sonja; Reeder, Scott B.
2016-01-01
Conventional imaging modalities, including ultrasonography (US), computed tomography (CT), and magnetic resonance (MR), play an important role in the diagnosis and management of patients with nonalcoholic fatty liver disease (NAFLD) by allowing noninvasive diagnosis of hepatic steatosis. However, conventional imaging modalities are limited as biomarkers of NAFLD for various reasons. Multi-parametric quantitative MRI techniques overcome many of the shortcomings of conventional imaging and allow comprehensive and objective evaluation of NAFLD. MRI can provide unconfounded biomarkers of hepatic fat, iron, and fibrosis in a single examination: a virtual biopsy has become a clinical reality. In this article, we review the utility and limitations of conventional US, CT, and MR imaging for the diagnosis of NAFLD. Recent advances in imaging biomarkers of NAFLD are also discussed, with an emphasis on multi-parametric quantitative MRI. PMID:26848588
NASA Astrophysics Data System (ADS)
Izmaylov, R.; Lebedev, A.
2015-08-01
Centrifugal compressors are complex pieces of energy equipment. Automatic control and protection systems should meet the requirements of operational reliability and durability. In turbocompressors there are at least two dangerous regimes: surge and rotating stall. Anti-surge protection systems usually use parametric or feature methods; as a rule, industrial systems are parametric. The main disadvantage of parametric anti-surge systems is the difficulty of mass flow measurement in natural gas pipeline compressors. The principal idea of the feature method is based on an experimental fact: as a rule, just before the onset of surge, rotating stall or a precursor stall is established in the compressor. In this case the problem consists in detecting the characteristic signals of unsteady pressure or velocity fluctuations. Wavelet analysis is the best method for detecting the onset of rotating stall in spite of a high level of spurious signals (rotating wakes, turbulence, etc.), and it is compatible with state-of-the-art DSP systems for industrial control. Examples of the application of wavelet analysis for detecting the onset of rotating stall in typical stages of a centrifugal compressor are presented. The experimental investigations include unsteady pressure measurements and a sophisticated data acquisition system. The wavelet transforms used biorthogonal wavelets in the Matlab system.
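A minimal sketch of this kind of wavelet screening, using PyWavelets with a biorthogonal wavelet as in the study; the synthetic pressure signal, sampling rate and energy read-out are illustrative assumptions, not the authors' data or code.

```python
import numpy as np
import pywt

fs = 5000.0                            # sampling rate, Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)
signal = np.random.default_rng(0).normal(0, 0.3, t.size)   # broadband "turbulence"
signal[t > 1.2] += np.sin(2 * np.pi * 90 * t[t > 1.2])     # stall-precursor band

# Multilevel decomposition with a biorthogonal wavelet.
coeffs = pywt.wavedec(signal, "bior3.5", level=5)

# Energy per detail band; a sudden rise in a low-frequency band flags
# the precursor / rotating-stall onset. coeffs[1] is the coarsest detail.
for i, d in enumerate(coeffs[1:]):
    print(f"detail band {5 - i} (coarse to fine): energy = {np.sum(d**2):.1f}")
```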
Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D., E-mail: sergei.ivanov@uni-rostock.de
2015-06-28
Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom, which are treated most accurately, and others, which constitute a thermal bath. Particular attention in this respect is attracted by the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel is in turn parametrized for the particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we argue that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.
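Whether in the time or frequency domain, such parametrizations start from correlation functions of the explicit MD data. A minimal sketch of computing the velocity autocorrelation function and its spectrum via the Wiener-Khinchin theorem (synthetic data standing in for MD output; this is the preprocessing step, not the authors' kernel-fitting method):

```python
import numpy as np

def autocorrelation_fft(v):
    """Velocity autocorrelation function via the Wiener-Khinchin theorem."""
    n = len(v)
    f = np.fft.rfft(v, 2 * n)          # zero-pad to avoid circular wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / np.arange(n, 0, -1)   # unbiased normalization per lag

# Synthetic "MD" velocity trace standing in for explicit simulation output.
rng = np.random.default_rng(1)
v = rng.normal(size=100_000)
c = autocorrelation_fft(v)
spectrum = np.fft.rfft(c)              # frequency-domain input to kernel fitting
print(c[0], abs(spectrum[:3]))
```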
Crosswind Shear Gradient Affect on Wake Vortices
NASA Technical Reports Server (NTRS)
Proctor, Fred H.; Ahmad, Nashat N.
2011-01-01
Parametric simulations with a Large Eddy Simulation (LES) model are used to explore the influence of crosswind shear on aircraft wake vortices. Previous studies based on field measurements and laboratory experiments, as well as LES, have shown that the vertical gradient of crosswind shear, i.e. the second vertical derivative of the environmental crosswind, can influence wake vortex transport. The presence of nonlinear vertical shear of the crosswind velocity can reduce the descent rate, causing a wake vortex pair to tilt and change in its lateral separation. The LES parametric studies confirm that the vertical gradient of crosswind shear does influence vortex trajectories. The parametric results also show that vortex decay from the effects of shear is complex, since the crosswind shear, along with the vertical gradient of crosswind shear, can affect whether the lateral separation between wake vortices is increased or decreased. If the separation is decreased, the vortex linking time is decreased, and a more rapid decay of wake vortex circulation occurs. If the separation is increased, the time to link is increased, and at least one of the vortices of the pair may have a longer lifetime than in the case without shear. In some cases, the wake vortices may never link.
NASA Astrophysics Data System (ADS)
Krupa, Katarzyna; Tonello, Alessandro; Barthélémy, Alain; Couderc, Vincent; Shalaby, Badr Mohamed; Bendahmane, Abdelkrim; Millot, Guy; Wabnitz, Stefan
2016-05-01
Spatiotemporal mode coupling in highly multimode physical systems permits new routes for exploring complex instabilities and forming coherent wave structures. We present here the first experimental demonstration of multiple geometric parametric instability sidebands, generated in the frequency domain through resonant space-time coupling, owing to the natural periodic spatial self-imaging of a multimode quasi-continuous-wave beam in a standard graded-index multimode fiber. The input beam was launched in the fiber by means of an amplified microchip laser emitting sub-ns pulses at 1064 nm. The experimentally observed frequency spacing among sidebands agrees well with analytical predictions and numerical simulations. The first-order peaks are located at the considerably large detuning of 123.5 THz from the pump. These results open the remarkable possibility to convert a near-infrared laser directly into a broad spectral range spanning visible and infrared wavelengths, by means of a single resonant parametric nonlinear effect occurring in the normal dispersion regime. As further evidence of our strong space-time coupling regime, we observed the striking effect that all of the different sideband peaks were carried by a well-defined and stable bell-shaped spatial profile.
Coherent scattering from semi-infinite non-Hermitian potentials
NASA Astrophysics Data System (ADS)
Ahmed, Zafar; Ghosh, Dona; Kumar, Sachin
2018-02-01
When two identical (coherent) beams are injected at a semi-infinite non-Hermitian medium from left and right, we show that both reflection (rL, rR) and transmission (tL, tR) amplitudes are nonreciprocal. In a parametric domain, there exists a spectral singularity (SS) at a real energy E = E* = k*^2, and the determinant of the time-reversed two-port scattering matrix, i.e., |det S(-k)| = |tL(-k) tR(-k) - rL(-k) rR(-k)|, vanishes sharply at k = k*, displaying the phenomenon of coherent perfect absorption (CPA). In the complementary parametric domain, the potential becomes either left or right reflectionless at E = Ez. We rule out the existence of invisibility in this avenue: despite rR(Ei) = 0 and tR(Ei) = 1, we have T(Ei) ≠ 1. We present two simple exactly solvable models where expressions for E*, Ez, Ei, and parametric conditions on the potential have been obtained in explicit and simple forms. Earlier, the phenomena of SS and CPA had been found to occur only in scattering complex potentials which are spatially localized (vanish asymptotically) and have tL = tR.
Increased Reliability for Single-Case Research Results: Is the Bootstrap the Answer?
ERIC Educational Resources Information Center
Parker, Richard I.
2006-01-01
There is a need for objective and reliable single-case research (SCR) results in the movement toward evidence-based interventions (EBI), for inclusion in meta-analyses, and for funding accountability in clinical contexts. Yet SCR deals with data that often do not conform to parametric assumptions and that yield results of low reliability. A…
Parametric Cost and Schedule Modeling for Early Technology Development
2018-04-02
Recipient of the Best Paper in the Analysis Methods Category and the 2017 Best Paper Overall awards; it was also presented at the 2017 NASA Cost and Schedule Symposium. … information contributes to the lack of data, objective models, and methods that can be broadly applied in early planning stages.
ERIC Educational Resources Information Center
Samejima, Fumiko
This paper is the final report of a multi-year project sponsored by the Office of Naval Research (ONR) in 1987 through 1990. The main objectives of the research summarized were to: investigate the non-parametric approach to the estimation of the operating characteristics of discrete item responses; revise and strengthen the package computer…
ERIC Educational Resources Information Center
Stephens, Torrance; Braithwaite, Harold; Johnson, Larry; Harris, Catrell; Katkowsky, Steven; Troutman, Adewale
2008-01-01
Objective: To examine impact of CVD risk reduction intervention for African-American men in the Atlanta Empowerment Zone (AEZ) designed to target anger management. Design: Wilcoxon Signed-Rank Test was employed as a non-parametric alternative to the t-test for independent samples. This test was employed because the data used in this analysis…
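For reference, the Wilcoxon signed-rank test is a one-liner in SciPy; the paired pre/post scores below are illustrative, not the study's data.

```python
from scipy.stats import wilcoxon

# Illustrative pre/post anger-management scores (paired observations).
pre  = [22, 25, 17, 24, 16, 29, 20, 23, 19, 20]
post = [19, 20, 15, 26, 12, 27, 18, 24, 17, 16]

stat, p = wilcoxon(pre, post)   # tests symmetry of paired differences about 0
print(f"W = {stat}, p = {p:.3f}")
```

Note that scipy's wilcoxon operates on paired differences; a rank-sum test would be the non-parametric choice for genuinely independent samples.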
Walke, Vaishali A; Gunjkar, Gajanan
2017-01-01
Fine needle aspiration cytology (FNAC) is a quick method to assess tumor grade before removal, which helps clinicians decide on the appropriate neoadjuvant therapy. This is especially true in developing countries, where core needle biopsy is still not used as standard practice to sample breast carcinoma. Assessment of biological aggressiveness by cytological grading (CG) without removing the tumor would be of immense value. The National Cancer Institute, Bethesda, sponsored conference recommended that tumor grading on FNA material be incorporated in cytology reports for prognostication. The present study was carried out to evaluate which of the two cytology grading methods, the five-parametric Robinson or the three-parametric Scarff-Bloom-Richardson (SBR), corresponds better with histological grading (HG) in breast carcinoma. FNAC of 150 cases of ductal carcinoma of the breast with subsequent histological confirmation was studied to assess tumor grade on cytology by two distinct methods, Robinson and Howell's modification of the SBR method, and the results were then correlated with histologic grade. Comparative analysis revealed concordance of 76% for Robinson and 68% for SBR, with kappa values of 0.6683 and 0.4505 and diagnostic accuracy of 86.7% and 78.7%, respectively. We conclude that the Robinson method showed better correlation and a higher kappa value of agreement than the SBR method. The Robinson method of CG is simpler, objective, and easily reproducible for grading breast carcinomas.
Quantification of soil water retention parameters using multi-section TDR-waveform analysis
NASA Astrophysics Data System (ADS)
Baviskar, S. M.; Heimovaara, T. J.
2017-06-01
Soil water retention parameters are important for describing flow in variably saturated soils. TDR is one of the standard methods used for determining the water content of soil samples. In this study, we present an approach to estimate the water retention parameters of a sample which is initially saturated and then subjected to an incremental decrease in boundary head, causing it to drain in a multi-step fashion. TDR waveforms are measured along the height of the sample at the assumed different hydrostatic conditions at daily intervals. The cumulative discharge outflow drained from the sample is also recorded. The saturated water content is obtained using volumetric analysis after the final step of the multi-step drainage. The equation obtained by coupling the unsaturated parametric function and the apparent dielectric permittivity is fitted to a TDR wave-propagation forward model. The unsaturated parametric function is used to spatially interpolate the water contents along the TDR probe. The cumulative discharge outflow data are fitted with the cumulative discharge estimated using the unsaturated parametric function. The weight of water inside the sample at the first and final boundary heads of the multi-step drainage is fitted with the corresponding weights calculated using the unsaturated parametric function. A Bayesian optimization scheme is used to obtain optimized water retention parameters for these different objective functions. This approach can be used for samples of large height and is especially suitable for characterizing sands with a uniform particle size distribution at low capillary heads.
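The abstract does not name the unsaturated parametric function; a common choice for such retention curves is the van Genuchten model, sketched here under that assumption (the parameter values are typical textbook values for a uniform sand, not the study's estimates):

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water retention theta(h); h is capillary pressure head (positive, cm)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Interpolate water content along the TDR probe for a hydrostatic profile,
# where the head varies linearly with elevation above the water table.
z = np.linspace(0.0, 30.0, 7)           # section mid-points along the probe, cm
theta = van_genuchten(z, theta_r=0.045, theta_s=0.38, alpha=0.145, n=2.68)
print(theta)
```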
A note on a simplified and general approach to simulating from multivariate copula functions
Barry K. Goodwin
2013-01-01
Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses "Probability-…
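For context, the general probability-integral-transform recipe for simulating from a copula is short; a sketch using a Gaussian copula with illustrative marginals (the note's own simplified method is truncated above and is not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Target dependence: correlation of the underlying Gaussian copula.
R = np.array([[1.0, 0.7],
              [0.7, 1.0]])

z = rng.multivariate_normal(np.zeros(2), R, size=10_000)
u = stats.norm.cdf(z)                 # probability integral transform -> U(0,1)

# Map uniforms through inverse marginal CDFs of choice (here gamma and normal).
draws = np.column_stack([
    stats.gamma.ppf(u[:, 0], a=2.0, scale=1.5),
    stats.norm.ppf(u[:, 1], loc=100.0, scale=12.0),
])
print(np.corrcoef(draws.T)[0, 1])     # dependence carried through the marginals
```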
Meta II: Multi-Model Language Suite for Cyber Physical Systems
2013-03-01
The AVM META projects have developed tools for designing cyber-physical (CPS, or mechatronic) systems. These systems are increasingly complex, take much… Exemplified by modern amphibious and ground military… and parametric interface of Simulink models and defines associations with CyPhy components and component interfaces.
Nishino, Jo; Kochi, Yuta; Shigemizu, Daichi; Kato, Mamoru; Ikari, Katsunori; Ochi, Hidenori; Noma, Hisashi; Matsui, Kota; Morizono, Takashi; Boroevich, Keith A.; Tsunoda, Tatsuhiko; Matsui, Shigeyuki
2018-01-01
Genome-wide association studies (GWAS) suggest that the genetic architecture of complex diseases consists of unexpectedly numerous variants with small effect sizes. However, the polygenic architectures of many diseases have not been well characterized due to lack of simple and fast methods for unbiased estimation of the underlying proportion of disease-associated variants and their effect-size distribution. Applying empirical Bayes estimation of semi-parametric hierarchical mixture models to GWAS summary statistics, we confirmed that schizophrenia was extremely polygenic [~40% of independent genome-wide SNPs are risk variants, most within odds ratio (OR = 1.03)], whereas rheumatoid arthritis was less polygenic (~4 to 8% risk variants, significant portion reaching OR = 1.05 to 1.1). For rheumatoid arthritis, stratified estimations revealed that expression quantitative loci in blood explained large genetic variance, and low- and high-frequency derived alleles were prone to be risk and protective, respectively, suggesting a predominance of deleterious-risk and advantageous-protective mutations. Despite genetic correlation, effect-size distributions for schizophrenia and bipolar disorder differed across allele frequency. These analyses distinguished disease polygenic architectures and provided clues for etiological differences in complex diseases. PMID:29740473
Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
NASA Astrophysics Data System (ADS)
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme: parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme: structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
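The MCMC constraint step can be illustrated with a generic random-walk Metropolis sampler on a toy one-parameter process-rate model; everything below (model form, noise level, step size) is an illustrative assumption, not the BOSS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "process rate": rate = a * q, observed with noise; infer a.
q_obs = np.linspace(0.1, 1.0, 25)
rate_obs = 3.0 * q_obs + rng.normal(0, 0.2, q_obs.size)

def log_posterior(a, sigma=0.2):
    if a <= 0:                       # flat prior on a > 0
        return -np.inf
    resid = rate_obs - a * q_obs
    return -0.5 * np.sum((resid / sigma) ** 2)

a, chain = 1.0, []
for _ in range(20_000):              # random-walk Metropolis
    prop = a + rng.normal(0, 0.1)
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(a):
        a = prop
    chain.append(a)
print(np.mean(chain[5000:]), np.std(chain[5000:]))   # posterior mean/sd of a
```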
Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie
2013-06-04
Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
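A self-contained sketch of the screening step, the method of elementary effects (Morris screening), on a toy three-parameter model; the model and trajectory counts are illustrative, and the contribution-to-variance step is omitted.

```python
import numpy as np

def model(x):          # toy stand-in for the LCA model, x in [0,1]^3
    return x[0] + 2.0 * x[1] ** 2 + 0.1 * x[0] * x[2]

def elementary_effects(f, k=3, r=50, delta=0.1, seed=0):
    """One-at-a-time elementary effects; returns mu* (mean |EE|) per parameter."""
    rng = np.random.default_rng(seed)
    ee = np.empty((r, k))
    for i in range(r):
        x = rng.uniform(0, 1 - delta, size=k)   # random base point
        f0 = f(x)
        for j in range(k):                       # perturb one input at a time
            xp = x.copy()
            xp[j] += delta
            ee[i, j] = (f(xp) - f0) / delta
    return np.abs(ee).mean(axis=0)               # mu*: screening importance

print(elementary_effects(model))   # flags the quadratic input x[1] as dominant
```

Only the parameters with large mu* would then be passed to the more expensive contribution-to-variance test.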
Integrative genetic risk prediction using non-parametric empirical Bayes classification.
Zhao, Sihai Dave
2017-06-01
Genetic risk prediction is an important component of individualized medicine, but prediction accuracies remain low for many complex diseases. A fundamental limitation is the sample sizes of the studies on which the prediction algorithms are trained. One way to increase the effective sample size is to integrate information from previously existing studies. However, it can be difficult to find existing data that examine the target disease of interest, especially if that disease is rare or poorly studied. Furthermore, individual-level genotype data from these auxiliary studies are typically difficult to obtain. This article proposes a new approach to integrative genetic risk prediction of complex diseases with binary phenotypes. It accommodates possible heterogeneity in the genetic etiologies of the target and auxiliary diseases using a tuning parameter-free non-parametric empirical Bayes procedure, and can be trained using only auxiliary summary statistics. Simulation studies show that the proposed method can provide superior predictive accuracy relative to non-integrative as well as integrative classifiers. The method is applied to a recent study of pediatric autoimmune diseases, where it substantially reduces prediction error for certain target/auxiliary disease combinations. The proposed method is implemented in the R package ssa. © 2016, The International Biometric Society.
A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes
NASA Astrophysics Data System (ADS)
Chakraborty, Shankar; Mitra, Ankan
2018-05-01
Recent advancements in the textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning, etc., which can be used to produce a wide variety of textile apparel so as to fulfil the end requirements of customers. To get the best out of these processes, they should be operated at their optimal parametric settings. However, in the presence of multiple yarn characteristics, which are often conflicting in nature, it becomes a challenging task for spinning industry personnel to identify the best parametric mix that would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of the multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against other multi-objective optimization techniques, such as the desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and in the constraints of the non-linear optimization problem, it can be successfully applied to other processes in the textile industry to determine their optimal parametric settings.
NASA Astrophysics Data System (ADS)
Protim Das, Partha; Gupta, P.; Das, S.; Pradhan, B. B.; Chakraborty, S.
2018-01-01
Maraging steel (MDN 300) finds application in many industries, as it exhibits high hardness and is a very difficult material to machine. Electro-discharge machining (EDM) is an extensively popular machining process which can be used to machine such materials. Optimization of the response parameters is essential for effective machining of these materials. Past researchers have already used the Taguchi method to obtain the optimal responses of the EDM process for this material, with responses such as material removal rate (MRR), tool wear rate (TWR), relative wear ratio (RWR), and surface roughness (SR), considering discharge current, pulse-on time, pulse-off time, arc gap, and duty cycle as process parameters. In this paper, grey relational analysis (GRA) with fuzzy logic is applied to this multi-objective optimization problem to check the responses obtained by implementing the derived parametric setting. It was found that the parametric setting derived by the proposed method results in better responses than those reported by past researchers. The obtained results are also verified using the technique for order of preference by similarity to ideal solution (TOPSIS). The predicted results also show a significant improvement over the results of past researchers.
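A compact sketch of the grey relational analysis core (without the fuzzy layer): normalize each response with the proper sense, compute grey relational coefficients against the ideal sequence, and rank settings by the mean grade. The response matrix below is illustrative, not the paper's data.

```python
import numpy as np

# Rows: parametric settings; columns: responses (MRR, TWR, SR) - illustrative.
Y = np.array([[12.1, 0.34, 4.2],
              [15.8, 0.41, 3.9],
              [10.3, 0.22, 5.1],
              [14.2, 0.28, 4.6]])
larger_is_better = np.array([True, False, False])   # MRR up; TWR, SR down

# Step 1: normalize each response to [0, 1] with the proper sense.
lo, hi = Y.min(axis=0), Y.max(axis=0)
N = np.where(larger_is_better, (Y - lo) / (hi - lo), (hi - Y) / (hi - lo))

# Step 2: grey relational coefficients against the ideal sequence (all ones).
delta = 1.0 - N
zeta = 0.5                                   # distinguishing coefficient
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: grey relational grade = mean GRC; the highest grade wins.
grade = grc.mean(axis=1)
print("best setting:", int(np.argmax(grade)), grade.round(3))
```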
Parametric investigations of plasma characteristics in a remote inductively coupled plasma system
NASA Astrophysics Data System (ADS)
Shukla, Prasoon; Roy, Abhra; Jain, Kunal; Bhoj, Ananth
2016-09-01
Designing a remote plasma system involves source chamber sizing, selection of the coils and/or electrodes that power the plasma, design of the downstream tubes, selection of the materials used in the source and downstream regions, the locations of inlets and outlets, and finally optimization of the process parameter space of pressure, gas flow rates and power delivery. Simulations can aid in spatial and temporal plasma characterization at what are often inaccessible locations for experimental probes in the source chamber. In this paper, we report on simulations of a remote inductively coupled argon plasma system using the modeling platform CFD-ACE+. The coupled multiphysics model description successfully addresses flow, chemistry, electromagnetics, heat transfer and plasma transport in the remote plasma system. The SimManager tool enables easy setup of parametric simulations to investigate the effect of varying the pressure, power, frequency, flow rates and downstream tube lengths. It can also enable the automatic solution of the varied parameters to optimize a user-defined objective function, which may be the integral ion and radical fluxes at the wafer. The fast run time, coupled with the parametric and optimization capabilities, can add significant insight and value in design and optimization.
Axially grooved heat pipe study
NASA Technical Reports Server (NTRS)
1977-01-01
A technology evaluation study on axially grooved heat pipes is presented. The state-of-the-art is reviewed and present and future requirements are identified. Analytical models, the Groove Analysis Program (GAP) and a closed form solution, were developed to facilitate parametric performance evaluations. GAP provides a numerical solution of the differential equations which govern the hydrodynamic flow. The model accounts for liquid recession, liquid/vapor shear interaction, puddle flow as well as laminar and turbulent vapor flow conditions. The closed form solution was developed to reduce computation time and complexity in parametric evaluations. It is applicable to laminar and ideal charge conditions, liquid/vapor shear interaction, and an empirical liquid flow factor which accounts for groove geometry and liquid recession effects. The validity of the closed form solution is verified by comparison with GAP predictions and measured data.
Georges, Anouk; Cambisano, Nadine; Ahariz, Naïma; Karim, Latifa; Georges, Michel
2013-01-01
A genome-wide linkage scan was conducted in a Northern-European multigenerational pedigree with nine of 40 related members affected with concomitant strabismus. Twenty-seven members of the pedigree including all affected individuals were genotyped using a SNP array interrogating > 300,000 common SNPs. We conducted parametric and non-parametric linkage analyses assuming segregation of an autosomal dominant mutation, yet allowing for incomplete penetrance and phenocopies. We detected two chromosome regions with near-suggestive evidence for linkage, respectively on chromosomes 8 and 18. The chromosome 8 linkage implied a penetrance of 0.80 and a rate of phenocopy of 0.11, while the chromosome 18 linkage implied a penetrance of 0.64 and a rate of phenocopy of 0. Our analysis excludes a simple genetic determinism of strabismus in this pedigree. PMID:24376720
Single block three-dimensional volume grids about complex aerodynamic vehicles
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Weilmuenster, K. James
1993-01-01
This paper presents an alternate approach for the generation of volumetric grids for supersonic and hypersonic flows about complex configurations. The method uses parametric two dimensional block face grid definition within the framework of GRIDGEN2D. The incorporation of face decomposition reduces complex surfaces to simple shapes. These simple shapes are combined to obtain the final face definition. The advantages of this method include the reduction of overall grid generation time through the use of vectorized computer code, the elimination of the need to generate matching block faces, and the implementation of simplified boundary conditions. A simple axisymmetric grid is used to illustrate this method. In addition, volume grids for two complex configurations, the Langley Lifting Body (HL-20) and the Space Shuttle Orbiter, are shown.
Equiangular tight frames and unistochastic matrices
NASA Astrophysics Data System (ADS)
Goyeneche, Dardo; Turek, Ondřej
2017-06-01
We demonstrate that a complex equiangular tight frame composed of N vectors in dimension d, denoted ETF (d, N), exists if and only if a certain bistochastic matrix, univocally determined by N and d, belongs to a special class of unistochastic matrices. This connection allows us to find new complex ETFs in infinitely many dimensions and to derive a method to introduce non-trivial free parameters in ETFs. We present an explicit six-parametric family of complex ETF(6,16), which defines a family of symmetric POVMs. Minimal and maximal possible average entanglement of the vectors within this qubit-qutrit family are described. Furthermore, we propose an efficient numerical procedure to compute the unitary matrix underlying a unistochastic matrix, which we apply to find all existing classes of complex ETFs containing up to 20 vectors.
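A small numeric check of the two defining ETF properties, tightness (V V* = (N/d) I) and equiangularity (constant |<vi, vj>|), demonstrated on the real ETF(2, 3) formed by three unit vectors at 120 degrees; the checker is an illustration, not the paper's construction.

```python
import numpy as np

def is_etf(V, tol=1e-10):
    """Check that the columns of V form an equiangular tight frame ETF(d, N)."""
    d, N = V.shape
    V = V / np.linalg.norm(V, axis=0)            # normalize the N vectors
    frame = V @ V.conj().T                       # tightness: V V* = (N/d) I
    tight = np.allclose(frame, (N / d) * np.eye(d), atol=tol)
    G = np.abs(V.conj().T @ V)                   # moduli of inner products
    off = G[~np.eye(N, dtype=bool)]
    equiangular = np.allclose(off, off[0], atol=tol)
    return tight and equiangular

# The 'Mercedes-Benz' frame: a real ETF(2, 3).
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
V = np.vstack([np.cos(angles), np.sin(angles)])
print(is_etf(V))   # True; |<vi, vj>| = 1/2 for all i != j
```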
Endurance Test and Evaluation of Alkaline Water Electrolysis Cells
NASA Technical Reports Server (NTRS)
Kovach, Andrew J.; Schubert, Franz H.; Chang, B. J.; Larkins, Jim T.
1985-01-01
The overall objective of this program is to assess the state of alkaline water electrolysis cell technology and its potential as part of a Regenerative Fuel Cell System (RFCS) of a multikilowatt orbiting powerplant. The program evaluates the endurance capabilities of alkaline electrolyte water electrolysis cells under various operating conditions, including constant condition testing, cyclic testing and high pressure testing. The RFCS demanded the scale-up of existing cell hardware from 0.1 sq ft active electrode area to 1.0 sq ft active electrode area. A single water electrolysis cell and two six-cell modules of 1.0 sq ft active electrode area were designed and fabricated. The two six-cell 1.0 sq ft modules incorporate 1.0 sq ft utilized cores, which allow for minimization of module assembly complexity and increased tolerance to pressure differential. A water electrolysis subsystem was designed and fabricated to allow testing of the six-cell modules. After completing checkout, shakedown, design verification and parametric testing, a module was incorporated into the Regenerative Fuel Cell System Breadboard (RFCSB) for testing at Life Systems, Inc., and at NASA JSC.
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression) and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction-error or other specific statistic, as well as two alternative cross-validation techniques, adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps) and generation of cross-validated end-point estimates; (ii) parallel computing; (iii) various S3-generic and specific plotting functions for data visualization, diagnostic, prediction, summary and display of results. It is available on CRAN and GitHub. PMID:26798326
Analysis of explicit model predictive control for path-following control.
Lee, Junho; Chang, Hyuk-Jun
2018-01-01
In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target applications to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. Regarding the design of the proposed controller, a method of choosing the weighting matrices in the optimization problem and the range of horizons for path-following control is described through simulations. For verification of the proposed controller, simulation results obtained using other control methods such as MPC, the Linear-Quadratic Regulator (LQR), and a driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration.
Multidisciplinary Conceptual Design for Reduced-Emission Rotorcraft
NASA Technical Reports Server (NTRS)
Silva, Christopher; Johnson, Wayne; Solis, Eduardo
2018-01-01
Python-based wrappers for OpenMDAO are used to integrate disparate software for practical conceptual design of rotorcraft. The suite of tools which are connected thus far include aircraft sizing, comprehensive analysis, and parametric geometry. The tools are exercised to design aircraft with aggressive goals for emission reductions relative to fielded state-of-the-art rotorcraft. Several advanced reduced-emission rotorcraft are designed and analyzed, demonstrating the flexibility of the tools to consider a wide variety of potentially transformative vertical flight vehicles. To explore scale effects, aircraft have been sized for 5, 24, or 76 passengers in their design missions. Aircraft types evaluated include tiltrotor, single-main-rotor, coaxial, and side-by-side helicopters. Energy and drive systems modeled include Lithium-ion battery, hydrogen fuel cell, turboelectric hybrid, and turboshaft drive systems. Observations include the complex nature of the trade space for this simple problem, with many potential aircraft design and operational solutions for achieving significant emission reductions. Also interesting is that achieving greatly reduced emissions may not require exotic component technologies, but may be achieved with a dedicated design objective of reducing emissions.
Optimum design of bolted composite lap joints under mechanical and thermal loading
NASA Astrophysics Data System (ADS)
Kradinov, Vladimir Yurievich
A new approach is developed for the analysis and design of mechanically fastened composite lap joints under mechanical and thermal loading. Based on the combined complex potential and variational formulation, the solution method satisfies the equilibrium equations exactly while the boundary conditions are satisfied by minimizing the total potential. This approach is capable of modeling finite laminate planform dimensions, uniform and variable laminate thickness, laminate lay-up, interaction among bolts, bolt torque, bolt flexibility, bolt size, bolt-hole clearance and interference, insert dimensions and insert material properties. Comparing to the finite element analysis, the robustness of the method does not decrease when modeling the interaction of many bolts; also, the method is more suitable for parametric study and design optimization. The Genetic Algorithm (GA), a powerful optimization technique for multiple extrema functions in multiple dimensions search spaces, is applied in conjunction with the complex potential and variational formulation to achieve optimum designs of bolted composite lap joints. The objective of the optimization is to acquire such a design that ensures the highest strength of the joint. The fitness function for the GA optimization is based on the average stress failure criterion predicting net-section, shear-out, and bearing failure modes in bolted lap joints. The criterion accounts for the stress distribution in the thickness direction at the bolt location by applying an approach utilizing a beam on an elastic foundation formulation.
Parametric Crowd Generation Software for MS&T Simulations and Training
2007-02-20
The Effects of Non-Normality on Type III Error for Comparing Independent Means
ERIC Educational Resources Information Center
Mendes, Mehmet
2007-01-01
The major objective of this study was to investigate the effects of non-normality on Type III error rates for the ANOVA F test and its three commonly recommended parametric counterparts, namely the Welch, Brown-Forsythe, and Alexander-Govern tests. These tests were therefore compared in terms of Type III error rates across a variety of population distributions,…
Miles, J
1980-04-01
Transversely periodic solitary-wave solutions of the Boussinesq equations (which govern wave propagation in a weakly dispersive, weakly nonlinear physical system) are determined. The solutions for negative dispersion (e.g., gravity waves) are singular and therefore physically unacceptable. The solutions for positive dispersion (e.g., capillary waves or magnetosonic waves in a plasma) are physically acceptable except in a limited parametric interval, in which they are complex. The two end points of this interval are associated with (two different) resonant interactions among three basic solitary waves, two of which are two-dimensional complex conjugates and the third of which is one-dimensional and real.
Tensorial Minkowski functionals of triply periodic minimal surfaces
Mickel, Walter; Schröder-Turk, Gerd E.; Mecke, Klaus
2012-01-01
A fundamental understanding of the formation and properties of a complex spatial structure relies on robust quantitative tools to characterize morphology. A systematic approach to the characterization of average properties of anisotropic complex interfacial geometries is provided by integral geometry which furnishes a family of morphological descriptors known as tensorial Minkowski functionals. These functionals are curvature-weighted integrals of tensor products of position vectors and surface normal vectors over the interfacial surface. We here demonstrate their use by application to non-cubic triply periodic minimal surface model geometries, whose Weierstrass parametrizations allow for accurate numerical computation of the Minkowski tensors. PMID:24098847
NASA Astrophysics Data System (ADS)
Chrismianto, Deddy; Zakki, Ahmad Fauzan; Arswendo, Berlian; Kim, Dong Joon
2015-12-01
Optimization analysis and computational fluid dynamics (CFD) have been applied simultaneously, in which a parametric model plays an important role in finding the optimal solution. However, it is difficult to create a parametric model for a complex shape with irregular curves, such as a submarine hull form. In this study, the cubic Bezier curve and curve-plane intersection method are used to generate a solid model of a parametric submarine hull form taking three input parameters into account: nose radius, tail radius, and length-height hull ratio (L/H). Application program interface (API) scripting is also used to write code in the ANSYS design modeler. The results show that the submarine shape can be generated with some variation of the input parameters. An example is given that shows how the proposed method can be applied successfully to a hull resistance optimization case. The parametric design of the middle submarine type was chosen to be modified. First, the original submarine model was analyzed, in advance, using CFD. Then, using the response surface graph, some candidate optimal designs with a minimum hull resistance coefficient were obtained. Further, the optimization method in goal-driven optimization (GDO) was implemented to find the submarine hull form with the minimum hull resistance coefficient (Ct). The minimum Ct was obtained. The calculated difference in Ct values between the initial submarine and the optimum submarine is around 0.26%, with the Ct of the initial submarine and the optimum submarine being 0.00150826 and 0.00150429, respectively. The results show that the optimum submarine hull form has a larger nose radius (rn) and higher L/H than those of the initial submarine shape, while the tail radius (rt) is smaller than that of the initial shape.
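The cubic Bezier building block used to parametrize such hull sections is compact enough to sketch directly; the control points below are illustrative placeholders showing how the nose radius and L/H could enter, not the paper's actual geometry.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Points on a cubic Bezier curve, t in [0, 1] (Bernstein form)."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Illustrative nose profile: radius and L/H enter through the control points.
nose_radius, L, H = 1.2, 60.0, 8.0
p0, p3 = np.array([0.0, 0.0]), np.array([L / 6, H / 2])
p1 = np.array([0.0, nose_radius])        # tangency controls the nose shape
p2 = np.array([L / 12, H / 2])
curve = cubic_bezier(p0, p1, p2, p3, np.linspace(0, 1, 50))
print(curve[:3])
```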
Nonparametric estimation of benchmark doses in environmental risk assessment
Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen
2013-01-01
Summary An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
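A hedged sketch of the core idea, monotone dose-response estimation by isotonic regression followed by reading off the BMD, using scikit-learn on toy quantal data; the extra-risk definition and the 10% benchmark response are assumed conventions, and the cited paper's estimator and bootstrap limits are not reproduced.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Toy quantal dose-response data: dose, number tested, number responding.
dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
n    = np.array([50,  50,  50,  50,  50,  50])
resp = np.array([ 2,   3,   6,  10,  21,  36])

iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
risk = iso.fit_transform(dose, resp / n)          # monotone risk estimate

# Extra risk R(d) = (p(d) - p(0)) / (1 - p(0)); BMD at benchmark response 0.10.
bmr = 0.10
extra = (risk - risk[0]) / (1.0 - risk[0])
grid = np.linspace(dose[0], dose[-1], 1000)
extra_grid = np.interp(grid, dose, extra)
bmd = grid[np.argmax(extra_grid >= bmr)]          # smallest dose reaching BMR
print(f"BMD(10%) ~ {bmd:.2f}")
```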
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Van, Luong
1992-01-01
The objectives of this paper are to develop a multidisciplinary computational methodology to predict the hot-gas-side and coolant-side heat transfer and to use it in parametric studies to recommend an optimized design of the coolant channels for a regeneratively cooled liquid rocket engine combustor. An integrated numerical model, which incorporates CFD for the hot-gas thermal environment and thermal analysis for the liner and coolant channels, was developed. This integrated CFD/thermal model was validated by comparing predicted heat fluxes with those of hot-firing tests and industrial design methods for a 40 k calorimeter thrust chamber and the Space Shuttle Main Engine Main Combustion Chamber. Parametric studies were performed for the Advanced Main Combustion Chamber to find a strategy for a proposed combustion chamber coolant channel design.
NASA Astrophysics Data System (ADS)
Rout, Sachindra K.; Choudhury, Balaji K.; Sahoo, Ranjit K.; Sarangi, Sunil K.
2014-07-01
The modeling and optimization of a pulse tube refrigerator is a complicated task, due to the complexity of its geometry and nature. The aim of the present work is to optimize the dimensions of the pulse tube and regenerator for an Inertance-Type Pulse Tube Refrigerator (ITPTR) by using Response Surface Methodology (RSM) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The Box-Behnken design of the response surface methodology is used in an experimental matrix, with four factors and two levels. The diameters and lengths of the pulse tube and regenerator are chosen as the design variables, while the remaining dimensions and operating conditions of the ITPTR are held constant. The required output responses are the cold-head temperature (Tcold) and compressor input power (Wcomp). Computational fluid dynamics (CFD) has been used to model and solve the ITPTR, and the CFD results agreed well with those of a previously published paper. Using the results from the 1-D simulation, RSM is conducted to analyse the effect of the independent variables on the responses. To check the accuracy of the model, the analysis of variance (ANOVA) method has been used. Based on the proposed mathematical RSM models, a multi-objective optimization study using NSGA-II has been performed to optimize the responses.
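The RSM step amounts to fitting a quadratic polynomial surface to the designed runs by least squares; a minimal two-factor sketch follows (run matrix and responses are illustrative, and the subsequent NSGA-II search over the fitted surfaces is omitted).

```python
import numpy as np

# Box-Behnken-style runs (coded units) and a measured response (illustrative).
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], float)
y = np.array([140.0, 118.0, 131.0, 102.0, 96.0, 98.0,
              122.0, 104.0, 119.0, 108.0])      # e.g. cold-head temperature, K

def quad_design(X):
    """Columns: 1, x1, x2, x1*x2, x1^2, x2^2 (full quadratic model)."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
print("fitted coefficients:", beta.round(2))

# The fitted polynomial then serves as a cheap surrogate for the CFD model
# inside a multi-objective optimizer such as NSGA-II.
```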
Dlouhý, Martin
2018-01-01
Geographic differences in health resources, health expenditures, the utilization of health services, and health outcomes have been documented by many studies from various countries of the world. In a publicly financed health system, equal access is one of the main objectives of national health policy, which is why inequalities in the geographic allocation of health resources are an important health policy issue. Measures of inequality express the complexity of variation in the observed variable by a single number, and a variety of inequality measures is available. The objective of this study is to develop a measure of geographic inequality for the case of multiple health resources. The measure uses data envelopment analysis (DEA), a non-parametric method of production function estimation, to transform multiple resources into a single virtual health resource. The study shows that DEA, originally developed for measuring efficiency, can be used successfully to measure inequality. For illustrative purposes, the inequality measure is calculated for the Czech Republic. The values of the separate Robin Hood Indexes (RHIs) are 6.64% for physicians and 3.96% for nurses. In the next step, we use a combined RHI for both health resources. Its value of 5.06% takes into account that combinations of the two health resources serve regional populations. PMID:29541631
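The DEA core can be sketched as one small linear program per region, here the standard input-oriented CCR model solved with scipy; the resource data are illustrative, and the paper's specific aggregation into an RHI is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Regions x resources (inputs) and population served (output) - illustrative.
inputs  = np.array([[4.1, 8.0],    # physicians, nurses per capita, by region
                    [3.2, 7.1],
                    [5.0, 9.4],
                    [2.8, 6.0]])
outputs = np.ones((4, 1))          # normalized population share served

def ccr_efficiency(inputs, outputs, o):
    """Input-oriented CCR efficiency of unit o: minimize theta subject to
    X lam <= theta * x_o, Y lam >= y_o, lam >= 0.
    Decision vector = [theta, lam_1..lam_n]."""
    n, m = inputs.shape
    s = outputs.shape[1]
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    A_in  = np.c_[-inputs[o], inputs.T]               # X lam - theta x_o <= 0
    A_out = np.c_[np.zeros(s), -outputs.T]            # -Y lam <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -outputs[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for o in range(4):
    print(f"region {o}: efficiency = {ccr_efficiency(inputs, outputs, o):.3f}")
```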
NASA Astrophysics Data System (ADS)
Bereau, Tristan; DiStasio, Robert A.; Tkatchenko, Alexandre; von Lilienfeld, O. Anatole
2018-06-01
Classical intermolecular potentials typically require an extensive parametrization procedure for any new compound considered. To do away with prior parametrization, we propose a combination of physics-based potentials with machine learning (ML), coined IPML, which is transferable across small neutral organic and biologically relevant molecules. ML models provide on-the-fly predictions for environment-dependent local atomic properties: electrostatic multipole coefficients (significant error reduction compared to previously reported), the population and decay rate of valence atomic densities, and polarizabilities across conformations and chemical compositions of H, C, N, and O atoms. These parameters enable accurate calculations of intermolecular contributions—electrostatics, charge penetration, repulsion, induction/polarization, and many-body dispersion. Unlike other potentials, this model is transferable in its ability to handle new molecules and conformations without explicit prior parametrization: All local atomic properties are predicted from ML, leaving only eight global parameters—optimized once and for all across compounds. We validate IPML on various gas-phase dimers at and away from equilibrium separation, where we obtain mean absolute errors between 0.4 and 0.7 kcal/mol for several chemically and conformationally diverse datasets representative of non-covalent interactions in biologically relevant molecules. We further focus on hydrogen-bonded complexes—essential but challenging due to their directional nature—where datasets of DNA base pairs and amino acids yield an extremely encouraging 1.4 kcal/mol error. Finally, and as a first look, we consider IPML for denser systems: water clusters, supramolecular host-guest complexes, and the benzene crystal.
Age-dependent biochemical quantities: an approach for calculating reference intervals.
Bjerner, J
2007-01-01
A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, by three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, so either a transformation procedure must be applied to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers, and two-stage transformations (modulus-exponential-normal) are applied to render the distributions Gaussian. Fractional polynomials are employed to model functions for mean and standard deviations dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers is made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
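A minimal sketch of the outlier-elimination and transformation steps (Python/NumPy; a plain log transform and a fixed fence constant stand in for the paper's modulus-exponential-normal transformation and covariate-dependent reiteration):

    import numpy as np

    def tukey_fence(x, k=1.5):
        # keep values inside [Q1 - k*IQR, Q3 + k*IQR]
        q1, q3 = np.percentile(x, [25, 75])
        iqr = q3 - q1
        return x[(x >= q1 - k * iqr) & (x <= q3 + k * iqr)]

    x = np.random.default_rng(0).lognormal(1.0, 0.4, 500)  # hypothetical analyte values
    z = np.log(tukey_fence(x))                             # transform to ~Gaussian
    mu, sd = z.mean(), z.std(ddof=1)
    lower, upper = np.exp(mu - 1.96 * sd), np.exp(mu + 1.96 * sd)  # central 95% interval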
NASA Astrophysics Data System (ADS)
Chatterjee, Sudip K.; Khan, Saba N.; Chaudhuri, Partha Roy
2014-12-01
An ultra-wide 1646 nm (1084-2730 nm), continuous-wave single pump parametric amplification spanning from near-infrared to short-wave infrared band (NIR-SWIR) in a host lead-silicate based binary multi-clad microstructure fiber (BMMF) is analyzed and reported. This ultra-broad band (widest reported to date) parametric amplification with gain more than 10 dB is theoretically achieved by a combination of low input pump power source ~7 W and a short-length of ~70 cm of nonlinear-BMMF through accurately engineered multi-order dispersion coefficients. A highly efficient theoretical formulation based on four-wave-mixing (FWM) is worked out to determine fiber's chromatic dispersion (D) profile which is used to optimise the gain-bandwidth and ripple of the parametric gain profile. It is seen that by appropriately controlling the higher-order dispersion coefficient (up-to sixth order), a great enhancement in the gain-bandwidth (2-3 times) can be achieved when operated very close to zero-dispersion wavelength (ZDW) in the anomalous dispersion regime. Moreover, the proposed theoretical model can predict the maximum realizable spectral width and the required pump-detuning (w.r.t ZDW) of any advanced complex microstructured fiber. Our thorough investigation of the wide variety of broadband gain spectra obtained as an integral part of this research work opens up the way for realizing amplification in the region (SWIR) located far from the pump (NIR) where good amplifiers currently do not exist.
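The dependence of the gain bandwidth on the even-order dispersion coefficients can be sketched from the standard single-pump FWM gain formula; the following Python snippet uses illustrative coefficients, not those of the proposed fiber.

    import math
    import numpy as np

    def parametric_gain_db(domega, betas, gamma, P, L):
        # linear phase mismatch from even dispersion orders (here up to beta6)
        dbeta = sum(2 * b * domega**m / math.factorial(m) for m, b in betas.items())
        kappa = dbeta + 2 * gamma * P                       # total phase mismatch
        g = np.sqrt((gamma * P)**2 - (kappa / 2)**2 + 0j)   # parametric gain coefficient
        G = 1 + np.abs(gamma * P * np.sinh(g * L) / g)**2   # signal power gain
        return 10 * np.log10(G)

    betas = {2: -1e-27, 4: 2e-55, 6: -3e-83}   # beta_m in s^m/m, illustrative only
    domega = np.linspace(1e12, 4e14, 2000)     # signal detuning from the pump (rad/s)
    gain = parametric_gain_db(domega, betas, gamma=0.5, P=7.0, L=0.7)

Truncating the sum after beta2 or beta4 in the same snippet shows directly how the sixth-order term reshapes the phase-matching curve and widens the band.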
A parametric LQ approach to multiobjective control system design
NASA Technical Reports Server (NTRS)
Kyr, Douglas E.; Buchner, Marc
1988-01-01
The synthesis of a constant-parameter output feedback control law of constrained structure is set in a multiple objective linear quadratic regulator (MOLQR) framework. The use of intuitive objective functions, such as model-following ability and closed-loop trajectory sensitivity, allows multiple objective decision making techniques, such as the surrogate worth tradeoff method, to be applied. For the continuous-time deterministic problem with an infinite time horizon, dynamic compensators as well as static output feedback controllers can be synthesized using a descent Anderson-Moore algorithm modified to impose linear equality constraints on the feedback gains by moving in feasible directions. Results of three different examples are presented, including a unique reformulation of the sensitivity reduction problem.
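As a minimal illustration of the scalarized problem (not the constrained output-feedback synthesis of the paper, which uses the modified Anderson-Moore descent), a weighted sum of two quadratic objectives reduces to a single LQR problem solvable with SciPy; the matrices and weights below are invented.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    Q1 = np.diag([10.0, 1.0])   # e.g. model-following emphasis
    Q2 = np.diag([1.0, 5.0])    # e.g. trajectory-sensitivity emphasis
    R = np.array([[1.0]])

    w = 0.7                     # trade-off weight chosen by the decision maker
    P = solve_continuous_are(A, B, w * Q1 + (1 - w) * Q2, R)
    K = np.linalg.solve(R, B.T @ P)   # full-state feedback u = -K x

Sweeping w and recomputing K traces out the trade-off surface that methods such as the surrogate worth tradeoff operate on.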
NASA Astrophysics Data System (ADS)
Montoya, Paula; Ballesteros, José; Gervás, Pablo
2015-04-01
The increasing complexity of space use and resource cycles in cities demands an understanding of the built environment as "ecological": enabling mutation while remaining balanced and biologically sustainable. Designing man's environment is no longer a question of defining types, but rather an act of inserting changes within a complex system. Architecture and urban planning have become increasingly aware of their condition as system-oriented disciplines, and they are in the process of developing the necessary languages, design tools, and alliances. We will argue the relevance of parametric maps as one of the most powerful of those tools, in terms of their potential for adaptive prototype design, convergence of disciplines, and collaborative work. Cities need to change in order to survive. As the main human landscape (by 2050, 75% of the world's population will live in urban areas), cities follow biological patterns of behaviour, constantly replacing their cells, renovating infrastructure systems and refining methods for energy provision and waste management. They need to adapt constantly. As responsive entities, they develop their own protocols for reaction to environmental change and confront the increasing pressure of several issues related to scale: population, mobility, water and energy supply, pollution... The representation of these urban issues on maps becomes crucial for understanding and addressing them in design. Maps enhanced with parametric tools are relational: they not only register environmental dynamics but also allow adaptation of the system through interwoven parameters of mutation. Citizens are taking part in decisions and becoming aware of their role as urban experts in a bottom-up design process of the cities where they live. Modern tools for dynamic visualisation and collaborative editing of maps have an important role to play in this process. More and more people consult maps on hand-held devices as part of their daily routine. The advent of open-access collaborative maps allows them to actively extend and modify these maps by uploading data of their own design. This can generate an immense amount of unique information that is publicly available. The work of architects, planners, and political agents can be informed by the contributions of a community of volunteer cartographers. Counter-cartographies built through collaboration arise from spontaneous processes of knowledge and data collection, and demand continuous non-commercial revision. Both scientific and non-academic users have direct access to geostrategic information and actively take part in exploring, recording and inserting their verified contributions into the way in which our world is described. This proposal explores the idea of a counter-cartography as a collection of maps that unveil territorial environmental conditions different from those shown in official maps. By using parametric tools we can incorporate information of this type directly into architectural documents and generate interlaced changes in the design. A parametric map is a flexible yet accurate tool for design and discovery: it integrates multiple particular views into a precise physical context that culminates in a generative design. A complex map worked in this way is gradually becoming the ultimate document for designing the city in an integrated manner.
Altschuler, Ted S.; Molholm, Sophie; Butler, John S.; Mercier, Manuel R.; Brandwein, Alice B.; Foxe, John J.
2014-01-01
The adult human visual system can efficiently fill-in missing object boundaries when low-level information from the retina is incomplete, but little is known about how these processes develop across childhood. A decade of visual-evoked potential (VEP) studies has produced a theoretical model identifying distinct phases of contour completion in adults. The first, termed a perceptual phase, occurs from approximately 100-200 ms and is associated with automatic boundary completion. The second is termed a conceptual phase occurring between 230-400 ms. The latter has been associated with the analysis of ambiguous objects which seem to require more effort to complete. The electrophysiological markers of these phases have both been localized to the lateral occipital complex, a cluster of ventral visual stream brain regions associated with object-processing. We presented Kanizsa-type illusory contour stimuli, often used for exploring contour completion processes, to neurotypical persons ages 6-31 (N= 63), while parametrically varying the spatial extent of these induced contours, in order to better understand how filling-in processes develop across childhood and adolescence. Our results suggest that, while adults complete contour boundaries in a single discrete period during the automatic perceptual phase, children display an immature response pattern - engaging in more protracted processing across both timeframes and appearing to recruit more widely distributed regions which resemble those evoked during adult processing of higher-order ambiguous figures. However, children older than 5 years of age were remarkably like adults in that the effects of contour processing were invariant to manipulation of contour extent. PMID:24365674
NASA Astrophysics Data System (ADS)
Fytanidis, D. K.; Wu, H.; Landry, B. J.; Garcia, M. H.
2017-12-01
Abandoned Unexploded Ordnances (UXOs) from wartime events, accidents, training or other military activities can be found in coastal environments. While interest in these hazardous submerged objects has increased, knowledge gaps remain regarding the mechanisms of incipient motion and the flow behavior around UXOs lying on the seafloor. Numerical modeling of the flow around UXOs placed near the bed is conducted for unidirectional and oscillatory flow conditions using Computational Fluid Dynamics techniques. The Reynolds-Averaged Navier-Stokes (RANS) approach is used to simulate the complex turbulent flow field around UXOs. The numerical results are compared with two-dimensional Particle Image Velocimetry measurements from experiments conducted in unidirectional and oscillatory flow facilities within the Ven Te Chow Hydrosystems Laboratory to evaluate the accuracy of the applied RANS-based solver. Realistic boundary conditions are imposed in the numerical models to mimic the experimental conditions in the laboratory facilities. The numerical results agree well with the experimental data. In addition, the effect of the angle of attack on the forces that UXOs experience is examined. Numerical results suggest that the orientation of UXOs with respect to the mean flow is an important parameter for incipient motion under critical flow conditions, in agreement with prior laboratory results on the identification of critical flow conditions for the initiation of motion of UXOs. Finally, an extensive parametric analysis is conducted to evaluate the effect of the maximum current velocity and wave characteristics (maximum velocity and period) on the flow forces and the mean flow pattern around the objects.
Drawing dynamical and parameters planes of iterative families and methods.
Chicharro, Francisco I; Cordero, Alicia; Torregrosa, Juan R
2013-01-01
The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide excellent schemes (or dreadful ones).
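The kind of dynamical-plane picture described above can be reproduced in a few lines; the sketch below uses Newton's method on z^2 - 1 as a stand-in for Kim's parametric family (whose iteration function is not reproduced here) and colours each starting point by the root it converges to.

    import numpy as np
    import matplotlib.pyplot as plt

    def newton_step(z):
        return z - (z**2 - 1) / (2 * z)   # stand-in iteration function

    re, im = np.meshgrid(np.linspace(-2, 2, 600), np.linspace(-2, 2, 600))
    z = re + 1j * im
    for _ in range(40):
        z = newton_step(z)

    basin = np.where(z.real > 0, 1, 0)    # which root each point converged to
    plt.imshow(basin, extent=[-2, 2, -2, 2], cmap="coolwarm")
    plt.xlabel("Re(z)"); plt.ylabel("Im(z)")
    plt.show()

For a parametric family, repeating this scan over a grid of parameter values (seeded at the free critical points) yields the parameter planes discussed in the abstract.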
Experimental generation of complex noisy photonic entanglement
NASA Astrophysics Data System (ADS)
Dobek, K.; Karpiński, M.; Demkowicz-Dobrzański, R.; Banaszek, K.; Horodecki, P.
2013-02-01
We present an experimental scheme based on spontaneous parametric down-conversion to produce multiple-photon pairs in maximally entangled polarization states using an arrangement of two type-I nonlinear crystals. By introducing correlated polarization noise in the paths of the generated photons we prepare mixed-entangled states whose properties illustrate fundamental results obtained recently in quantum information theory, in particular those concerning bound entanglement and privacy.
Free boundary skin current MHD (magnetohydrodynamic) equilibria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reusch, M.F.
1988-02-01
Function-theoretic methods in the complex plane are used to develop simple parametric hodograph formulae which generate sharp-boundary equilibria of arbitrary shape. The related method of Gorenflo and Merkel is discussed. A numerical technique for the construction of solutions, based on one of these methods, is presented. A study is made of the bifurcations of an equilibrium of general form.
Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M
2006-04-21
Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN) and several non-parametric methods, which include the set association approach, combinatorial partitioning method (CPM), restricted partitioning method (RPM), multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods to approach association studies with large numbers of predictor variables. GPNN on the other hand may be a useful approach to select and model important predictors, but its performance to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset of predictors with an important contribution to disease. The combinatorial methods give more insight in combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association studies using the case-control design, the application of a combination of several methods, including the set association approach, MDR and the random forests approach, will likely be a useful strategy to find the important genes and interaction patterns involved in complex diseases.
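As one concrete instance of the approaches discussed, the sketch below fits a random forest to hypothetical case-control SNP data and ranks predictors by importance; the data, causal loci and thresholds are invented for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(500, 200))     # 500 subjects, 200 SNPs coded 0/1/2
    y = (X[:, 10] + X[:, 42] + rng.normal(0, 1, 500) > 3).astype(int)  # two causal SNPs

    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
    top = np.argsort(rf.feature_importances_)[::-1][:10]   # candidate predictor subset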
NASA Astrophysics Data System (ADS)
Oreni, D.; Karimi, G.; Barazzetti, L.
2017-08-01
This paper presents the development of a BIM model for a stratified historic structure characterized by a complex geometry: Filarete's Ospedale Maggiore ice house, one of the few remaining historic ice houses in Milan. Filarete, a well-known Renaissance architect and theorist, planned the hospital in the 15th century, but the ice house was built two centuries later as a double-storey irregular octagonal brick structure, half under and half above ground, that enclosed another circular structure called the ice room. The purpose of the double-walled structure was to store ice in the middle and to store and preserve perishable food and medicine at the outer side of the ice room. During World War II, major portions of the hospital and the above-ground section of the ice house were bombed and heavily damaged. Later, in 1962, the hospital was restored and rehabilitated into a university, with the plan to conceal the ice house's remaining structure in the courtyard, which ultimately was excavated and incorporated into a new library for the university. A team of engineers, architects, and students from Politecnico di Milano and Carleton University conducted two heritage recording surveys in 2015 and 2016 to fully document the existing condition of the ice house, resulting in a comprehensive laser scanning and photogrammetric point cloud dataset. The point cloud data was consolidated and imported into two leading parametric modelling software packages, Autodesk Revit© and Graphisoft ArchiCAD©, with the goal of developing two BIMs in parallel in order to study and compare the software BIM workflow, parametric capabilities, ability to capture the complex geometry with high accuracy, and the duration of parametric modelling. The comparison of the two packages revealed their workflow limitations, leading to integration of the BIM generative process with other pure modelling software such as Rhinoceros©. The integrative BIM process led to the production of a comprehensive BIM model that documented related historic data and the existing physical state of the ice house, to be used as a baseline for preventive maintenance, monitoring, and future conservation projects.
Nonlinear Tides in Close Binary Systems
NASA Astrophysics Data System (ADS)
Weinberg, Nevin N.; Arras, Phil; Quataert, Eliot; Burkart, Josh
2012-06-01
We study the excitation and damping of tides in close binary systems, accounting for the leading-order nonlinear corrections to linear tidal theory. These nonlinear corrections include two distinct physical effects: three-mode nonlinear interactions, i.e., the redistribution of energy among stellar modes of oscillation, and nonlinear excitation of stellar normal modes by the time-varying gravitational potential of the companion. This paper, the first in a series, presents the formalism for studying nonlinear tides and studies the nonlinear stability of the linear tidal flow. Although the formalism we present is applicable to binaries containing stars, planets, and/or compact objects, we focus on non-rotating solar-type stars with stellar or planetary companions. Our primary results include the following: (1) The linear tidal solution almost universally used in studies of binary evolution is unstable over much of the parameter space in which it is employed. More specifically, resonantly excited internal gravity waves in solar-type stars are nonlinearly unstable to parametric resonance for companion masses M' ≳ 10-100 M⊕ at orbital periods P ≈ 1-10 days. The nearly static "equilibrium" tidal distortion is, however, stable to parametric resonance except for solar binaries with P ≲ 2-5 days. (2) For companion masses larger than a few Jupiter masses, the dynamical tide causes short length scale waves to grow so rapidly that they must be treated as traveling waves, rather than standing waves. (3) We show that the global three-wave treatment of parametric instability typically used in the astrophysics literature does not yield the fastest-growing daughter modes or instability threshold in many cases. We find a form of parametric instability in which a single parent wave excites a very large number of daughter waves (N ≈ 10^3 [P/10 days] for a solar-type star) and drives them as a single coherent unit with growth rates that are a factor of ≈N faster than the standard three-wave parametric instability. These are local instabilities viewed through the lens of global analysis; the coherent global growth rate follows local rates in the regions where the shear is strongest. In solar-type stars, the dynamical tide is unstable to this collective version of the parametric instability for even sub-Jupiter companion masses with P ≲ a month. (4) Independent of the parametric instability, the dynamical and equilibrium tides excite a wide range of stellar p-modes and g-modes by nonlinear inhomogeneous forcing; this coupling appears particularly efficient at draining energy out of the dynamical tide and may be more important than either wave breaking or parametric resonance at determining the nonlinear dissipation of the dynamical tide.
NASA Astrophysics Data System (ADS)
Radtke, T.; Fritzsche, S.
2008-11-01
Entanglement is known today as a key resource in many protocols from quantum computation and quantum information theory. However, despite the successful demonstration of several protocols, such as teleportation or quantum key distribution, there are still many open questions of how entanglement affects the efficiency of quantum algorithms or how it can be protected against noisy environments. The investigation of these and related questions often requires a search or optimization over the set of quantum states and, hence, a parametrization of them and various other objects. To facilitate such studies in quantum information theory, here we present an extension of the FEYNMAN program that was developed during recent years as a toolbox for the simulation and analysis of quantum registers. In particular, we implement parameterizations of hermitian and unitary matrices (of arbitrary order), pure and mixed quantum states as well as separable states. In addition to being a prerequisite for the study of many optimization problems, these parameterizations also provide the necessary basis for heuristic studies which make use of random states, unitary matrices and other objects.
Program summary
Program title: FEYNMAN
Catalogue identifier: ADWE_v4_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWE_v4_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 24 231
No. of bytes in distributed program, including test data, etc.: 1 416 085
Distribution format: tar.gz
Programming language: Maple 11
Computer: Any computer with Maple software installed
Operating system: Any system that supports Maple; program has been tested under Microsoft Windows XP, Linux
Classification: 4.15
Does the new version supersede the previous version?: Yes
Nature of problem: During the last decades, quantum information science has contributed to our understanding of quantum mechanics and has provided also new and efficient protocols, based on the use of entangled quantum states. To determine the behavior and entanglement of n-qubit quantum registers, symbolic and numerical simulations need to be applied in order to analyze how these quantum information protocols work and which role the entanglement plays hereby.
Solution method: Using the computer algebra system Maple, we have developed a set of procedures that support the definition, manipulation and analysis of n-qubit quantum registers. These procedures also help to deal with (unitary) logic gates and (nonunitary) quantum operations that act upon the quantum registers. With the parameterizations of various frequently applied objects that are implemented in the present version, the program now facilitates a wider range of symbolic and numerical studies. All commands can be used interactively in order to simulate and analyze the evolution of n-qubit quantum systems, both in ideal and noisy quantum circuits.
Reasons for new version: In the first version of the FEYNMAN program [1], we implemented the data structures and tools that are necessary to create, manipulate and to analyze the state of quantum registers. Later [2,3], support was added to deal with quantum operations (noisy channels) as an ingredient which is essential for studying the effects of decoherence.
With the present extension, we add a number of parametrizations of objects frequently utilized in decoherence and entanglement studies, such as hermitian and unitary matrices, probability distributions, or various kinds of quantum states. This extension therefore provides the basis, for example, for the optimization of a given function over the set of pure states or the simple generation of random objects.
Running time: Most commands that act upon quantum registers with five or fewer qubits take ⩽10 seconds of processor time on a Pentium 4 processor with ⩾2 GHz or newer, and about 5-20 MB of working memory (in addition to the memory for the Maple environment). Especially when working with symbolic expressions, however, the requirements on CPU time and memory critically depend on the size of the quantum registers, owing to the exponential growth of the dimension of the associated Hilbert space. For example, complex (symbolic) noise models, i.e. with several symbolic Kraus operators, often result for multi-qubit systems in very large expressions that dramatically slow down the evaluation of e.g. distance measures or the final-state entropy. In these cases, Maple's assume facility sometimes helps to reduce the complexity of the symbolic expressions, but more often only a numerical evaluation is possible eventually. Since the complexity of the various commands of the FEYNMAN program and the possible usage scenarios can be very different, no general scaling law for CPU time or the memory requirements can be given.
References:
[1] T. Radtke, S. Fritzsche, Comput. Phys. Comm. 173 (2005) 91.
[2] T. Radtke, S. Fritzsche, Comput. Phys. Comm. 175 (2006) 145.
[3] T. Radtke, S. Fritzsche, Comput. Phys. Comm. 176 (2007) 617.
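The new version is written in Maple, but the flavour of one of the added parametrizations is easy to convey in a short Python sketch: a Haar-distributed random unitary drawn via the phase-corrected QR decomposition of a complex Gaussian matrix (a standard recipe, not the FEYNMAN code itself).

    import numpy as np

    def haar_unitary(n, rng=None):
        # QR of a complex Ginibre matrix, with column phases fixed so the
        # resulting distribution is uniform (Haar) over U(n)
        rng = rng or np.random.default_rng()
        z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
        q, r = np.linalg.qr(z)
        d = np.diag(r)
        return q * (d / np.abs(d))

    u = haar_unitary(4)        # e.g. a random two-qubit gate
    psi = u[:, 0]              # equivalently, a random pure state
    assert np.allclose(u.conj().T @ u, np.eye(4), atol=1e-12)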
D'Suze, Gina; Sandoval, Moisés; Sevcik, Carlos
2015-12-15
A characteristic of venom elution patterns, shared with many other complex systems, is that many of their features cannot be properly described with statistical or Euclidean concepts. The understanding of such systems became possible with Mandelbrot's fractal analysis. Venom elution patterns were produced using reversed-phase high-performance liquid chromatography (HPLC) with 1 mg of venom. One reason for the lack of quantitative analyses of the sources of venom variability is the difficulty of parametrizing the venom chromatograms' complexity. We quantify this complexity by means of an algorithm which estimates the contortedness (Q) of a waveform. Fractal analysis was used to compare venoms and to measure inter- and intra-specific venom variability. We studied variations in venom complexity derived from gender, seasonal and environmental factors, duration of captivity in the laboratory, and the technique used to milk venom.
NASA Astrophysics Data System (ADS)
Zhang, Yali; Wang, Jun
2017-09-01
In an attempt to investigate the nonlinear complex evolution of financial dynamics, a new financial price model - the multitype range-intensity contact (MRIC) financial model - is developed based on the multitype range-intensity interacting contact system, in which the interaction and transmission of different types of investment attitudes in a stock market are simulated by virus spreading. Two new random visibility graph (VG) based analyses and Lempel-Ziv complexity (LZC) are applied to study the complex behaviors of return time series and the corresponding random sorted series. The VG method is based on complex network theory, and the LZC is a non-parametric measure of complexity reflecting the rate of new pattern generation of a series. In this work, real stock market indices are compared with simulation data from the proposed model. The numerical empirical study shows similar complexity behaviors between the model and the real markets, confirming that the financial model is reasonable to some extent.
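The LZC measure used here is essentially the Lempel-Ziv (1976) phrase count of a binarized series; a minimal sketch follows (the return series is synthetic).

    import numpy as np

    def lempel_ziv_complexity(bits):
        # count the phrases in the LZ76 parsing: each new phrase is the
        # shortest block not yet seen in the preceding text (overlap allowed)
        s = "".join(map(str, bits))
        i, c, n = 0, 0, len(s)
        while i < n:
            k = 1
            while i + k <= n and s[i:i + k] in s[:i + k - 1]:
                k += 1
            c += 1
            i += k
        return c

    returns = np.random.default_rng(1).normal(size=1024)   # hypothetical returns
    bits = (returns > np.median(returns)).astype(int)      # binarize about the median
    print(lempel_ziv_complexity(bits))

Comparing the count for the observed series against a randomly shuffled copy is the usual way to separate genuine temporal structure from the marginal distribution.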
Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G
2009-09-01
The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
NASA Technical Reports Server (NTRS)
Gurnett, D. A.; Maggs, J. E.; Gallagher, D. L.; Kurth, W. S.; Scarf, F. L.
1981-01-01
Observations are presented of the parametric decay and spatial collapse of Langmuir waves driven by an electron beam streaming into the solar wind from the Jovian bow shock. Long wavelength Langmuir waves upstream of the bow shock are effectively converted into short wavelength waves no longer in resonance with the beam. The conversion is shown to be the result of a nonlinear interaction involving the beam-driven pump, a sideband emission, and a low level of ion-acoustic turbulence. The beam-driven Langmuir wave emission breaks up into a complex sideband structure with both positive and negative Doppler shifts. In some cases, the sideband emission consists of isolated wave packets with very short duration bursts, which are very intense and are thought to consist of envelope solitons which have collapsed to spatial scales of only a few Debye lengths.
Synthesis and Control of Flexible Systems with Component-Level Uncertainties
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Lim, Kyong B.
2009-01-01
An efficient and computationally robust method for synthesis of component dynamics is developed. The method defines the interface forces/moments as feasible vectors in transformed coordinates to ensure that connectivity requirements of the combined structure are met. The synthesized system is then defined in a transformed set of feasible coordinates. The simplicity of form is exploited to effectively deal with modeling parametric and non-parametric uncertainties at the substructure level. Uncertainty models of reasonable size and complexity are synthesized for the combined structure from those in the substructure models. In particular, we address frequency and damping uncertainties at the component level. The approach first considers the robustness of synthesized flexible systems. It is then extended to deal with non-synthesized dynamic models with component-level uncertainties by projecting uncertainties to the system level. A numerical example is given to demonstrate the feasibility of the proposed approach.
NASA Technical Reports Server (NTRS)
Stagliano, T. R.; Witmer, E. A.; Rodal, J. J. A.
1979-01-01
Finite element modeling alternatives as well as the utility and limitations of the two-dimensional structural response computer code CIVM-JET 4B for predicting the transient, large-deflection, elastic-plastic structural responses of two-dimensional beam and/or ring structures which are subjected to rigid-fragment impact were investigated. The applicability of the CIVM-JET 4B analysis and code for the prediction of steel containment ring response to impact by complex deformable fragments from a tri-hub burst of a T58 turbine rotor was studied. Dimensional analysis considerations were used in a parametric examination of data from engine rotor burst containment experiments and data from sphere-beam impact experiments. The use of the CIVM-JET 4B computer code for making parametric structural response studies on both fragment-containment structure and fragment-deflector structure was illustrated. Modifications to the analysis/computation procedure were developed to alleviate restrictions.
Stable finite element approximations of two-phase flow with soluble surfactant
NASA Astrophysics Data System (ADS)
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2015-09-01
A parametric finite element approximation of incompressible two-phase flow with soluble surfactants is presented. The Navier-Stokes equations are coupled to bulk and surface PDEs for the surfactant concentrations. At the interface, adsorption, desorption and stress balances involving curvature effects and Marangoni forces have to be considered. A parametric finite element approximation for the advection of the interface, which maintains good mesh properties, is coupled to the evolving surface finite element method, which is used to discretize the surface PDE for the interface surfactant concentration. The resulting system is solved together with standard finite element approximations of the Navier-Stokes equations and of the bulk parabolic PDE for the surfactant concentration. Semidiscrete and fully discrete approximations are analyzed with respect to stability, conservation and existence/uniqueness issues. The approach is validated for simple test cases and for complex scenarios, including colliding drops in a shear flow, which are computed in two and three space dimensions.
NASA Astrophysics Data System (ADS)
Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank
In part I, we presented the theoretical foundations of the GOAT algorithm for the optimal control of quantum systems. Here in part II, we focus on several applications of GOAT to superconducting qubit architectures. First, we consider a control-Z gate on Xmon qubits with an Erf parametrization of the optimal pulse. We show that a fast and accurate gate can be obtained with only 16 parameters, as compared to the hundreds of parameters required by other algorithms. We present numerical evidence that such a parametrization should allow an efficient in-situ calibration of the pulse. Next, we consider the flux-tunable coupler by IBM. We show that the optimization can be carried out in a more realistic model of the system than was employed in the original study, which is expected to further simplify the calibration process. Moreover, GOAT reduced the complexity of the optimal pulse to only 6 Fourier components, composed with analytic wrappers.
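A plausible reading of an Erf pulse parametrization is a sum of smooth flat-top segments built from error-function ramps; the sketch below is a hypothetical stand-in (four parameters per segment, so four segments would give the 16 parameters quoted), not the actual GOAT ansatz.

    import numpy as np
    from scipy.special import erf

    def erf_pulse(t, segments):
        # segments: list of (amplitude, switch-on time, switch-off time, ramp width)
        out = np.zeros_like(t)
        for a, t_on, t_off, sigma in segments:
            out += 0.5 * a * (erf((t - t_on) / sigma) - erf((t - t_off) / sigma))
        return out

    t = np.linspace(0.0, 50.0, 1000)               # ns
    pulse = erf_pulse(t, [(0.8, 5.0, 45.0, 2.0)])  # one smooth flat-top segment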
NASA Astrophysics Data System (ADS)
Charroyer, L.; Chiello, O.; Sinou, J.-J.
2016-12-01
In this paper, the study of a damped mass-spring system of three degrees of freedom with friction is proposed in order to highlight the differences in mode coupling instabilities between planar and rectilinear friction assumptions. Well-known results on the effect of structural damping in the field of friction-induced vibration are extended to the specific case of a damped mechanical system with planar friction. It is emphasised that the lowering and smoothing effects are not so intuitive in this latter case. The stability analysis is performed by calculating the complex eigenvalues of the linearised system and by using the Routh-Hurwitz criterion. Parametric studies are carried out in order to evaluate the effects of various system parameters on stability. Special attention is paid to the understanding of the role of damping and the associated destabilisation paradox in mode-coupling instabilities with planar and rectilinear friction assumptions.
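The eigenvalue-based stability test at the heart of such an analysis fits in a few lines; the matrices below are illustrative, not those of the paper, and the asymmetric term mimics how friction couples modes.

    import numpy as np

    M = np.eye(3)
    C = np.diag([0.02, 0.02, 0.02])               # structural damping
    K = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])
    mu = 0.6                                      # friction coefficient
    K_f = K.copy()
    K_f[0, 1] += 2.0 * mu                         # asymmetric friction coupling

    # first-order form of M x'' + C x' + K_f x = 0
    A = np.block([[np.zeros((3, 3)), np.eye(3)],
                  [-np.linalg.solve(M, K_f), -np.linalg.solve(M, C)]])
    eig = np.linalg.eigvals(A)
    unstable = bool(np.any(eig.real > 1e-12))     # mode-coupling (flutter) check

Sweeping mu and the damping entries while monitoring max(eig.real) reproduces the kind of parametric stability maps, and the lowering and smoothing effects, discussed above.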
NASA Astrophysics Data System (ADS)
Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen
2013-08-01
We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. The method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so that their characteristic scales are significantly different. We can therefore distinguish them easily in the wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral-fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response errors, our method also performs significantly better than the spectral-fitting based method. Our method obtains results similar to the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
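A minimal sketch of the scale-separation idea, using a discrete stationary wavelet transform from PyWavelets as a stand-in for the continuous transform of the paper (spectra and amplitudes are invented):

    import numpy as np
    import pywt

    nu = np.linspace(100.0, 200.0, 512)               # frequency channels (MHz)
    foreground = 1e4 * (nu / 150.0) ** -2.7           # smooth power-law foreground
    signal = 5e-3 * np.sin(2 * np.pi * nu / 1.5)      # stand-in for fast 21 cm structure
    spec = foreground + signal

    # reconstruct only the coarse approximation as the smooth-foreground
    # estimate, then subtract it to expose the fine-scale component
    coeffs = pywt.swt(spec, "db4", level=6)
    smooth = pywt.iswt([(cA, np.zeros_like(cD)) for cA, cD in coeffs], "db4")
    residual = spec - smooth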
2015-01-01
Highly charged metal ions act as catalytic centers and structural elements in a broad range of chemical complexes. The nonbonded model for metal ions is extensively used in molecular simulations due to its simple form, computational speed, and transferability. We have proposed and parametrized a 12-6-4 LJ (Lennard-Jones)-type nonbonded model for divalent metal ions in previous work, which showed a marked improvement over the 12-6 LJ nonbonded model. In the present study, by treating the experimental hydration free energies and ion–oxygen distances of the first solvation shell as targets for our parametrization, we evaluated 12-6 LJ parameters for 18 M(III) and 6 M(IV) metal ions for three widely used water models (TIP3P, SPC/E, and TIP4PEW). As expected, the interaction energy underestimation of the 12-6 LJ nonbonded model increases dramatically for the highly charged metal ions. We then parametrized the 12-6-4 LJ-type nonbonded model for these metal ions with the three water models. The final parameters reproduced the target values with good accuracy, which is consistent with our previous experience using this potential. Finally, tests were performed on a protein system, and the obtained results validate the transferability of these nonbonded model parameters. PMID:25145273
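The functional form at issue is compact enough to state directly; in the sketch below the coefficients are illustrative placeholders, not the published parameters for any ion/water-model pair.

    import numpy as np

    def lj_12_6_4(r, c12, c6, c4):
        # 12-6-4 LJ-type potential: the added r^-4 term captures the
        # charge-induced-dipole attraction that the plain 12-6 form misses
        return c12 / r**12 - c6 / r**6 - c4 / r**4

    r = np.linspace(1.5, 6.0, 200)             # ion-oxygen distance (Angstrom)
    u = lj_12_6_4(r, 5.0e4, 6.0e2, 8.0e1)      # energy with made-up coefficients
    r_min = r[np.argmin(u)]                    # first-shell distance of this toy curve

For a highly charged ion the c4 term deepens the well substantially, which is precisely the interaction-energy underestimation that the plain 12-6 model suffers from.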
NASA Astrophysics Data System (ADS)
Paul, Subir; Nagesh Kumar, D.
2018-04-01
Hyperspectral (HS) data comprise continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which offer feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A non-parametric dependency measure (MI) based spectral segmentation is proposed instead of a linear and parametric dependency measure, to take care of both linear and nonlinear inter-band dependency in the spectral segmentation of the HS bands. Morphological profiles are then created from the segmented spectral features to assimilate the spatial information in the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with Gaussian kernel and Random Forest (RF), are used for classification of the three most popularly used HS datasets. Results of the numerical experiments show that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets, whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
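A histogram-based MI estimate between neighbouring bands is all the segmentation step needs; the cube, bin count and cut threshold below are invented for illustration.

    import numpy as np

    def mutual_information(x, y, bins=64):
        # histogram estimate of MI (in nats) between two band images
        pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
        pxy = pxy / pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        mask = pxy > 0
        return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

    cube = np.random.rand(64, 64, 100)                    # rows x cols x bands
    mi = np.array([mutual_information(cube[:, :, b], cube[:, :, b + 1])
                   for b in range(cube.shape[2] - 1)])
    cuts = np.where(mi < np.percentile(mi, 10))[0]        # candidate segment boundaries

Each resulting contiguous band segment would then feed its own small autoencoder, in place of one large SAE over all bands.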
Parametric fMRI analysis of visual encoding in the human medial temporal lobe.
Rombouts, S A; Scheltens, P; Machielson, W C; Barkhof, F; Hoogenraad, F G; Veltman, D J; Valk, J; Witter, M P
1999-01-01
A number of functional brain imaging studies indicate that the medial temporal lobe system is crucially involved in encoding new information into memory. However, most studies were based on differences in brain activity between encoding of familiar vs. novel stimuli. To further study the underlying cognitive processes, we applied a parametric design of encoding. Seven healthy subjects were instructed to encode complex color pictures into memory. Stimuli were presented in a parametric fashion at different rates, thus representing different loads of encoding. Functional magnetic resonance imaging (fMRI) was used to assess changes in brain activation. To determine the number of pictures successfully stored into memory, recognition scores were determined afterwards. During encoding, brain activation occurred in the medial temporal lobe, comparable to the results obtained by others. Increasing the encoding load resulted in an increase in the number of successfully stored items. This was reflected in a significant increase in brain activation in the left lingual gyrus, in the left and right parahippocampal gyrus, and in the right inferior frontal gyrus. This study shows that fMRI can detect changes in brain activation during variation of one aspect of higher cognitive tasks. Further, it strongly supports the notion that the human medial temporal lobe is involved in encoding novel visual information into memory.
Gottscho, Andrew D.; Wood, Dustin A.; Vandergast, Amy; Lemos Espinal, Julio A.; Gatesy, John; Reeder, Tod
2017-01-01
Multi-locus nuclear DNA data were used to delimit species of fringe-toed lizards of the Uma notata complex, which are specialized for living in wind-blown sand habitats in the deserts of southwestern North America, and to infer whether Quaternary glacial cycles or Tertiary geological events were important in shaping the historical biogeography of this group. We analyzed ten nuclear loci collected using Sanger sequencing and genome-wide sequence and single-nucleotide polymorphism (SNP) data collected using restriction-associated DNA (RAD) sequencing. A combination of species discovery methods (concatenated phylogenies, parametric and non-parametric clustering algorithms) and species validation approaches (coalescent-based species tree/isolation-with-migration models) were used to delimit species, infer phylogenetic relationships, and estimate effective population sizes, migration rates, and speciation times. Uma notata, U. inornata, U. cowlesi, and an undescribed species from Mohawk Dunes, Arizona (U. sp.) were supported as distinct in the concatenated analyses and by clustering algorithms, and all operational taxonomic units were decisively supported as distinct species by ranking hierarchical nested speciation models with Bayes factors based on coalescent-based species tree methods. However, significant unidirectional gene flow (2NM > 1) from U. cowlesi and U. notata into U. rufopunctata was detected under the isolation-with-migration model. Therefore, we conservatively delimit four species-level lineages within this complex (U. inornata, U. notata, U. cowlesi, and U. sp.), treating U. rufopunctata as a hybrid population (U. notata x cowlesi). Both concatenated and coalescent-based estimates of speciation times support the hypotheses that speciation within the complex occurred during the late Pleistocene, and that the geological evolution of the Colorado River delta during this period was an important process shaping the observed phylogeographic patterns.
Accelerated failure time models for semi-competing risks data in the presence of complex censoring.
Lee, Kyu Ha; Rondeau, Virginie; Haneuse, Sebastien
2017-12-01
Statistical analyses that investigate risk factors for Alzheimer's disease (AD) are often subject to a number of challenges. Some of these challenges arise due to practical considerations regarding data collection such that the observation of AD events is subject to complex censoring including left-truncation and either interval or right-censoring. Additional challenges arise due to the fact that study participants under investigation are often subject to competing forces, most notably death, that may not be independent of AD. Towards resolving the latter, researchers may choose to embed the study of AD within the "semi-competing risks" framework for which the recent statistical literature has seen a number of advances including for the so-called illness-death model. To the best of our knowledge, however, the semi-competing risks literature has not fully considered analyses in contexts with complex censoring, as in studies of AD. This is particularly the case when interest lies with the accelerated failure time (AFT) model, an alternative to the traditional multiplicative Cox model that places emphasis away from the hazard function. In this article, we outline a new Bayesian framework for estimation/inference of an AFT illness-death model for semi-competing risks data subject to complex censoring. An efficient computational algorithm that gives researchers the flexibility to adopt either a fully parametric or a semi-parametric model specification is developed and implemented. The proposed methods are motivated by and illustrated with an analysis of data from the Adult Changes in Thought study, an on-going community-based prospective study of incident AD in western Washington State. © 2017, The International Biometric Society.
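To fix ideas, a bare-bones parametric AFT fit with right censoring only (none of the left truncation, interval censoring or terminal-event structure handled by the paper's Bayesian framework) can be written directly against the Weibull log-likelihood; the data below are simulated.

    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(theta, t, x, event):
        # log T = b0 + b1*x + sigma*W, W standard extreme-value (Weibull AFT)
        b0, b1, log_sigma = theta
        sigma = np.exp(log_sigma)
        z = (np.log(t) - b0 - b1 * x) / sigma
        ll = np.where(event == 1,
                      z - np.exp(z) - np.log(sigma) - np.log(t),  # observed events
                      -np.exp(z))                                 # right-censored
        return -ll.sum()

    rng = np.random.default_rng(0)
    x = rng.normal(size=300)
    t_true = np.exp(0.5 + 0.8 * x + 0.6 * np.log(rng.weibull(1.0, 300)))
    c = rng.exponential(5.0, 300)                     # censoring times
    t, event = np.minimum(t_true, c), (t_true <= c).astype(int)
    fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], args=(t, x, event))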
Learning of perceptual grouping for object segmentation on RGB-D data
Richtsfeld, Andreas; Mörwald, Thomas; Prankl, Johann; Zillich, Michael; Vincze, Markus
2014-01-01
Object segmentation of unknown objects with arbitrary shape in cluttered scenes is an ambitious goal in computer vision that received a great impulse from the introduction of cheap and powerful RGB-D sensors. We introduce a framework for segmenting RGB-D images where data is processed in a hierarchical fashion. After pre-clustering at the pixel level, parametric surface patches are estimated. Different relations between patch pairs, derived from perceptual grouping principles, are calculated, and support vector machine classification is employed to learn perceptual grouping. Finally, we show that object hypothesis generation with Graph-Cut finds a globally optimal solution and prevents wrong grouping. Our framework is able to segment objects even if they are stacked or jumbled in cluttered scenes. We also tackle the problem of segmenting objects when they are partially occluded. The work is evaluated on publicly available object segmentation databases and compared with state-of-the-art object segmentation methods. PMID:24478571
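The grouping-classification step can be sketched compactly: train an SVM on relation features computed for patch pairs, then use its probabilistic output as a grouping confidence (here the features and labels are synthetic placeholders; in the paper the classifier output feeds the Graph-Cut energies).

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    features = rng.random((1000, 4))      # one row per patch pair: hypothetical
                                          # colour/curvature/normal/proximity relations
    same_object = (features[:, 0] + features[:, 3] > 1.0).astype(int)  # toy labels

    clf = SVC(kernel="rbf", probability=True).fit(features, same_object)
    p_same = clf.predict_proba(rng.random((5, 4)))[:, 1]   # grouping confidence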
Graczynski, M R
2000-09-10
Index Copernicus is a ranking system set up by members of the medical community in the region. Five groups of parameters were created - scientific quality, editorial quality, technical quality, circulation, and frequency/market stability - which allow for the generation of such a ranking. The authors of the ranking system are aware of the deficiencies of a parametrical analysis of science; however, they believe the numbers at least set up clear, objective and just rules for all. Index Copernicus can thus be said to meet the primary objectives of the system for which it was created.
Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition
NASA Astrophysics Data System (ADS)
Ilbeigi, Shahab; Chelidze, David
2017-11-01
Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.
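The two decompositions compared in the paper can both be written in a few NumPy/SciPy lines; the snapshot data below are synthetic, and this sketch omits the persistence machinery that selects robust subspaces across parameter sets.

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 2000)
    X = np.column_stack([np.sin(2 * np.pi * f * t) for f in (1.0, 2.3, 4.1)])
    X = X @ rng.normal(size=(3, 12)) + 0.01 * rng.normal(size=(2000, 12))
    V = np.gradient(X, t, axis=0)                 # velocity snapshots

    # POD: dominant right singular vectors of the centred snapshot matrix
    _, s, Wt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    pod_modes = Wt[:3].T

    # SOD: generalized eigenproblem of displacement vs velocity covariances
    lam, Phi = eigh(np.cov(X.T), np.cov(V.T))
    sod_modes = Phi[:, np.argsort(lam)[::-1][:3]] # smooth dominant modes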
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies.
Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong
2017-05-07
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of a kinetic model to PET time-activity curves, TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans—each containing 1/8th of the total number of events—were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml · min−1 · ml−1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level.
In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance. PMID:28379843
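For reference, the indirect method for a single voxel reduces to fitting the one-tissue compartment model to a reconstructed TAC by weighted least squares; the sketch below omits the left/right-ventricle spillover terms of the paper's model and uses invented frame times, input function and weights.

    import numpy as np
    from scipy.optimize import curve_fit

    dt = 5.0 / 60.0                                # uniform frame length (min), hypothetical
    t = np.arange(16) * dt + dt / 2                # frame mid-times
    Ca = 10.0 * t * np.exp(-3.0 * t)               # image-derived input function (mock)

    def one_tissue(t, K1, k2):
        # C_T(t) = K1 * exp(-k2 t) convolved with C_a(t), discretized
        return K1 * np.convolve(np.exp(-k2 * t), Ca)[:t.size] * dt

    rng = np.random.default_rng(0)
    tac = one_tissue(t, 0.8, 0.4) + 0.02 * rng.normal(size=t.size)
    w = np.full(t.size, 1.0)                       # frame weights (e.g. duration/variance)
    (K1, k2), _ = curve_fit(one_tissue, t, tac, p0=(0.5, 0.5), sigma=1.0 / np.sqrt(w))

The direct method replaces this per-voxel fit with a single MAP optimization over the sinogram likelihood, which is where the bias-variance advantage comes from.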
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in-vivo studies
Petibon, Yoann; Rakvongthai, Yothin; Fakhri, Georges El; Ouyang, Jinsong
2017-01-01
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves -TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in-vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans - each containing 1/8th of the total number of events - were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard Ordered Subset Expectation Maximization (OSEM) reconstruction algorithm on one side, and the One-Step Late Maximum a Posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (mL.min−1.mL−1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. 
In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in-vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance. PMID:28379843
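As an aside for readers unfamiliar with the indirect pipeline described above, the following minimal Python sketch illustrates voxel-wise weighted least-squares fitting of a one-tissue compartment model with left/right-ventricle spillover. It is an illustrative reading of the model described in the abstract, not the authors' code: the input curves are synthetic, and the function names, initial guesses, and bounds are hypothetical.

    import numpy as np
    from scipy.optimize import least_squares

    def one_tissue_tac(t, K1, k2, f_lv, f_rv, c_lv, c_rv):
        # C_model(t) = (1 - f_lv - f_rv) * [K1 * exp(-k2*t) convolved with C_lv](t)
        #              + f_lv * C_lv(t) + f_rv * C_rv(t)
        # (the image-derived LV curve doubles as the arterial input)
        dt = t[1] - t[0]
        tissue = K1 * np.convolve(np.exp(-k2 * t), c_lv)[: len(t)] * dt
        return (1.0 - f_lv - f_rv) * tissue + f_lv * c_lv + f_rv * c_rv

    def fit_voxel(t, tac, c_lv, c_rv, weights):
        # Weighted least-squares fit of (K1, k2, f_lv, f_rv) for one voxel TAC.
        def residuals(p):
            return weights * (one_tissue_tac(t, *p, c_lv, c_rv) - tac)
        p0 = np.array([0.5, 0.1, 0.2, 0.1])            # hypothetical initial guess
        bounds = ([0, 0, 0, 0], [5.0, 2.0, 1.0, 1.0])  # hypothetical bounds
        return least_squares(residuals, p0, bounds=bounds).x

    t = np.linspace(0.0, 5.0, 16)                      # 16 frames over ~5 min
    c_lv = 10.0 * t * np.exp(-1.5 * t)                 # synthetic LV input curve
    c_rv = np.roll(c_lv, 1); c_rv[0] = 0.0             # crude synthetic RV curve
    clean = one_tissue_tac(t, 0.9, 0.15, 0.2, 0.05, c_lv, c_rv)
    noisy = clean + np.random.default_rng(0).normal(0.0, 0.05, t.size)
    print(fit_voxel(t, noisy, c_lv, c_rv, np.ones_like(t)))

The direct method replaces this two-step pipeline with a single MAP objective over the sinogram data, which is why its noise properties differ.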
Artificial Intelligence Methods in Pursuit Evasion Differential Games
1990-07-30
objectives, sometimes with fuzzy ones. Classical optimization, control or game-theoretic methods are insufficient for their resolution...the Analytic Hierarchy Process originated by T.L. Saaty of the Wharton School. The Analytic Hierarchy Process (AHP) is a general theory of
Problems in Bearings and Lubrication
1982-08-01
Dayton, Group Leader, Lubrication Systems. SUMMARY: The objective of this analytical and experimental program is to develop a long-life bearing for...established on the basis of experimental, analytical, manufacturing, and production experience. The parameters of the Group C bearings which were not...supplied experimental hardware which has been successfully evaluated in tests to speeds of 3.0 MDN. A total of 10 Group A parametric bearings were
3D Facial Pattern Analysis for Autism
2010-07-01
each individual's data were scaled by the geometric mean of all possible linear distances between landmarks, following. The first two principal...over traditional template matching in that it can represent geometrical and non-geometrical changes of an object in the parametric template space...a set of vertex templates can be generated from the root template by geometric or non-geometric transformation. Let M_t, t = 1, ..., M, be M normalized vertex
The Scaling of Loss Pathways and Heat Transfer in Small Scale Internal Combustion Engines
2016-09-16
less than 5%. Two factors drove the high short-circuiting observed in the 405 studied engines: excess fresh charge delivered to the engines beyond...losses do not begin to increase substantially until engine displacement decreases below 10 cm3. The objective concluded with a parametric study...
Object recognition with hierarchical discriminant saliency networks.
Han, Sunhyoung; Vasconcelos, Nuno
2014-01-01
The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. As a model of neural computation, the HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a convolutional neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. The performance of the network in saliency and object recognition tasks is compared to that of models from the biological and computer vision literatures. This demonstrates benefits for all the functional enhancements of the HDSN, the class tuning inherent to discriminant saliency, and saliency layers based on templates of increasing target selectivity and invariance. Altogether, these experiments suggest that there are non-trivial benefits in integrating attention and recognition.
Modeling of Fuel Film Cooling on Chamber Hot Wall
2014-07-01
downstream, when the film has been depleted of its cooling and coking capacities, a second slot is needed to inject fresh cool fuel. All of these...pyrolysis and oxidation. As discussed in the introductory section, sooting and coking are notoriously complex topics. Well-validated global...accurate models for soot formation and deposition. Instead, the potential impact of the coke layer is evaluated parametrically by representing the
Drawing Dynamical and Parameters Planes of Iterative Families and Methods
Chicharro, Francisco I.
2013-01-01
The complex dynamical analysis of the parametric fourth-order Kim's iterative family is performed on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide us with excellent schemes (or dreadful ones). PMID:24376386
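As a flavor of the kind of code the abstract refers to, here is a minimal Python analogue of a dynamical-plane drawing routine. Newton's method on p(z) = z^2 - 1 stands in for a member of the parametric family (the iteration function of Kim's family is not reproduced here), and the grid size, iteration cap, and tolerance are arbitrary choices.

    import numpy as np
    import matplotlib.pyplot as plt

    def dynamical_plane(n=600, max_iter=40, tol=1e-6):
        # Each pixel is an initial point z0; color it by the root of
        # p(z) = z^2 - 1 that the iteration converges to (0 = no convergence).
        x = np.linspace(-2.0, 2.0, n)
        z = x[None, :] + 1j * x[:, None]
        for _ in range(max_iter):
            z = z - (z**2 - 1.0) / (2.0 * z + 1e-30)  # Newton step, guarded at z = 0
        basin = np.zeros(z.shape, dtype=int)
        for k, root in enumerate([1.0, -1.0], start=1):
            basin[np.abs(z - root) < tol] = k
        return basin

    plt.imshow(dynamical_plane(), extent=(-2, 2, -2, 2), origin="lower")
    plt.title("Basins of attraction on the dynamical plane")
    plt.show()

A parameter plane is drawn the same way, except that each pixel fixes a parameter value of the family and the iteration is started at a free critical point.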
The Lewis Chemical Equilibrium Program with parametric study capability
NASA Technical Reports Server (NTRS)
Sevigny, R.
1981-01-01
The program was developed to determine chemical equilibrium in complex systems. Using a free-energy minimization technique, the program permits calculations such as: chemical equilibrium for assigned thermodynamic states; theoretical rocket performance for both equilibrium and frozen compositions during expansion; incident and reflected shock properties; and Chapman-Jouguet detonation properties. It is shown that the same program can handle solid coal in an entrained-flow coal gasification problem.
NASA Astrophysics Data System (ADS)
Yang, X.; Castleman, A. W., Jr.
1990-08-01
The kinetics and mechanisms of the reactions of Na+·(X)n (n = 0-3; X = water, ammonia, and methanol) with CH3CN, CH3COCH3, CH3CHO, CH3COOH, CH3COOCH3, NH3, CH3OH, and CH3-O-C2H4-O-CH3 (DMOE) were studied at ambient temperature under different pressures. All of the switching (substitution) reactions proceed at near-collision rate and show little dependence on the flow tube pressure, the nature and size of the ligand, or the type of core ions. Interestingly, all of the measured rate constants agree well with predictions based on the parametrized trajectory calculations of Su and Chesnavich [J. Chem. Phys. 76, 5183 (1982)]. The reactions of the bare sodium ion with all neutrals proceed via a three-body association mechanism, and the measured rate constants cover a large range, from a slow association reaction with NH3 to a near-collision rate with DMOE. The lifetimes and the dissociation rate constants of the intermediate complexes, deduced using the parametrized trajectory results combined with the experimentally determined rates, compare fairly well with predictions based on RRKM theory. The calculations also account for the large isotope effect observed for the clustering of ND3 and NH3 to Na+.
NASA Astrophysics Data System (ADS)
Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.
2017-06-01
A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed hydrates is not yet a fully solved problem. Limited quantitative laboratory data lead to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with a large number of grid elements, 3D systems, or systems with complex geometric configurations. In this work, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. The mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.
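The fit-to-a-convenient-form workflow can be sketched compactly. In the Python sketch below, the tabulated equilibrium curve is a synthetic stand-in for the output of a hydrate properties package, and the Clausius-Clapeyron-type form ln P = A + B/T is chosen purely for illustration; the paper's actual functional forms are not reproduced here.

    import numpy as np

    # Tabulate an equilibrium pressure-temperature curve (synthetic stand-in
    # for data generated by an off-the-shelf hydrate properties package).
    T = np.linspace(274.0, 290.0, 50)              # K
    P = np.exp(38.98 - 8533.8 / T)                 # kPa, synthetic "data"

    # Fit the convenient form ln P_eq = A + B / T (linear in 1/T).
    B, A = np.polyfit(1.0 / T, np.log(P), 1)

    def p_eq(temperature_K):
        # Fast parametric equilibrium pressure (kPa) for use inside a
        # reservoir simulator, evaluated cheaply at every grid element.
        return np.exp(A + B / temperature_K)

    print(p_eq(280.0))

The point of the exercise is speed: a closed-form fit can be evaluated millions of times per time step, which tabulated ab initio or statistical mechanical models cannot.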
Zamboni, Giovanna; Gozzi, Marta; Krueger, Frank; Duhamel, Jean-René; Sirigu, Angela; Grafman, Jordan
2009-01-01
Politics is a manifestation of the uniquely human ability to debate, decide, and reach consensus on decisions affecting large groups over long durations of time. Recent neuroimaging studies on politics have focused on the association between brain regions and specific political behaviors by adopting party or ideological affiliation as a criterion to classify either experimental stimuli or subjects. However, it is unlikely that complex political beliefs (i.e., "the government should protect freedom of speech") are evaluated only on a liberal-to-conservative criterion. Here we used multidimensional scaling and parametric functional magnetic resonance imaging to identify which criteria/dimensions people use to structure complex political beliefs and which brain regions are concurrently activated. We found that three independent dimensions explained the variability of a set of statements expressing political beliefs and that each dimension was reflected in a distinctive pattern of neural activation: individualism (medial prefrontal cortex and temporoparietal junction), conservatism (dorsolateral prefrontal cortex), and radicalism (ventral striatum and posterior cingulate). The structures we identified are also known to be important in self-other processing, social decision-making in ambivalent situations, and reward prediction. Our results extend current knowledge on the neural correlates of the structure of political beliefs, a fundamental aspect of the human ability to coalesce into social entities.
Cozzi, Bruno; De Giorgio, Andrea; Peruffo, A; Montelli, S; Panin, M; Bombardi, C; Grandis, A; Pirone, A; Zambenedetti, P; Corain, L; Granato, Alberto
2017-08-01
The architecture of the neocortex classically consists of six layers, based on cytological criteria and on the layout of intra/interlaminar connections. Yet, the comparison of cortical cytoarchitectonic features across different species proves overwhelmingly difficult, due to the lack of a reliable model to analyze the connection patterns of neuronal ensembles forming the different layers. We first defined a set of suitable morphometric cell features, obtained in digitized Nissl-stained sections of the motor cortex of the horse, chimpanzee, and crab-eating macaque. We then modeled them using a quite general non-parametric data representation model, showing that the assessment of neuronal cell complexity (i.e., how a given cell differs from its neighbors) can be performed using a suitable measure of statistical dispersion such as the mean absolute deviation (MAD). Along with the non-parametric combination and permutation methodology, application of the MAD allowed us not only to estimate, but also to compare and rank, the motor cortical complexity across different species. As to the instances presented in this paper, we show that the pyramidal layers of the motor cortex of the horse are far more irregular than those of primates. This feature could be related to the different organizations of the motor system in monodactylous mammals.
NASA Astrophysics Data System (ADS)
Wang, Zhen-yu; Yu, Jian-cheng; Zhang, Ai-qun; Wang, Ya-xing; Zhao, Wen-tao
2017-12-01
Combining high-precision numerical analysis methods with optimization algorithms to systematically explore a design space has become an important topic in modern design methods. During the design process of an underwater glider's flying-wing structure, a surrogate model is introduced to decrease the computation time of a high-precision analysis. By these means, the contradiction between precision and efficiency is resolved effectively. Based on parametric geometry modeling, mesh generation and computational fluid dynamics analysis, a surrogate model is constructed by adopting design of experiments (DOE) theory to solve the multi-objective design optimization problem of the underwater glider. The procedure of surrogate model construction is presented, and the Gaussian kernel function is specifically discussed. The Particle Swarm Optimization (PSO) algorithm is applied to hydrodynamic design optimization. The hydrodynamic performance of the optimized flying-wing-structure underwater glider increases by 9.1%.
Extraction of decision rules via imprecise probabilities
NASA Astrophysics Data System (ADS)
Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.
2017-05-01
Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a mathematical parametric model, and the other based on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.
Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization
Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.
2014-01-01
Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
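For concreteness, the two forms that emerged as the most common best-fitting models have standard closed-form definitions, sketched here in Python; the parameter values are arbitrary examples, not estimates from the study.

    import numpy as np

    # Standard definitions of two probability weighting functions:
    # Prelec-2:            w(p) = exp(-delta * (-ln p)**gamma)
    # Linear in Log Odds:  w(p) = delta * p**gamma / (delta * p**gamma + (1-p)**gamma)
    def prelec2(p, gamma, delta):
        return np.exp(-delta * (-np.log(p)) ** gamma)

    def lin_log_odds(p, gamma, delta):
        return delta * p**gamma / (delta * p**gamma + (1.0 - p) ** gamma)

    p = np.linspace(0.01, 0.99, 99)
    print(prelec2(p, 0.7, 1.0)[:3])
    print(lin_log_odds(p, 0.7, 1.0)[:3])

Their qualitative similarity (both produce the familiar inverse-S shape for typical parameters) is exactly what makes adaptive design optimization necessary to tell them apart.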
Results of the JIMO Follow-on Destinations Parametric Studies
NASA Technical Reports Server (NTRS)
Noca, Muriel A.; Hack, Kurt J.
2005-01-01
NASA's proposed Jupiter Icy Moon Orbiter (JIMO) mission, currently in conceptual development, is to be the first of a series of highly capable Nuclear Electric Propulsion (NEP) science-driven missions. To understand the implications of a multi-mission capability requirement on the JIMO vehicle and mission, the NASA Prometheus Program initiated a set of parametric high-level studies to be followed by a series of more in-depth studies. The JIMO potential follow-on destinations identified include a Saturn system tour, a Neptune system tour, a Kuiper Belt Objects rendezvous, an Interstellar Precursor mission, a Multiple Asteroid Sample Return and a Comet Sample Return. This paper shows that the baseline JIMO reactor and design envelope can satisfy five of the six follow-on destinations. Flight time to these destinations can be significantly reduced by increasing the launch energy and/or by inserting gravity assists into the heliocentric phase.
Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker
2012-08-01
Single-channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the usage of the similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms perform better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. Data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
Parametric Weight Study of Cryogenic Metallic Tanks for the "Bimodal" NTR Mars Vehicle Concept
NASA Astrophysics Data System (ADS)
Kosareo, Daniel N.; Roche, Joseph M.
2006-01-01
A parametric weight assessment of large cryogenic metallic tanks was conducted using the design optimization capabilities in the ANSYS® finite element analysis code. This analysis was performed to support the sizing of a "bimodal" nuclear thermal rocket (NTR) Mars vehicle concept developed at the NASA Glenn Research Center. The tank design study was driven by two load conditions: an in-line, "Shuttle-derived" heavy-lift launch with the tanks filled and pressurized, and a burst-test pressure. The main tank structural arrangement is a state-of-the-art metallic construction which uses an aluminum-lithium alloy stiffened internally with a ring and stringer framework. The tanks must carry liquid hydrogen in separate launches to orbit where all vehicle components will dock and mate. All tank designs stayed within the available mass and payload volume limits of both the in-line heavy-lift and Shuttle-derived launch vehicles. Weight trends were developed over a range of tank lengths with varying stiffener cross-sections and tank wall thicknesses. The object of this parametric study was to verify that the proper mass was allocated for the tanks in the overall vehicle sizing model. This paper summarizes the tank weights over a range of tank lengths.
Zhang, Kai; Cao, Libo; Wang, Yulong; Hwang, Eunjoo; Reed, Matthew P; Forman, Jason; Hu, Jingwen
2017-10-01
Field data analyses have shown that obesity significantly increases the occupant injury risks in motor vehicle crashes, but the injury assessment tools for people with obesity are largely lacking. The objectives of this study were to use a mesh morphing method to rapidly generate parametric finite element models with a wide range of obesity levels and to evaluate their biofidelity against impact tests using postmortem human subjects (PMHS). Frontal crash tests using three PMHS seated in a vehicle rear seat compartment with body mass index (BMI) from 24 to 40 kg/m² were selected. To develop the human models matching the PMHS geometry, statistical models of external body shape, rib cage, pelvis, and femur were applied to predict the target geometry using age, sex, stature, and BMI. A mesh morphing method based on radial basis functions was used to rapidly morph a baseline human model into the target geometry. The model-predicted body excursions and injury measures were compared to the PMHS tests. Comparisons of occupant kinematics and injury measures between the tests and simulations showed reasonable correlations across the wide range of BMI levels. The parametric human models have the capability to account for the obesity effects on the occupant impact responses and injury risks. © 2017 The Obesity Society.
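A minimal Python sketch of the RBF-based mesh morphing step described above: landmark displacements are interpolated to all mesh nodes with a thin-plate-spline interpolant. The geometry here is random toy data, and SciPy's RBFInterpolator stands in for whatever implementation the authors used.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def morph_mesh(nodes, src_landmarks, dst_landmarks):
        # Interpolate the landmark displacement field to every mesh node
        # with thin-plate-spline radial basis functions, then apply it.
        disp = dst_landmarks - src_landmarks
        field = RBFInterpolator(src_landmarks, disp, kernel="thin_plate_spline")
        return nodes + field(nodes)

    # Toy illustration (in the paper, statistical shape models predict the
    # target landmarks from age, sex, stature, and BMI).
    rng = np.random.default_rng(1)
    nodes = rng.uniform(-1.0, 1.0, size=(1000, 3))   # baseline FE mesh nodes
    src = rng.uniform(-1.0, 1.0, size=(20, 3))       # landmarks on the baseline
    dst = src + 0.05 * rng.normal(size=src.shape)    # target landmark positions
    print(morph_mesh(nodes, src, dst).shape)         # (1000, 3)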
Robust Control Design for Uncertain Nonlinear Dynamic Systems
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Crespo, Luis G.; Andrews, Lindsey; Giesy, Daniel P.
2012-01-01
Robustness to parametric uncertainty is fundamental to successful control system design and as such it has been at the core of many design methods developed over the decades. Despite its prominence, most of the work on robust control design has focused on linear models and uncertainties that are non-probabilistic in nature. Recently, researchers have acknowledged this disparity and have been developing theory to address a broader class of uncertainties. This paper presents an experimental application of robust control design for a hybrid class of probabilistic and non-probabilistic parametric uncertainties. The experimental apparatus is based upon the classic inverted pendulum on a cart. The physical uncertainty is realized by a known additional lumped mass at an unknown location on the pendulum. This unknown location has the effect of substantially altering the nominal frequency and controllability of the nonlinear system, and in the limit has the capability to make the system neutrally stable and uncontrollable. Another uncertainty to be considered is a direct current motor parameter. The control design objective is to design a controller that satisfies stability, tracking error, control power, and transient behavior requirements for the largest range of parametric uncertainties. This paper presents an overview of the theory behind the robust control design methodology and the experimental results.
An Interactive Software for Conceptual Wing Flutter Analysis and Parametric Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1996-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate the flutter instability boundary of a flexible cantilever wing, when well-defined structural and aerodynamic data are not available, and then study the effect of change in Mach number, dynamic pressure, torsional frequency, sweep, mass ratio, aspect ratio, taper ratio, center of gravity, and pitch inertia, to guide the development of the concept. The software was developed for Macintosh or IBM compatible personal computers, on MathCad application software with integrated documentation, graphics, data base and symbolic mathematics. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on torsional stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch inertia radius of gyration. The parametric plots were compiled in a Vought Corporation report from a vast data base of past experiments and wind-tunnel tests. The computer program was utilized for flutter analysis of the outer wing of a Blended-Wing-Body concept, proposed by McDonnell Douglas Corp. Using a set of assumed data, preliminary flutter boundary and flutter dynamic pressure variation with altitude, Mach number and torsional stiffness were determined.
Parametric study of the swimming performance of a fish robot propelled by a flexible caudal fin.
Low, K H; Chong, C W
2010-12-01
In this paper, we aim to study the swimming performance of fish robots by using a statistical approach. A fish robot employing a carangiform swimming mode had been used as an experimental platform for the performance study. The experiments conducted aim to investigate the effect of various design parameters on the thrust capability of the fish robot with a flexible caudal fin. The controllable parameters associated with the fin include frequency, amplitude of oscillation, aspect ratio and the rigidity of the caudal fin. The significance of these parameters was determined in the first set of experiments by using a statistical approach. A more detailed parametric experimental study was then conducted with only those significant parameters. As a result, the parametric study could be completed with a reduced number of experiments and time spent. With the obtained experimental result, we were able to understand the relationship between various parameters and a possible adjustment of parameters to obtain a higher thrust. The proposed statistical method for experimentation provides an objective and thorough analysis of the effects of individual or combinations of parameters on the swimming performance. Such an efficient experimental design helps to optimize the process and determine factors that influence variability.
Fagerland, Morten W; Sandvik, Leiv; Mowinckel, Petter
2011-04-13
The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for analysis and presentation of results for discrete numerical variables. Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
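The recommended analysis is straightforward to reproduce; the sketch below implements the Welch U test (the T test with unequal variances) and its associated confidence interval in Python. The toy samples are made up, and the helper name is hypothetical.

    import numpy as np
    from scipy import stats

    def welch_test_and_ci(x, y, alpha=0.05):
        # Welch T test plus the matching confidence interval for the
        # difference between means, with Welch-Satterthwaite degrees of freedom.
        x, y = np.asarray(x, float), np.asarray(y, float)
        t, p = stats.ttest_ind(x, y, equal_var=False)
        vx, vy = x.var(ddof=1) / x.size, y.var(ddof=1) / y.size
        df = (vx + vy) ** 2 / (vx**2 / (x.size - 1) + vy**2 / (y.size - 1))
        half = stats.t.ppf(1.0 - alpha / 2.0, df) * np.sqrt(vx + vy)
        diff = x.mean() - y.mean()
        return diff, (diff - half, diff + half), p

    # Number-of-events-per-individual data with outcomes in {0, 1, 2, 3}
    a = [0, 1, 1, 2, 0, 3, 1, 0, 2, 1]
    b = [0, 0, 1, 0, 2, 1, 0, 0, 1, 0]
    print(welch_test_and_ci(a, b))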
How to Make Data a Blessing to Parametric Uncertainty Quantification and Reduction?
NASA Astrophysics Data System (ADS)
Ye, M.; Shi, X.; Curtis, G. P.; Kohler, M.; Wu, J.
2013-12-01
From a Bayesian point of view, the probabilities of model parameters and predictions are conditioned on the data used for parameter inference and prediction analysis. It is critical to use appropriate data for quantifying parametric uncertainty and its propagation to model predictions. However, data are always limited and imperfect. When a dataset cannot properly constrain the model parameters, it may lead to inaccurate uncertainty quantification. While in this case data appear to be a curse to uncertainty quantification, a comprehensive modeling analysis may help to understand the cause and characteristics of parametric uncertainty and thus turn data into a blessing. In this study, we illustrate the impacts of data on uncertainty quantification and reduction using the example of a surface complexation model (SCM) developed to simulate uranyl (U(VI)) adsorption. The model includes two adsorption sites, referred to as strong and weak sites. The amount of uranium adsorption on these sites determines both the mean arrival time and the long tail of the breakthrough curves. There is one reaction on the weak site but two reactions on the strong site. The unknown parameters include the fractions of the total surface site density of the two sites and the surface complex formation constants of the three reactions. A total of seven experiments were conducted with different geochemical conditions to estimate these parameters. The experiments with low initial concentration of U(VI) result in a large amount of parametric uncertainty. A modeling analysis shows that this is because those experiments cannot distinguish the relative adsorption affinity of the strong and weak sites for uranium. Therefore, experiments with high initial concentration of U(VI) are needed, because in these experiments the strong site is nearly saturated and the weak site can be determined. The experiments with high initial concentration of U(VI) are a blessing to uncertainty quantification, and the experiments with low initial concentration help modelers turn a curse into a blessing. The data impacts on uncertainty quantification and reduction are quantified using probability density functions of model parameters obtained from Markov chain Monte Carlo simulation using the DREAM algorithm. This study provides insights into model calibration, uncertainty quantification, experiment design, and data collection in groundwater reactive transport modeling and other environmental modeling.
A new approach for measuring power spectra and reconstructing time series in active galactic nuclei
NASA Astrophysics Data System (ADS)
Li, Yan-Rong; Wang, Jian-Min
2018-05-01
We provide a new approach to measure power spectra and reconstruct time series in active galactic nuclei (AGNs) based on the fact that the Fourier transform of AGN stochastic variations is a series of complex Gaussian random variables. The approach parametrizes a stochastic series in the frequency domain and transforms it back to the time domain to fit the observed data. The parameters and their uncertainties are derived in a Bayesian framework, which also allows us to compare the relative merits of different power spectral density models. The well-developed fast Fourier transform algorithm together with parallel computation enables an acceptable time complexity for the approach.
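The generative idea, that the Fourier coefficients of the stochastic series are independent complex Gaussian variables scaled by the power spectral density, can be sketched in a few lines of Python (essentially the Timmer & Koenig 1995 recipe; normalization conventions and the treatment of the Nyquist term are glossed over).

    import numpy as np

    def simulate_from_psd(psd_func, n, dt, rng=None):
        # Draw a Gaussian time series whose power spectrum follows psd_func
        # by assigning each positive frequency an independent complex
        # Gaussian coefficient with variance proportional to the PSD.
        rng = np.random.default_rng() if rng is None else rng
        freqs = np.fft.rfftfreq(n, dt)[1:]         # positive frequencies
        amp = np.sqrt(psd_func(freqs) / 2.0)
        coeffs = amp * (rng.normal(size=freqs.size)
                        + 1j * rng.normal(size=freqs.size))
        spectrum = np.concatenate(([0.0], coeffs)) # zero-mean series
        return np.fft.irfft(spectrum, n)

    # Example: a single power-law PSD, S(f) ~ f^-2, typical AGN red noise
    lc = simulate_from_psd(lambda f: f**-2.0, n=1024, dt=1.0,
                           rng=np.random.default_rng(42))
    print(lc.shape, lc.std())

Fitting inverts this picture: the frequency-domain coefficients become free parameters constrained jointly by the PSD model and the observed time series.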
Binquet, C; Abrahamowicz, M; Mahboubi, A; Jooste, V; Faivre, J; Bonithon-Kopp, C; Quantin, C
2008-12-30
Flexible survival models, which avoid assumptions about proportionality of hazards (PH) or linearity of the effects of continuous covariates, bring the issues of model selection to a new level of complexity. Each 'candidate covariate' requires inter-dependent decisions regarding (i) its inclusion in the model, and representation of its effects on the log hazard as (ii) either constant over time or time-dependent (TD) and, for continuous covariates, (iii) either loglinear or non-loglinear (NL). Moreover, 'optimal' decisions for one covariate depend on the decisions regarding others. Thus, an efficient model-building strategy is necessary. We carried out an empirical study of the impact of the model selection strategy on the estimates obtained in flexible multivariable survival analyses of prognostic factors for mortality in 273 gastric cancer patients. We used 10 different strategies to select alternative multivariable parametric as well as spline-based models, allowing flexible modeling of non-parametric (TD and/or NL) effects. We employed 5-fold cross-validation to compare the predictive ability of alternative models. All flexible models indicated significant non-linearity and changes over time in the effect of age at diagnosis. Conventional 'parametric' models suggested the lack of a period effect, whereas more flexible strategies indicated a significant NL effect. Cross-validation confirmed that the flexible models predicted mortality better. The resulting differences in the 'final model' selected by various strategies also had an impact on the risk prediction for individual subjects. Overall, our analyses underline (a) the importance of accounting for significant non-parametric effects of covariates and (b) the need for developing accurate model selection strategies for flexible survival analyses. Copyright 2008 John Wiley & Sons, Ltd.
Marmarelis, Vasilis Z.; Berger, Theodore W.
2009-01-01
Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform the presynaptic signals into postsynaptic signals. In order to synergistically use the two approaches, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input–output data. Results show that the non-parametric models accurately and efficiently replicate the input–output transformations of the parametric models. Volterra kernels provide a general and quantitative representation of the STP. PMID:18506609
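As a concrete picture of what a non-parametric (Volterra) model computes, the sketch below evaluates a second-order discrete Volterra series for a spike-train input. The kernels are toy stand-ins, not the STP kernels estimated in the paper.

    import numpy as np

    def volterra_response(x, k0, k1, k2):
        # y[n] = k0 + sum_i k1[i] x[n-i] + sum_ij k2[i,j] x[n-i] x[n-j]
        # transforms the presynaptic train x into the postsynaptic response.
        M = len(k1)
        y = np.full(len(x), k0, dtype=float)
        for n in range(len(x)):
            past = x[max(0, n - M + 1): n + 1][::-1]   # x[n], x[n-1], ...
            m = len(past)
            y[n] += k1[:m] @ past
            y[n] += past @ k2[:m, :m] @ past
        return y

    # Toy kernels standing in for those estimated from broadband data
    M = 20
    k1 = np.exp(-np.arange(M) / 5.0)                   # decaying first-order kernel
    k2 = -0.05 * np.outer(k1, k1)                      # depressing pairwise term
    x = (np.random.default_rng(3).random(200) < 0.1).astype(float)
    print(volterra_response(x, 0.0, k1, k2)[:5])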
A review of parametric approaches specific to aerodynamic design process
NASA Astrophysics Data System (ADS)
Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li
2018-04-01
Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches cover a large design space with a few variables. Parametric methods in common use today are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares them to assess their capabilities for airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is limited, and the most popular one is the Free-form Deformation method. Methods extended from two-dimensional parametric approaches have promising prospects in aircraft modeling. Since different parametric methods differ in their characteristics, the actual design process requires a flexible choice among them to suit the subsequent optimization procedure.
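Of the two-dimensional methods surveyed, the Class/Shape function transformation (CST) has a particularly compact closed form, sketched here in Python from its standard definition (a Bernstein-polynomial shape function multiplied by the class function x^N1 (1-x)^N2); the weights below are arbitrary illustrative values, not an airfoil from the paper.

    import numpy as np
    from math import comb

    def cst_surface(x, weights, n1=0.5, n2=1.0, te_gap=0.0):
        # y(x) = C(x) * S(x) + x * te_gap, with class function
        # C(x) = x**n1 * (1 - x)**n2 (round nose, sharp trailing edge)
        # and Bernstein-polynomial shape function S(x).
        n = len(weights) - 1
        C = x**n1 * (1.0 - x) ** n2
        S = sum(w * comb(n, i) * x**i * (1.0 - x) ** (n - i)
                for i, w in enumerate(weights))
        return C * S + x * te_gap

    x = np.linspace(0.0, 1.0, 101)
    upper = cst_surface(x, [0.17, 0.16, 0.20, 0.17])     # 4 design variables
    lower = cst_surface(x, [-0.14, -0.12, -0.08, -0.05])
    print(float(upper.max()), float(lower.min()))

The small number of weights is what gives CST (and the SVD-based approach) its appeal: a large design space reachable with a handful of optimization variables.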
Optimal design of high-rise buildings with respect to fundamental eigenfrequency
NASA Astrophysics Data System (ADS)
Alavi, Arsalan; Rahgozar, Reza; Torkzadeh, Peyman; Hajabasi, Mohamad Ali
2017-12-01
In modern tall and slender structures, dynamic responses, rather than strength criteria, are usually the dominant design requirements. Resonance is often a threatening phenomenon for such structures. To avoid this problem, the fundamental eigenfrequency, or an eigenfrequency of higher order, should be maximized. An optimization problem with this objective is constructed in this paper and applied to a high-rise building. Using a variational method, the objective function is maximized, leading to a particular profile for the first mode shape. Based on this preselected profile, a parametric formulation for flexural stiffness is calculated. Because some stiffness values approach zero, the obtained formulation is modified by adding a lower-bound constraint. To handle this constraint some new parameters are introduced, thereby allowing construction of a model relating the unknown parameters. Based on this mathematical model, a design algorithmic procedure is presented. For the sake of convenience, a single-input design graph is presented as well. The main merit of the proposed method, compared to previous research, is its hand-calculation aspect, suitable for parametric studies and sensitivity analysis. As the presented formulations are dimensionless, they are applicable in any dimensional system. The accuracy and practicality of the proposed method are illustrated at the end by applying it to a real-life structure.
Paul, Sarbajit; Chang, Junghwan
2017-01-01
This paper presents a design approach for a magnetic sensor module to detect mover position using proper orthogonal decomposition-dynamic mode decomposition (POD-DMD)-based nonlinear parametric model order reduction (PMOR). The parameterization of the sensor module is achieved by using the multipolar moment matching method. Several geometric variables of the sensor module are considered while developing the parametric study. The operation of the sensor module is based on the principle of airgap flux density distribution detection by the Hall effect IC. Therefore, the design objective is to achieve a peak flux density (PFD) greater than 0.1 T and total harmonic distortion (THD) less than 3%. To fulfill the constraint conditions, the specifications for the sensor module are obtained using the POD-DMD-based reduced model. The POD-DMD-based reduced model provides a platform to analyze a large number of design models very quickly, with less computational burden. Finally, with the final specifications, the experimental prototype is designed and tested. Two different modes, 90° and 120°, are used to obtain the position information of the linear motor mover. The position information thus obtained is compared with that of the linear scale data, used as a reference signal. The position information obtained using the 120° mode has a standard deviation of 0.10 mm from the reference linear scale signal, whereas the 90° mode position signal shows a deviation of 0.23 mm from the reference. The deviation in the output arises due to the mechanical tolerances introduced into the specification during the manufacturing process. This provides a scope for coupling reliability-based design optimization into the design process as a future extension. PMID:28671580
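The POD half of the POD-DMD reduction admits a compact sketch: collect field snapshots over design variations, take an SVD, and truncate by energy content. The snapshot data below are synthetic and the energy threshold is an arbitrary choice; the DMD stage and the multipolar moment matching parameterization are not shown.

    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        # Proper orthogonal decomposition of a snapshot matrix (one field
        # solution per column): SVD, then keep the smallest basis capturing
        # the requested fraction of snapshot energy.
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        cum = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(cum, energy)) + 1
        return U[:, :r], s[:r]

    # Toy snapshots: flux-density-like fields sampled over design variations
    rng = np.random.default_rng(7)
    modes = np.sin(np.outer(np.linspace(0.0, np.pi, 500), np.arange(1, 4)))
    snaps = modes @ rng.normal(size=(3, 40))       # 40 parameter samples
    basis, sv = pod_basis(snaps)
    print(basis.shape)                             # (500, r) with r <= 3

Design candidates are then evaluated in the r-dimensional reduced coordinates, which is what makes sweeping a large number of geometric variants cheap.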
Gulati, Shelly; Stubblefield, Ashley A; Hanlon, Jeremy S; Spier, Chelsea L; Stringfellow, William T
2014-03-01
Measuring the discharge of diffuse pollution from agricultural watersheds presents unique challenges. Flows in agricultural watersheds, particularly in Mediterranean climates, can be predominately irrigation runoff and exhibit large diurnal fluctuation in both volume and concentration. Flow and pollutant concentrations in these smaller watersheds dominated by human activity do not conform to a normal distribution and it is not clear if parametric methods are appropriate or accurate for load calculations. The objective of this study was to compare the accuracy of five load estimation methods to calculate pollutant loads from agricultural watersheds. Calculation of loads using results from discrete (grab) samples was compared with the true-load computed using in situ continuous monitoring measurements. A new method is introduced that uses a non-parametric measure of central tendency (the median) to calculate loads (median-load). The median-load method was compared to more commonly used parametric estimation methods which rely on using the mean as a measure of central tendency (mean-load and daily-load), a method that utilizes the total flow volume (volume-load), and a method that uses measure of flow at the time of sampling (instantaneous-load). Using measurements from ten watersheds in the San Joaquin Valley of California, the average percent error compared to the true-load for total dissolved solids (TDS) was 7.3% for the median-load, 6.9% for the mean-load, 6.9% for the volume-load, 16.9% for the instantaneous-load, and 18.7% for the daily-load methods of calculation. The results of this study show that parametric methods are surprisingly accurate, even for data that have starkly non-normal distributions and are highly skewed. Copyright © 2013 Elsevier Ltd. All rights reserved.
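To make the competing estimators concrete, the sketch below computes illustrative versions of three of them in Python. These formulas are one plausible reading of the text (a central-tendency concentration multiplied by the total discharge volume, and a mean of instantaneous concentration-flow products), not the paper's exact equations, and the grab-sample data are synthetic.

    import numpy as np

    def loads(conc, flow, total_volume, n_days):
        # conc in g/m3 (equivalently mg/L), flow in m3/day; loads in grams.
        return {
            "median_load": np.median(conc) * total_volume,
            "mean_load": np.mean(conc) * total_volume,
            "instantaneous_load": np.mean(conc * flow) * n_days,
        }

    # Synthetic daily grab samples from an irrigation-dominated watershed
    rng = np.random.default_rng(11)
    conc = rng.lognormal(mean=5.0, sigma=0.4, size=30)   # skewed, non-normal
    flow = rng.lognormal(mean=9.0, sigma=0.6, size=30)
    print(loads(conc, flow, total_volume=flow.sum(), n_days=30))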
Spacecraft Conceptual Design Compared to the Apollo Lunar Lander
NASA Technical Reports Server (NTRS)
Young, C.; Bowie, J.; Rust, R.; Lenius, J.; Anderson, M.; Connolly, J.
2011-01-01
Future human exploration of the Moon will require an optimized spacecraft design with each sub-system achieving the required minimum capability and maintaining high reliability. The objective of this study was to trade capability against reliability and minimize mass for the lunar lander spacecraft. The NASA parametric concept for a 3-person vehicle to the lunar surface with a 30% mass margin was considerably heavier than the Apollo 15 Lunar Module "as flown" mass of 16.4 metric tons. The additional mass was attributed to mission requirements and system design choices that were made to meet the realities of modern spaceflight. The parametric tool used to size the current concept, Envision, accounts for primary and secondary mass requirements. For example, adding an astronaut increases the mass requirements for suits, water, food, and oxygen, as well as the required volume. The environmental control sub-systems become heavier with the increased requirements, and more structure is needed to support the additional mass. There is also an increase in propellant usage. For comparison, an "Apollo-like" vehicle was created by removing these additional requirements. Utilizing the Envision parametric mass calculation tool and a quantitative reliability estimation tool designed by Valador Inc., it was determined that with today's technology a Lunar Module (LM) with Apollo capability could be built with less mass and similar reliability. The reliability of this new lander was compared to the Apollo Lunar Module utilizing the same methodology, adjusting for mission timeline changes as well as component differences. Interestingly, the parametric concept's overall estimated risk for loss of mission (LOM) and loss of crew (LOC) did not significantly improve when compared to Apollo.
2006-09-01
[Figure 17: station line center of Magnus force vs. Mach number for a spin-stabilized projectile.] ...forces and moments on the projectile. It is also relatively easy to change the wind tunnel model to allow detailed parametric effects to be...such as pitch and roll damping, as well as Magnus force and moment coefficients, are difficult to obtain in a wind tunnel and require a complex
Grid Resolution Effects on LES of a Piloted Methane-Air Flame
2009-05-20
respectively. In the LES momentum equation, Eq. (3), the Smagorinsky model is used to obtain the deviatoric part of the unclosed SGS stress τ_ij... accurately predicted from integration of their LES evolution equations; and (ii), the flamelet parametrization should adequately approximate the... effect of the complex small-scale turbulence/chemistry interactions is modeled in an affordable way by a combustion model. A question of how a particular
Free boundary skin current magnetohydrodynamic equilibria
NASA Astrophysics Data System (ADS)
Reusch, Michael F.
1988-10-01
Function theoretic methods in the complex plane are used to develop simple parametric hodograph formulas that generate sharp boundary equilibria of arbitrary shape. The related method of Gorenflo [Z. Angew. Math. Phys. 16, 279 (1965)] and Merkel (Ph.D. thesis, University of Munich, 1965) is discussed. A numerical technique for the construction of solutions, based on one of the methods, is presented. A study is made of the bifurcations of an equilibrium of general form.
Detached-Eddy Simulations of Attached and Detached Boundary Layers
NASA Astrophysics Data System (ADS)
Caruelle, B.; Ducros, F.
2003-12-01
This article presents Detached-Eddy Simulations (DES) of attached and detached turbulent boundary layers. This hybrid Reynolds-Averaged Navier-Stokes (RANS) / Large Eddy Simulation (LES) model transitions continuously from RANS to LES according to the mesh definition. We propose a parametric study of the model over two "academic" configurations, in order to assess the influence of the mesh on the correct treatment of complex flows with attached and detached boundary layers.
Braeye, Toon; Verhaegen, Jan; Mignon, Annick; Flipse, Wim; Pierard, Denis; Huygen, Kris; Schirvel, Carole; Hens, Niel
2016-01-01
Introduction: Surveillance networks are often neither exhaustive nor completely complementary. In such situations, capture-recapture methods can be used for incidence estimation. The choice of estimator and their robustness with respect to the homogeneity and independence assumptions are, however, not well documented. Methods: We investigated the performance of five different capture-recapture estimators in a simulation study. Eight different scenarios were used to detect and combine case information. The scenarios increasingly violated the assumptions of independence of samples and homogeneity of detection probabilities. Belgian datasets on invasive pneumococcal disease (IPD) and pertussis provided motivating examples. Results: No estimator was unbiased in all scenarios. The performance of the parametric estimators depended on how much of the dependency and heterogeneity was correctly modelled. Model building was limited by parameter estimability, the availability of additional information (e.g. covariates) and the possibilities inherent to the method. In the most complex scenario, methods that allowed for detection probabilities conditional on previous detections estimated the total population size within a 20–30% error range. Parametric estimators remained stable if individual data sources lost up to 50% of their data. The investigated non-parametric methods were more susceptible to data loss and their performance was linked to the dependence between samples, overestimating in scenarios with little dependence and underestimating in others. Issues with parameter estimability made it impossible to model all suggested relations between samples for the IPD and pertussis datasets. For IPD, the estimates of the Belgian incidence for cases aged 50 years and older ranged from 44 to 58 per 100,000 in 2010. The estimates for pertussis (all ages, Belgium, 2014) ranged from 24.2 to 30.8 per 100,000. Conclusion: We encourage the use of capture-recapture methods, but epidemiologists should preferably include datasets for which the underlying dependency structure is not too complex, a priori investigate this structure, compensate for it within the model and interpret the results with the remaining unmodelled heterogeneity in mind. PMID:27529167
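For the simplest two-source case, the classical capture-recapture estimate reduces to one line: the Chapman-corrected Lincoln-Petersen estimator, sketched below in Python with made-up counts. It assumes exactly the independence and homogeneity that the simulation study deliberately violates.

    # Chapman's nearly unbiased correction of the two-source
    # Lincoln-Petersen estimator: n1 and n2 cases detected by each
    # surveillance source, m cases matched in both.
    def chapman_estimate(n1: int, n2: int, m: int) -> float:
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    # Toy example: 300 and 250 notified cases, 120 found by both systems
    print(chapman_estimate(300, 250, 120))   # about 623 cases in total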
Altschuler, Ted S; Molholm, Sophie; Butler, John S; Mercier, Manuel R; Brandwein, Alice B; Foxe, John J
2014-04-15
The adult human visual system can efficiently fill in missing object boundaries when low-level information from the retina is incomplete, but little is known about how these processes develop across childhood. A decade of visual-evoked potential (VEP) studies has produced a theoretical model identifying distinct phases of contour completion in adults. The first, termed a perceptual phase, occurs from approximately 100-200 ms and is associated with automatic boundary completion. The second is termed a conceptual phase occurring between 230 and 400 ms. The latter has been associated with the analysis of ambiguous objects, which seem to require more effort to complete. The electrophysiological markers of these phases have both been localized to the lateral occipital complex, a cluster of ventral visual stream brain regions associated with object processing. We presented Kanizsa-type illusory contour stimuli, often used for exploring contour completion processes, to neurotypical persons ages 6-31 (N=63), while parametrically varying the spatial extent of these induced contours, in order to better understand how filling-in processes develop across childhood and adolescence. Our results suggest that, while adults complete contour boundaries in a single discrete period during the automatic perceptual phase, children display an immature response pattern, engaging in more protracted processing across both timeframes and appearing to recruit more widely distributed regions which resemble those evoked during adult processing of higher-order ambiguous figures. However, children older than 5 years of age were remarkably like adults in that the effects of contour processing were invariant to manipulation of contour extent. Copyright © 2013 Elsevier Inc. All rights reserved.
Neural control of magnetic suspension systems
NASA Technical Reports Server (NTRS)
Gray, W. Steven
1993-01-01
The purpose of this research program is to design, build and test (in cooperation with NASA personnel from the NASA Langley Research Center) neural controllers for two different small air-gap magnetic suspension systems. The general objective of the program is to study neural network architectures for the purpose of control in an experimental setting and to demonstrate the feasibility of the concept. The specific objectives of the research program are: (1) to demonstrate through simulation and experimentation the feasibility of using neural controllers to stabilize a nonlinear magnetic suspension system; (2) to investigate through simulation and experimentation the performance of neural controller designs under various types of parametric and nonparametric uncertainty; (3) to investigate through simulation and experimentation various types of neural architectures for real-time control with respect to performance and complexity; and (4) to benchmark in an experimental setting the performance of neural controllers against other types of existing linear and nonlinear compensator designs. To date, the first one-dimensional, small air-gap magnetic suspension system has been built, tested and delivered to the NASA Langley Research Center. The device is currently being stabilized with a digital linear phase-lead controller. The neural controller hardware is under construction. Two different neural network paradigms are under consideration: one based on hidden-layer feedforward networks trained via backpropagation, and one based on Gaussian radial basis functions trained by analytical methods related to stability conditions. Some advanced nonlinear control algorithms using feedback linearization and sliding mode control are in simulation studies.
Objective Assessment of Listening Effort: Coregistration of Pupillometry and EEG.
Miles, Kelly; McMahon, Catherine; Boisvert, Isabelle; Ibrahim, Ronny; de Lissa, Peter; Graham, Petra; Lyxell, Björn
2017-01-01
Listening to speech in noise is effortful, particularly for people with hearing impairment. While it is known that effort is related to a complex interplay between bottom-up and top-down processes, the cognitive and neurophysiological mechanisms contributing to effortful listening remain unknown. Therefore, a reliable physiological measure to assess effort remains elusive. This study aimed to determine whether pupil dilation and alpha power change, two physiological measures suggested to index listening effort, assess similar processes. Listening effort was manipulated by parametrically varying spectral resolution (16- and 6-channel noise vocoding) and speech reception thresholds (SRT; 50% and 80%) while 19 young, normal-hearing adults performed a speech recognition task in noise. Results of off-line sentence scoring showed discrepancies between the target SRTs and the true performance obtained during the speech recognition task. For example, in the SRT80% condition, participants scored an average of 64.7%. Participants' true performance levels were therefore used for subsequent statistical modelling. Results showed that both measures appeared to be sensitive to changes in spectral resolution (channel vocoding), while only pupil dilation was also significantly related to true performance levels (%) and task accuracy (i.e., whether the response was correctly or partially recalled). The two measures were not correlated, suggesting they each may reflect different cognitive processes involved in listening effort. This combination of findings contributes to a growing body of research aiming to develop an objective measure of listening effort.
Rephasing invariant parametrization of flavor mixing
NASA Astrophysics Data System (ADS)
Lee, Tae-Hun
A new rephasing invariant parametrization for the 3 x 3 CKM matrix, called the (x, y) parametrization, is introduced, and its properties and applications are discussed. The overall phase condition leads this parametrization to have only six rephasing invariant parameters and two constraints. Its simplicity and regularity become apparent when it is applied to the one-loop RGE (renormalization group equations) for the Yukawa couplings. The implications of this parametrization for unification of the Yukawa couplings are also explored.
Review of Statistical Methods for Analysing Healthcare Resources and Costs
Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G
2011-01-01
We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease of general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of the above data characteristics, may be preferable, but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344
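As a concrete illustration of category (III), the following is a minimal sketch of a single-distribution GLM for right-skewed, strictly positive cost data, using a gamma family with a log link; the data are simulated placeholders and the statsmodels API is assumed to be available in its current form.

```python
# A minimal sketch of a gamma GLM with log link for skewed cost data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
treat = rng.integers(0, 2, size=500)                  # trial arm indicator
costs = rng.gamma(shape=2.0, scale=np.exp(7.0 + 0.3 * treat) / 2.0)

X = sm.add_constant(treat)
fit = sm.GLM(costs, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(fit.summary())
# exp(coefficient) on the treatment term is the multiplicative effect on mean
# cost; excess zeros or multimodality would instead call for two-part or
# mixture models (categories V and VI above).
```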
Kutateladze, Andrei G; Mukhina, Olga A
2014-09-05
Spin-spin coupling constants in (1)H NMR carry a wealth of structural information and offer a powerful tool for deciphering molecular structures. However, accurate ab initio or DFT calculations of spin-spin coupling constants have been very challenging and expensive. Scaling of (easy) Fermi contacts, fc, especially in the context of recent findings by Bally and Rablen (Bally, T.; Rablen, P. R. J. Org. Chem. 2011, 76, 4818), offers a framework for achieving practical evaluation of spin-spin coupling constants. We report a faster and more precise parametrization approach utilizing a new basis set for hydrogen atoms optimized in conjunction with (i) inexpensive B3LYP/6-31G(d) molecular geometries, (ii) inexpensive 4-31G basis set for carbon atoms in fc calculations, and (iii) individual parametrization for different atom types/hybridizations, not unlike a force field in molecular mechanics, but designed for the fc's. With the training set of 608 experimental constants we achieved rmsd <0.19 Hz. The methodology performs very well as we illustrate with a set of complex organic natural products, including strychnine (rmsd 0.19 Hz), morphine (rmsd 0.24 Hz), etc. This precision is achieved with much shorter computational times: accurate spin-spin coupling constants for the two conformers of strychnine were computed in parallel on two 16-core nodes of a Linux cluster within 10 min.
NASA Astrophysics Data System (ADS)
Alshakova, E. L.
2017-01-01
A program written in the AutoLISP language can automatically generate parametric drawings while working in the AutoCAD software product. Students learn to develop AutoLISP programs using a methodical complex containing instructions in which real examples of creating images and drawings are worked through. The instructions contain the reference information needed to complete the proposed tasks. Training in AutoLISP programming is based on a method of step-by-step program development: the program draws the elements of a part drawing by means of purpose-built functions whose argument values are written in the same sequence in which AutoCAD issues prompts when the corresponding command is executed in the editor. The program design process thus reduces to the step-by-step construction of functions and of the sequence of their calls. The author considers the development of AutoLISP programs for creating parametric drawings of parts of a given design, with the user entering the dimensions of the part elements. These programs generate variants of the tasks for the graphic works performed in the courses "Engineering Graphics" and "Engineering and Computer Graphics". Individual tasks develop students' skills of independent work in reading and creating drawings, as well as in 3D modeling.
Intersection of three-dimensional geometric surfaces
NASA Technical Reports Server (NTRS)
Crisp, V. K.; Rehder, J. J.; Schwing, J. L.
1985-01-01
Calculating the line of intersection between two three-dimensional objects and using the information to generate a third object is a key element in a geometry development system. Techniques are presented for the generation of three-dimensional objects, the calculation of a line of intersection between two objects, and the construction of a resultant third object. The objects are closed surfaces consisting of adjacent bicubic parametric patches using Bezier basis functions. The intersection determination involves subdividing the patches that make up the objects until they are approximately planar and then calculating the intersection between planes. The resulting straight-line segments are connected to form the curve of intersection. The polygons in the neighborhood of the intersection are reconstructed and put back into the Bezier representation. A third object can be generated using various combinations of the original two. Several examples are presented. Special cases and problems were encountered, and the methods for handling them are discussed; these included intersection at patch edges, gaps between adjacent patches caused by unequal subdivision, holes or islands within patches, and computer round-off error.
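The following is a minimal sketch of the core geometric step described above: once two patches have been subdivided until approximately planar, each pair is intersected as planes. Given planes n1·x = d1 and n2·x = d2, the intersection line has direction n1 × n2; the point formula used here is a standard one, not necessarily the paper's exact implementation.

```python
# A minimal sketch of plane-plane intersection, the kernel of the algorithm.
import numpy as np

def plane_plane_intersection(n1, d1, n2, d2, eps=1e-12):
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    denom = np.dot(direction, direction)
    if denom < eps:                      # parallel planes: no unique line
        return None
    # A point solving both plane equations, perpendicular to the line.
    point = np.cross(d1 * n2 - d2 * n1, direction) / denom
    return point, direction / np.sqrt(denom)

print(plane_plane_intersection([0, 0, 1], 2.0, [0, 1, 0], 3.0))
# In the full algorithm these infinite lines are clipped against the patch
# boundaries and chained into the curve of intersection.
```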
Obtaining Thickness-Limited Electrospray Deposition for 3D Coating.
Lei, Lin; Kovacevich, Dylan A; Nitzsche, Michael P; Ryu, Jihyun; Al-Marzoki, Kutaiba; Rodriguez, Gabriela; Klein, Lisa C; Jitianu, Andrei; Singer, Jonathan P
2018-04-04
Electrospray processing utilizes the balance of electrostatic forces and surface tension within a charged spray to produce charged microdroplets with a narrow dispersion in size. In electrospray deposition, each droplet carries a small quantity of suspended material to a target substrate. Past electrospray deposition results fall into two major categories: (1) continuous spray of films onto conducting substrates and (2) spray of isolated droplets onto insulating substrates. A crossover regime, or a self-limited spray, has only rarely been observed, in the spray of insulating materials onto conductive substrates. In such sprays, a limiting thickness emerges, where the accumulation of charge repels further spray. In this study, we examined the parametric spray of several glassy polymers to both categorize past electrospray deposition results and uncover the critical parameters for thickness-limited sprays. The key parameters for determining the limiting thickness were (1) field strength and (2) spray temperature, related to (i) the necessary repulsive field and (ii) the ability of the deposited materials to swell in the carrier solvent vapor and redistribute charge. These control mechanisms can be applied to the uniform or controllably varied microscale coating of complex three-dimensional objects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.
New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data from DFTB to ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.
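The following is a minimal, generic sketch of the optimization loop described above: simulated annealing of adjustable parameters against an objective measuring the mismatch to reference data. The quadratic toy objective stands in for the DFTB error in atomization energies and forces; it is not the actual model.

```python
# A minimal sketch of simulated annealing for parameter fitting.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([1.5, -0.7, 2.0])           # hypothetical "ab initio" answer

def objective(p):
    return np.sum((p - target) ** 2)

p = np.zeros(3)
best, best_f = p.copy(), objective(p)
T = 1.0
for step in range(5000):
    trial = p + rng.normal(scale=0.1, size=p.size)
    df = objective(trial) - objective(p)
    if df < 0 or rng.random() < np.exp(-df / T):   # Metropolis acceptance
        p = trial
        if objective(p) < best_f:
            best, best_f = p.copy(), objective(p)
    T *= 0.999                                      # geometric cooling schedule
print(best, best_f)
# A steepest-descent refinement, as in the paper, would typically follow.
```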
Hu, Ting; Chen, Yuanzhu; Kiralis, Jeff W; Collins, Ryan L; Wejse, Christian; Sirugo, Giorgio; Williams, Scott M; Moore, Jason H
2013-01-01
Background Epistasis has been historically used to describe the phenomenon that the effect of a given gene on a phenotype can be dependent on one or more other genes, and is an essential element for understanding the association between genetic and phenotypic variations. Quantifying epistasis of orders higher than two is very challenging due to both the computational complexity of enumerating all possible combinations in genome-wide data and the lack of efficient and effective methodologies. Objectives In this study, we propose a fast, non-parametric, and model-free measure for three-way epistasis. Methods Such a measure is based on information gain, and is able to separate all lower order effects from pure three-way epistasis. Results Our method was verified on synthetic data and applied to real data from a candidate-gene study of tuberculosis in a West African population. In the tuberculosis data, we found a statistically significant pure three-way epistatic interaction effect that was stronger than any lower-order associations. Conclusion Our study provides a methodological basis for detecting and characterizing high-order gene-gene interactions in genetic association studies. PMID:23396514
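The following is one plausible implementation of such an information-gain measure, under our reading of the approach: the mutual information between the joint genotype (A, B, C) and phenotype P, minus all main effects and pairwise interaction effects. The genotype and phenotype arrays are synthetic placeholders, and the exact definition should be checked against the paper.

```python
# A minimal sketch of a three-way information-gain epistasis measure.
import numpy as np

def entropy(*cols):
    """Joint Shannon entropy (base 2) of one or more discrete columns."""
    keys = np.array(list(zip(*cols)))
    _, counts = np.unique(keys, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def mi(x_cols, y):
    return entropy(*x_cols) + entropy(y) - entropy(*x_cols, y)

def three_way_ig(a, b, c, p):
    main = mi([a], p) + mi([b], p) + mi([c], p)
    pair = (mi([a, b], p) - mi([a], p) - mi([b], p)
            + mi([a, c], p) - mi([a], p) - mi([c], p)
            + mi([b, c], p) - mi([b], p) - mi([c], p))
    return mi([a, b, c], p) - pair - main

rng = np.random.default_rng(2)
a, b, c = (rng.integers(0, 2, 2000) for _ in range(3))
phenotype = a ^ b ^ c                    # pure synthetic 3-way interaction
print(three_way_ig(a, b, c, phenotype))  # close to 1 bit; lower orders vanish
# Significance would be assessed by permuting the phenotype labels.
```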
NASA Technical Reports Server (NTRS)
Athavale, Mahesh; Przekwas, Andrzej
2004-01-01
The objectives of the program were to develop computational fluid dynamics (CFD) codes and simpler industrial codes for analyzing and designing advanced seals for air-breathing and space propulsion engines. The CFD code SCISEAL is capable of producing full three-dimensional flow field information for a variety of cylindrical configurations. An implicit multidomain capability allow the division of complex flow domains to allow optimum use of computational cells. SCISEAL also has the unique capability to produce cross-coupled stiffness and damping coefficients for rotordynamic computations. The industrial codes consist of a series of separate stand-alone modules designed for expeditious parametric analyses and optimization of a wide variety of cylindrical and face seals. Coupled through a Knowledge-Based System (KBS) that provides a user-friendly Graphical User Interface (GUI), the industrial codes are PC based using an OS/2 operating system. These codes were designed to treat film seals where a clearance exists between the rotating and stationary components. Leakage is inhibited by surface roughness, small but stiff clearance films, and viscous pumping devices. The codes have demonstrated to be a valuable resource for seal development of future air-breathing and space propulsion engines.
Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.; ...
2017-10-17
New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater- Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value ofmore » an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data from DFTB to ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.« less
Adapting Active Shape Models for 3D segmentation of tubular structures in medical images.
de Bruijne, Marleen; van Ginneken, Bram; Viergever, Max A; Niessen, Wiro J
2003-07-01
Active Shape Models (ASM) have proven to be an effective approach for image segmentation. In some applications, however, the linear model of gray level appearance around a contour that is used in ASM is not sufficient for accurate boundary localization. Furthermore, the statistical shape model may be too restricted if the training set is limited. This paper describes modifications to both the shape and the appearance model of the original ASM formulation. Shape model flexibility is increased, for tubular objects, by modeling the axis deformation independent of the cross-sectional deformation, and by adding supplementary cylindrical deformation modes. Furthermore, a novel appearance modeling scheme that effectively deals with a highly varying background is developed. In contrast with the conventional ASM approach, the new appearance model is trained on both boundary and non-boundary points, and the probability that a given point belongs to the boundary is estimated non-parametrically. The methods are evaluated on the complex task of segmenting thrombus in abdominal aortic aneurysms (AAA). Shape approximation errors were successfully reduced using the two shape model extensions. Segmentation using the new appearance model significantly outperformed the original ASM scheme; average volume errors are 5.1% and 45% respectively.
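For readers unfamiliar with ASM, the following is a minimal sketch of the statistical shape model at its heart: aligned landmark shapes are decomposed by PCA, and new shapes are generated as the mean plus a few weighted modes. The cylindrical modes and the axis/cross-section split described above would add further structure to the mode matrix; the landmark data here are random placeholders.

```python
# A minimal sketch of a PCA-based statistical shape model.
import numpy as np

rng = np.random.default_rng(3)
n_shapes, n_points = 40, 30
shapes = rng.normal(size=(n_shapes, 2 * n_points))    # stand-in aligned landmarks

mean = shapes.mean(axis=0)
u, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
modes = vt[:5]                                        # first 5 deformation modes
var = (s ** 2) / (n_shapes - 1)

b = rng.normal(scale=np.sqrt(var[:5]))                # mode weights
new_shape = mean + b @ modes
print(new_shape.shape)
# In ASM fitting, b is constrained (e.g. |b_i| < 3 sqrt(var_i)) so generated
# shapes stay within the learned distribution.
```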
An Integrated Approach to Damage Accommodation in Flight Control
NASA Technical Reports Server (NTRS)
Boskovic, Jovan D.; Knoebel, Nathan; Mehra, Raman K.; Gregory, Irene
2008-01-01
In this paper we present an integrated approach to in-flight damage accommodation in flight control. The approach is based on Multiple Models, Switching and Tuning (MMST) and consists of three steps. In the first step the main objective is to acquire a realistic aircraft damage model. Modeling of in-flight damage is a highly complex problem since a large number of issues need to be addressed. One of the most important is that there is strong coupling between structural dynamics, aerodynamics, and flight control; these effects cannot be studied separately. Once a realistic damage model is available, in the second step a large number of models corresponding to different damage cases are generated. One possibility is to generate many linear models and interpolate between them to cover a large portion of the flight envelope. Once these models have been generated, we implement a recently developed Model Set Reduction (MSR) technique. The technique is based on parameterizing damage in terms of uncertain parameters, and uses concepts from robust control theory to arrive at a small number of "centered" models such that the controllers corresponding to these models assure desired stability and robustness properties over a subset of the parametric space. By devising a suitable model placement strategy, the entire parametric set is covered with a relatively small number of models and controllers. The third step consists of designing a Multiple Models, Switching and Tuning (MMST) strategy for estimating the current operating regime (damage case) of the aircraft, and switching to the corresponding controller to achieve effective damage accommodation and the desired performance. In the paper we present a comprehensive approach to damage accommodation using Model Set Design, MMST, and Variable Structure compensation for coupling nonlinearities. The approach was evaluated on a model of F/A-18 aircraft dynamics under control effector damage, augmented by nonlinear cross-coupling terms and a structural dynamics model, and achieved excellent performance under severe damage effects.
NASA Technical Reports Server (NTRS)
Postma, Barry Dirk
2005-01-01
This thesis discusses the application of a robust constrained optimization approach to control design, used to develop an Auto Balancing Controller (ABC) for a centrifuge rotor to be implemented on the International Space Station. The design goal is to minimize a performance objective of the system while guaranteeing stability and proper performance for a range of uncertain plants. The performance objective is to minimize the translational response of the centrifuge rotor due to a fixed worst-case rotor imbalance. The robustness constraints are posed with respect to parametric uncertainty in the plant. The proposed approach to control design allows both of these objectives to be handled within the framework of constrained optimization. The resulting controller achieves acceptable performance and robustness characteristics.
A method of computer aided design with self-generative models in NX Siemens environment
NASA Astrophysics Data System (ADS)
Grabowik, C.; Kalinowski, K.; Kempa, W.; Paprocka, I.
2015-11-01
Current CAD/CAE/CAM systems make it possible to create 3D virtual design models that capture a certain amount of knowledge. Such models are especially useful in automating routine design tasks. They are known as self-generative (or auto-generative) models and can behave in an intelligent way. The main difference between auto-generative and fully parametric models is the auto-generative models' capacity for self-organization. Here, self-organization means that, besides supporting automatic changes of a model's quantitative features, these models embody knowledge of how such changes should be made; moreover, they are able to change qualitative features according to specific knowledge. Despite their undoubted strengths, self-generative models are not often used in the design process, mainly because of their usually great complexity, which makes their creation time- and labour-consuming and requires considerable investment. The creation of a self-generative model consists of three stages: knowledge and information acquisition, model type selection, and model implementation. In this paper, methods of computer-aided design with self-generative models in the NX Siemens CAD/CAE/CAM software are presented. Five methods of preparing self-generative models in NX are considered, based on: a parametric relations model, part families, GRIP language applications, knowledge fusion, and the OPEN API mechanism. Examples of each type of self-generative model are presented. These methods make the design process much faster, and preparing such models is recommended whenever design variants must be created. The conducted research on the usefulness of the elaborated models showed that they are highly suited to the automation of routine tasks, but it remains difficult to single out one preferred preparation method; the choice always depends on the complexity of the problem. The easiest approach is the parametric relations model, while the hardest is the OPEN API mechanism; from the knowledge-processing point of view, the best choice is knowledge fusion.
Exploiting Complexity Information for Brain Activation Detection
Zhang, Yan; Liang, Jiali; Lin, Qiang; Hu, Zhenghui
2016-01-01
We present a complexity-based approach for the analysis of fMRI time series, in which sample entropy (SampEn) is introduced as a quantification of voxel complexity. Under this hypothesis the voxel complexity is modulated by pertinent cognitive tasks and changes across experimental paradigms. We calculate the complexity of sequential fMRI data for each voxel in two distinct experimental paradigms and use a nonparametric statistical strategy, the Wilcoxon signed rank test, to evaluate the difference in complexity between them. The results are compared with the well-known general linear model based Statistical Parametric Mapping package (SPM12), and a marked difference is observed. This is because the SampEn method detects changes in brain complexity between the two experimental conditions, and as a data-driven method SampEn evaluates only the complexity of the specific sequential fMRI data. Also, larger and smaller SampEn values correspond to different meanings, and the neutral-blank design produces higher predictability than threat-neutral. Complexity information can be considered a complementary method to existing fMRI analysis strategies, and it may help improve the understanding of human brain functions from a different perspective. PMID:27045838
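The following is a minimal sketch of sample entropy, the per-voxel complexity measure used above: the negative log of the conditional probability that sequences matching for m points (within tolerance r) also match for m + 1 points. This is the standard definition in a compact form, not the paper's exact code.

```python
# A minimal sketch of sample entropy (SampEn) for a 1D time series.
import numpy as np

def sampen(x, m=2, r_frac=0.2):
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        dists = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        # Count matching pairs, excluding self-matches on the diagonal.
        return ((dists <= r).sum() - len(templates)) / 2
    a, b = count(m + 1), count(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(4)
print(sampen(rng.normal(size=200)))             # noisy series: higher SampEn
print(sampen(np.sin(np.linspace(0, 20, 200))))  # regular series: lower SampEn
```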
Note on the displacement of a trajectory of hyperbolic motion in curved space-time
NASA Astrophysics Data System (ADS)
Krikorian, R. A.
2012-04-01
The object of this note is to present a physical application of the theory of the infinitesimal deformations or displacements of curves developed by Yano using the concept of Lie derivative. It is shown that an infinitesimal point transformation which carries a given trajectory of hyperbolic motion into a trajectory of the same type, and preserves the affine parametrization of the trajectory, defines a homothetic motion.
Numerical study of the polarization effect of GPR systems on the detection of buried objects
NASA Astrophysics Data System (ADS)
Sagnard, Florence
2017-04-01
This work is in line with the studies carried out in our department over the last few years on object detection in civil engineering structures and soils. In parallel to building of the second version of the Sense-City test site where several pipeline networks will be buried [1], we are developing numerical models using the FIT and the FDTD approaches to study more precisely the contribution of the polarization diversity in the detection of conductive and dielectric buried objects using the GPR technique. The simulations developed are based on a ultra-wide band SFCW GPR system that have been designed and evaluated in our laboratory. A parametric study is proposed to evaluate the influence of the antenna configurations and the antenna geometry when considering the polarization diversity in the detection and characterization of canonical objects. [1] http://www.sense-city.univ-paris-est.fr/index.php
Carbide-reinforced metal matrix composite by direct metal deposition
NASA Astrophysics Data System (ADS)
Novichenko, D.; Thivillon, L.; Bertrand, Ph.; Smurov, I.
Direct metal deposition (DMD) is an automated 3D laser cladding technology with co-axial powder injection for industrial applications. The actual objective is to demonstrate the possibility to produce metal matrix composite objects in a single-step process. Powders of Fe-based alloy (16NCD13) and titanium carbide (TiC) are premixed before cladding. Volume content of the carbide-reinforced phase is varied. Relationships between the main laser cladding parameters and the geometry of the built-up objects (single track, 2D coating) are discussed. On the base of parametric study, a laser cladding process map for the deposition of individual tracks was established. Microstructure and composition of the laser-fabricated metal matrix composite objects are examined. Two different types of structures: (a) with the presence of undissolved and (b) precipitated titanium carbides are observed. Mechanism of formation of diverse precipitated titanium carbides is studied.
Numerical Leak Detection in a Pipeline Network of Complex Structure with Unsteady Flow
NASA Astrophysics Data System (ADS)
Aida-zade, K. R.; Ashrafova, E. R.
2017-12-01
An inverse problem for a pipeline network of complex loopback structure is solved numerically. The problem is to determine the locations and amounts of leaks from unsteady flow characteristics measured at some pipeline points. The features of the problem include impulse functions involved in a system of hyperbolic differential equations, the absence of classical initial conditions, and boundary conditions specified as nonseparated relations between the states at the endpoints of adjacent pipeline segments. The problem is reduced to a parametric optimal control problem without initial conditions, but with nonseparated boundary conditions. The latter problem is solved by applying first-order optimization methods. Results of numerical experiments are presented.
Mission activities planning for a Hermes mission by means of AI-technology
NASA Technical Reports Server (NTRS)
Pape, U.; Hajen, G.; Schielow, N.; Mitschdoerfer, P.; Allard, F.
1993-01-01
Mission Activities Planning is a complex task to be performed by mission control centers. AI technology can offer attractive solutions to the planning problem. This paper presents the use of a new AI-based Mission Planning System for crew activity planning. Based on a HERMES servicing mission to the COLUMBUS Man Tended Free Flyer (MTFF) with complex time and resource constraints, approximately 2000 activities with 50 different resources have been generated, processed, and planned with parametric variation of operationally sensitive parameters. The architecture, as well as the performance of the mission planning system, is discussed. An outlook to future planning scenarios, the requirements, and how a system like MARS can fulfill those requirements is given.
2014-01-01
Background Cost-effectiveness analyses (CEAs) that use patient-specific data from a randomized controlled trial (RCT) are popular, yet such CEAs are criticized because they neglect to incorporate evidence external to the trial. A popular method for quantifying uncertainty in a RCT-based CEA is the bootstrap. The objective of the present study was to further expand the bootstrap method of RCT-based CEA for the incorporation of external evidence. Methods We utilize the Bayesian interpretation of the bootstrap and derive the distribution for the cost and effectiveness outcomes after observing the current RCT data and the external evidence. We propose simple modifications of the bootstrap for sampling from such posterior distributions. Results In a proof-of-concept case study, we use data from a clinical trial and incorporate external evidence on the effect size of treatments to illustrate the method in action. Compared to the parametric models of evidence synthesis, the proposed approach requires fewer distributional assumptions, does not require explicit modeling of the relation between external evidence and outcomes of interest, and is generally easier to implement. A drawback of this approach is potential computational inefficiency compared to the parametric Bayesian methods. Conclusions The bootstrap method of RCT-based CEA can be extended to incorporate external evidence, while preserving its appealing features such as no requirement for parametric modeling of cost and effectiveness outcomes. PMID:24888356
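The following is a minimal sketch of the Bayesian bootstrap that underlies the proposed method: instead of resampling patients with replacement, each replicate draws Dirichlet(1,...,1) weights over patients and computes weighted mean costs and effects. External evidence would enter by modifying these draws as the paper describes; only the plain unweighted version is shown, on simulated placeholder data.

```python
# A minimal sketch of the Bayesian bootstrap for trial-based CEA outcomes.
import numpy as np

rng = np.random.default_rng(5)
n = 300
cost = rng.gamma(2.0, 1500.0, size=n)       # simulated per-patient costs
effect = rng.normal(0.7, 0.2, size=n)       # simulated per-patient effects

reps = 2000
w = rng.dirichlet(np.ones(n), size=reps)    # Bayesian bootstrap weights
mean_cost = w @ cost
mean_effect = w @ effect
print(np.percentile(mean_cost, [2.5, 97.5]))
print(np.percentile(mean_effect, [2.5, 97.5]))
# Each (mean_cost, mean_effect) pair is one posterior draw; pairs from two
# trial arms would be differenced to populate the cost-effectiveness plane.
```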
Differential diagnosis of normal pressure hydrocephalus by MRI mean diffusivity histogram analysis.
Ivkovic, M; Liu, B; Ahmed, F; Moore, D; Huang, C; Raj, A; Kovanlikaya, I; Heier, L; Relkin, N
2013-01-01
Accurate diagnosis of normal pressure hydrocephalus is challenging because the clinical symptoms and radiographic appearance of NPH often overlap those of other conditions, including age-related neurodegenerative disorders such as Alzheimer and Parkinson diseases. We hypothesized that radiologic differences between NPH and AD/PD can be characterized by a robust and objective MR imaging DTI technique that does not require intersubject image registration or operator-defined regions of interest, thus avoiding many pitfalls common in DTI methods. We collected 3T DTI data from 15 patients with probable NPH and 25 controls with AD, PD, or dementia with Lewy bodies. We developed a parametric model for the shape of intracranial mean diffusivity histograms that separates brain and ventricular components from a third component composed mostly of partial volume voxels. To accurately fit the shape of the third component, we constructed a parametric function named the generalized Voss-Dyke function. We then examined the use of the fitting parameters for the differential diagnosis of NPH from AD, PD, and DLB. Using parameters for the MD histogram shape, we distinguished clinically probable NPH from the 3 other disorders with 86% sensitivity and 96% specificity. The technique yielded 86% sensitivity and 88% specificity when differentiating NPH from AD only. An adequate parametric model for the shape of intracranial MD histograms can distinguish NPH from AD, PD, or DLB with high sensitivity and specificity.
Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana
2015-03-01
The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities and participation in society). The main purpose of this paper is the evaluation of the psychometric properties of each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Kovach, L. S.; Zdankiewicz, E. M.
1987-01-01
Vapor compression distillation technology for phase change recovery of potable water from wastewater has evolved as a technically mature approach for use aboard the Space Station. A program to parametrically test an advanced preprototype Vapor Compression Distillation Subsystem (VCDS) was completed during 1985 and 1986. In parallel with parametric testing, a hardware improvement program was initiated to test the feasibility of incorporating several key improvements into the advanced preprototype VCDS following initial parametric tests. Specific areas of improvement included long-life, self-lubricated bearings, a lightweight, highly efficient compressor, and a long-life magnetic drive. With the exception of the self-lubricated bearings, these improvements were incorporated. The advanced preprototype VCDS was designed to reclaim 95 percent of the available wastewater at a nominal water recovery rate of 1.36 kg/h, achieved at a solids concentration of 2.3 percent and a 308 K condenser temperature. While this performance was maintained during initial testing, a 300 percent improvement in water production rate, with a correspondingly lower specific energy, was achieved following incorporation of the improvements. Testing involved the characterization of key VCDS performance factors as a function of recycle loop solids concentration, distillation unit temperature and fluids pump speed. The objective of this effort was to expand the VCDS database to enable the definition of optimum performance characteristics for flight hardware development.
Light Absorption Enhancement of Black Carbon Aerosol Constrained by Particle Morphology.
Wu, Yu; Cheng, Tianhai; Liu, Dantong; Allan, James D; Zheng, Lijuan; Chen, Hao
2018-06-19
The radiative forcing of black carbon aerosol (BC) is one of the largest sources of uncertainty in climate change assessments. Contrasting estimates of BC absorption enhancement (Eabs) after aging have been reported by field measurements and modeling studies, causing ambiguous parametrizations of BC solar absorption in climate models. Here we quantify Eabs using a theoretical model parametrized by the complex particle morphology of BC at different aging scales. We show that Eabs continuously increases with aging and stabilizes at a maximum of ∼3.5, suggesting that previous seemingly contrasting results for Eabs can be explicitly described by BC aging with the corresponding particle morphology. We also report that current climate models using the Mie core-shell model may overestimate Eabs at a certain aging stage featuring a rapid rise of Eabs, which is commonly observed in the ambient atmosphere. A correction coefficient for this overestimation is suggested to improve model predictions of the BC climate impact.
Phase transition in the parametric natural visibility graph.
Snarskii, A A; Bezsudnov, I V
2016-10-01
We investigate time series by mapping them to complex networks using a parametric natural visibility graph (PNVG) algorithm that generates graphs depending on an arbitrary continuous parameter, the angle of view. We study the behavior of the relative number of clusters in the PNVG near the critical value of the angle of view. Artificial and experimental time series of different nature are used for numerical PNVG investigations to find critical exponents above and below the critical point, as well as the exponent in the finite-size scaling regime. Altogether, they allow us to find the critical exponent of the correlation length for the PNVG. The set of calculated critical exponents satisfies the basic Widom relation. The PNVG is found to demonstrate scaling behavior. Our results reveal the similarity between the behavior of the relative number of clusters in the PNVG and the order parameter in the theory of second-order phase transitions. We show that the PNVG is another example of a system (in addition to magnetic, percolation, superconductivity, etc.) with an observed second-order phase transition.
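The following is a minimal sketch of one plausible PNVG construction: standard natural-visibility links between samples, additionally filtered by a link angle compared against the angle-of-view parameter. The angle convention here (measured from the positive time axis, links at or below `alpha` kept) is an assumption and may differ from the paper's exact definition.

```python
# A minimal sketch of a parametric natural visibility graph (PNVG).
import numpy as np

def pnvg_edges(y, alpha):
    t = np.arange(len(y))
    edges = []
    for a in range(len(y)):
        for b in range(a + 1, len(y)):
            slope = (y[b] - y[a]) / (t[b] - t[a])
            # Natural visibility: every intermediate point lies below the chord.
            visible = all(y[c] < y[a] + slope * (t[c] - t[a])
                          for c in range(a + 1, b))
            angle = np.arctan2(y[b] - y[a], t[b] - t[a])   # link "angle of view"
            if visible and angle <= alpha:
                edges.append((a, b))
    return edges

rng = np.random.default_rng(6)
series = rng.normal(size=200)
for alpha in (np.pi / 2, 0.0, -np.pi / 4):
    print(alpha, len(pnvg_edges(series, alpha)))
# Sweeping alpha and tracking the relative number of connected clusters traces
# out the order-parameter-like curve whose critical behavior is studied above.
```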
NASA Technical Reports Server (NTRS)
Deepak, A.; Box, M. A.
1978-01-01
The paper presents a parametric study of the forward-scattering corrections for experimentally measured optical extinction coefficients in polydisperse particulate media, since some forward-scattered light invariably enters, along with the direct beam, into the finite aperture of the detector. Forward-scattering corrections are computed by two methods: (1) using the exact Mie theory, and (2) using the approximate Rayleigh diffraction formula for spherical particles. A parametric study of the dependence of the corrections on mode radii, real and imaginary parts of the complex refractive index, and half-angle of the detector's view cone has been carried out for three different size distribution functions of the modified gamma type. In addition, a study has been carried out to investigate the range of these parameters in which the approximate formulation is valid. The agreement is especially good for small view-cone angles and large particles, and improves significantly for slightly absorbing aerosol particles. Also discussed is the dependence of these corrections on the experimental design of the transmissometer systems.
Radial forcing and Edgar Allan Poe's lengthening pendulum
NASA Astrophysics Data System (ADS)
McMillan, Matthew; Blasing, David; Whitney, Heather M.
2013-09-01
Inspired by Edgar Allan Poe's The Pit and the Pendulum, we investigate a radially driven, lengthening pendulum. We first show that increasing the length of an undriven pendulum at a uniform rate does not amplify the oscillations in a manner consistent with the behavior of the scythe in Poe's story. We discuss parametric amplification and the transfer of energy (through the parameter of the pendulum's length) to the oscillating part of the system. In this manner, radial driving can easily and intuitively be understood, and the fundamental concept applied in many other areas. We propose and show by a numerical model that appropriately timed radial forcing can increase the oscillation amplitude in a manner consistent with Poe's story. Our analysis contributes a computational exploration of the complex harmonic motion that can result from radially driving a pendulum and sheds light on a mechanism by which oscillations can be amplified parametrically. These insights should prove especially valuable in the undergraduate physics classroom, where investigations into pendulums and oscillations are commonplace.
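The following is a minimal numerical sketch of the radially driven pendulum discussed above. For a pendulum of time-varying length l(t), the equation of motion is theta'' + (2 l'/l) theta' + (g/l) sin(theta) = 0, and modulating l near twice the natural frequency pumps energy into the swing (parametric amplification). Parameter values are illustrative only, not those of the paper.

```python
# A minimal sketch of parametric amplification of a radially driven pendulum.
import numpy as np
from scipy.integrate import solve_ivp

g, l0, eps = 9.81, 1.0, 0.05
w0 = np.sqrt(g / l0)

def l(t):    return l0 * (1 + eps * np.cos(2 * w0 * t))   # radial forcing
def ldot(t): return -l0 * eps * 2 * w0 * np.sin(2 * w0 * t)

def rhs(t, s):
    theta, omega = s
    return [omega, -(2 * ldot(t) / l(t)) * omega - (g / l(t)) * np.sin(theta)]

sol = solve_ivp(rhs, (0, 60), [0.05, 0.0],
                t_eval=np.linspace(0, 60, 6000), rtol=1e-8)
print(f"initial amplitude 0.05 rad -> late-time swing "
      f"{np.abs(sol.y[0][-1000:]).max():.2f} rad")
# With the 2*w0 modulation the amplitude grows until nonlinearity saturates it;
# a uniform lengthening l(t) = l0 + v*t produces no such resonant growth,
# consistent with the paper's first result.
```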
Murphy, Cynthia F; Kenig, George A; Allen, David T; Laurent, Jean-Philippe; Dyer, David E
2003-12-01
Currently available data suggest that most of the energy and material consumption related to the production of an integrated circuit is due to the wafer fabrication process. The complexity of wafer manufacturing, requiring hundreds of steps that vary from product to product and from facility to facility and which change every few years, has discouraged the development of material, energy, and emission inventory modules for the purpose of insertion into life cycle assessments. To address this difficulty, a flexible, process-based system for estimating material requirements, energy requirements, and emissions in wafer fabrication has been developed. The method accounts for mass and energy use at the unit operation level. Parametric unit operation modules have been developed that can be used to predict changes in inventory as the result of changes in product design, equipment selection, or process flow. A case study of the application of the modules is given for energy consumption, but a similar methodology can be used for materials, individually or aggregated.
Phenomenon of low-alloy steel parametrization transformation at cyclic loading in low-cyclic area
NASA Astrophysics Data System (ADS)
Shipachev, A. M.; Nazarova, M. N.
2017-10-01
Measurements of hardness, magnetizing force, and the velocity of longitudinal ultrasonic waves in 09G2S steel samples at various numbers of operating cycles reveal a transformation of the distribution of these parameters from the normal law to a power-law distribution. This indicates that the behavior of the metal, as a complex system, conforms to the theory of self-organized criticality.
NASA Astrophysics Data System (ADS)
Jin, Yulin; Lu, Kuan; Hou, Lei; Chen, Yushu
2017-12-01
The proper orthogonal decomposition (POD) method is a principal and efficient tool for the order reduction of high-dimensional complex systems in many research fields. However, the robustness problem of this method remains unsolved, although several modified POD methods have been proposed to address it. In this paper, a new adaptive POD method called the interpolation Grassmann manifold (IGM) method is proposed to address the weakness of the local character of the interpolation tangent-space of Grassmann manifold (ITGM) method in a wider parametric region. The method is demonstrated on a nonlinear rotor system of 33 degrees of freedom (DOFs) with a pair of liquid-film bearings and a pedestal looseness fault. The motion region of the rotor system is divided into two parts: a simple motion region and a complex motion region. The adaptive POD method is compared with the ITGM method for large and small parameter spans in the two parametric regions to present the advantage of this method and the disadvantage of the ITGM method. Comparisons of the responses are used to verify the accuracy and robustness of the adaptive POD method, and the computational efficiency is also analyzed. As a result, the new adaptive POD method has strong robustness and high computational efficiency and accuracy over a wide range of parameters.
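The following is a minimal sketch of the POD step underlying both the ITGM and the adaptive method: snapshots of the full system are collected column-wise, and the leading left singular vectors give the reduced basis onto which the dynamics are projected. The snapshot matrix here is a random placeholder; for correlated real trajectories far fewer modes are needed.

```python
# A minimal sketch of POD model reduction via the SVD of a snapshot matrix.
import numpy as np

rng = np.random.default_rng(7)
n_dof, n_snap = 66, 400                        # e.g. 33 DOFs -> 66 states
snapshots = rng.normal(size=(n_dof, n_snap))   # placeholder trajectory data

u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1    # modes capturing 99.9% energy
basis = u[:, :k]

reduced = basis.T @ snapshots                  # project to k-dimensional model
print(k, reduced.shape)
# Interpolating such bases across operating parameters (on the Grassmann
# manifold) is what the ITGM and the proposed IGM methods do differently.
```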
A Critical Review of the Transport and Decay of Wake Vortices in Ground Effect
NASA Technical Reports Server (NTRS)
Sarpkaya, T.
2004-01-01
This slide presentation reviews the transport and decay of wake vortices in ground effect and cites the need for a physics-based parametric model. The encounter of a vortex with a solid body is always a complex event involving turbulence enhancement, unsteadiness, and very large gradients of velocity and pressure. Wake encounter in ground effect is the most dangerous of them all. The interaction of diverging, area-varying, and decaying aircraft wake vortices with the ground is very complex because both the vortices and the flow field generated by them are altered to accommodate the presence of the ground (where there is very little room to maneuver) and the background turbulent flow. Previous research regarding vortex models, wake vortex decay mechanisms, the time evolution of a wake vortex pair in ground effect, laminar flow in ground effect, and the interaction of the existing boundary layer with a convected vortex is reviewed. Additionally, numerical simulations, three-dimensional large-eddy simulations, a probabilistic two-phase wake vortex decay and transport model, and a vortex element method are discussed. What is required is the devising of physics-based, parametric models for the prediction of (operational) real-time response, mindful of the highly three-dimensional and unsteady structure of vortices, boundary layers, atmospheric thermodynamics, and convective weather phenomena. In creating such a model, LES and field data will be the most powerful tools.
Evaluating Cellular Polyfunctionality with a Novel Polyfunctionality Index
Larsen, Martin; Sauce, Delphine; Arnaud, Laurent; Fastenackels, Solène; Appay, Victor; Gorochov, Guy
2012-01-01
Functional evaluation of naturally occurring or vaccination-induced T cell responses in mice, men and monkeys has in recent years advanced from single-parameter (e.g. IFN-γ secretion) to much more complex multidimensional measurements. Co-secretion of multiple functional molecules (such as cytokines and chemokines) at the single-cell level is now measurable, due primarily to major advances in multiparametric flow cytometry. The very extensive and complex datasets generated by this technology raise the demand for proper analytical tools that enable the analysis of combinatorial functional properties of T cells, hence polyfunctionality. Presently, multidimensional functional measures are analysed either by evaluating all combinations of parameters individually or by summing the frequencies of combinations that include the same number of simultaneous functions. Often these evaluations are visualized as pie charts. Whereas pie charts effectively represent and compare average polyfunctionality profiles of particular T cell subsets or patient groups, they do not document the degree or variation of polyfunctionality within a group, nor do they allow more sophisticated statistical analysis. Here we propose a novel polyfunctionality index that numerically evaluates the degree and variation of polyfunctionality and enables comparative and correlative parametric and non-parametric statistical tests. Moreover, it allows the use of more advanced statistical approaches, such as cluster analysis. We believe that the polyfunctionality index will render polyfunctionality an appropriate end-point measure in future studies of T cell responsiveness. PMID:22860124
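The following is a minimal sketch of a polyfunctionality index of the kind proposed above: given the fraction F_i of cells co-expressing exactly i of n functions, a single number of the form sum_i F_i * (i/n)^q weights cells by how many functions they perform, with q tuning how strongly polyfunctional cells are favored. The exact published definition should be checked against the paper; the panel frequencies below are hypothetical.

```python
# A minimal sketch of a polyfunctionality index computation.
import numpy as np

def polyfunctionality_index(freqs, q=1.0):
    freqs = np.asarray(freqs, float)   # freqs[i-1] = fraction with i functions
    n = len(freqs)
    i = np.arange(1, n + 1)
    return np.sum(freqs * (i / n) ** q)

# Hypothetical 4-function panel: most responding cells are single-function.
print(polyfunctionality_index([0.60, 0.25, 0.10, 0.05], q=1.0))
# Unlike pie charts, per-donor index values admit standard parametric and
# non-parametric group comparisons and correlation analyses.
```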
Effect of quantum nuclear motion on hydrogen bonding
NASA Astrophysics Data System (ADS)
McKenzie, Ross H.; Bekker, Christiaan; Athokpam, Bijyalaxmi; Ramesh, Sai G.
2014-05-01
This work considers how the properties of hydrogen-bonded complexes, X-H⋯Y, are modified by the quantum motion of the shared proton. Using a simple two-diabatic-state model Hamiltonian, the analysis of the symmetric case, where the donor (X) and acceptor (Y) have the same proton affinity, is carried out. For quantitative comparisons, a parametrization specific to O-H⋯O complexes is used. The vibrational energy levels of the one-dimensional ground-state adiabatic potential of the model are used to make quantitative comparisons with a vast body of condensed-phase data, spanning a donor-acceptor separation (R) range of about 2.4-3.0 Å, i.e., from strong to weak hydrogen bonds. The position of the proton (which determines the X-H bond length) and its longitudinal vibrational frequency, along with the isotope effects in both, are described quantitatively. An analysis of the secondary geometric isotope effect, using a simple extension of the two-state model, yields improved agreement for the predicted variation of the frequency isotope effects with R. The role of bending modes is also considered: their quantum effects compete with those of the stretching mode for weak to moderate H-bond strengths. In spite of the economy of its parametrization, the model offers key insights into the defining features of H-bonds and semi-quantitatively captures several trends.
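The following is a minimal sketch of the final quantitative step described above: solving for the vibrational levels of the shared proton in a one-dimensional adiabatic potential by finite differences. A generic symmetric double well stands in for the model's actual ground-state potential, and the units and parameter values are illustrative only.

```python
# A minimal sketch of 1D vibrational levels in a double-well proton potential.
import numpy as np

hbar, m = 1.0, 1.0                       # illustrative units
x = np.linspace(-2.5, 2.5, 800)
dx = x[1] - x[0]
V = 20.0 * (x**2 - 1.0)**2               # placeholder double-well potential

# Kinetic energy via the standard three-point finite-difference Laplacian.
main = hbar**2 / (m * dx**2) + V
off = -hbar**2 / (2 * m * dx**2) * np.ones(len(x) - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

levels = np.linalg.eigvalsh(H)[:4]
print(levels)                            # the near-degenerate lowest pair
                                         # reflects tunneling through the barrier
# Replacing m by the deuteron mass and recomputing gives the H/D isotope
# effects in position and stretching frequency discussed above.
```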
Bayesian inversion using a geologically realistic and discrete model space
NASA Astrophysics Data System (ADS)
Jaeggli, C.; Julien, S.; Renard, P.
2017-12-01
Since the early days of groundwater modeling, inverse methods have played a crucial role. Many research and engineering groups aim to infer extensive knowledge of aquifer parameters from a sparse set of observations. Despite decades of dedicated research on this topic, several major issues remain to be solved. In the hydrogeological framework, one is often confronted with underground structures that present very sharp contrasts of geophysical properties. In particular, subsoil structures such as karst conduits, channels, faults, or lenses strongly influence the groundwater flow and transport behavior of the underground. For this reason it can be essential to identify their location and shape very precisely. Unfortunately, when inverse methods are specially trained to consider such complex features, their computational effort often becomes unaffordably high. The following work is an attempt to solve this dilemma. We present a new method that is, in some sense, a compromise between the ergodicity of Markov chain Monte Carlo (McMC) methods and the efficient handling of data by ensemble-based Kalman filters. The realistic and complex random fields are generated by a Multiple-Point Statistics (MPS) tool; nonetheless, the method is applicable with any conditional geostatistical simulation tool. Furthermore, the algorithm is independent of any parametrization, which becomes most important when two parametric systems are equivalent (permeability and resistivity, speed and slowness, etc.). When compared to two existing McMC schemes, the computational effort was divided by a factor of 12.
Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling
NASA Technical Reports Server (NTRS)
Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw
2005-01-01
The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.
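The following is a minimal sketch of the surrogate-based idea in ISLOCE: fit a cheap neural-network approximation to an expensive subsystem model, then let a system-level optimizer search over the surrogate. The quadratic "subsystem" is a stand-in for, e.g., the fuel-tank structural model; scikit-learn and SciPy are assumed available, and this is an illustration rather than the paper's implementation.

```python
# A minimal sketch of neural-network surrogate optimization.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

def expensive_subsystem(x):              # placeholder high-fidelity analysis
    return (x[..., 0] - 0.3) ** 2 + 2.0 * (x[..., 1] + 0.1) ** 2

rng = np.random.default_rng(8)
X = rng.uniform(-1, 1, size=(300, 2))    # sampled design points
y = expensive_subsystem(X)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000).fit(X, y)

res = minimize(lambda x: surrogate.predict(x[None, :])[0],
               x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print(res.x)                             # near (0.3, -0.1) if the fit is good
# The surrogate can retrain in the background as engineers update subsystem
# models, keeping optimizer turnaround compatible with live design sessions.
```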
A Statistician's View of Upcoming Grand Challenges
NASA Astrophysics Data System (ADS)
Meng, Xiao Li
2010-01-01
In this session we have seen some snapshots of the broad spectrum of challenges in this age of huge, complex, computer-intensive models, data, instruments, and questions. These challenges bridge astronomy at many wavelengths, basic physics, machine learning, and statistics. At one end of our spectrum, we think of 'compressing' the data with non-parametric methods. This raises the question of creating 'pseudo-replicas' of the data for uncertainty estimates. What would be involved in, e.g., the bootstrap and related methods? Somewhere in the middle are the non-parametric methods for encapsulating the uncertainty information. At the far end, we find more model-based approaches, with the physics model embedded in the likelihood and analysis. The other distinctive problem is the 'black-box' problem, where one has a complicated, e.g. fundamental-physics-based, computer code, or 'black box', and one needs to know how changing the parameters at input -- due to uncertainties of any kind -- will map to changes in the output. All of these connect to challenges in data complexity and computation speed. Dr. Meng will highlight ways to 'cut corners' with advanced computational techniques, such as Parallel Tempering and Equal Energy methods. As well, there are cautionary tales of running automated analysis on real data -- where "30 sigma" outliers due to data artifacts can be more common than the astrophysical event of interest.
Petri Nets with Fuzzy Logic (PNFL): Reverse Engineering and Parametrization
Küffner, Robert; Petri, Tobias; Windhager, Lukas; Zimmer, Ralf
2010-01-01
Background The recent DREAM4 blind assessment provided a particularly realistic and challenging setting for network reverse engineering methods. The in silico part of DREAM4 solicited the inference of cycle-rich gene regulatory networks from heterogeneous, noisy expression data including time courses as well as knockout, knockdown and multifactorial perturbations. Methodology and Principal Findings We inferred and parametrized simulation models based on Petri Nets with Fuzzy Logic (PNFL). This completely automated approach correctly reconstructed networks with cycles as well as oscillating network motifs. PNFL was evaluated as the best performer on DREAM4 in silico networks of size 10 with an area under the precision-recall curve (AUPR) of 81%. Besides topology, we inferred a range of additional mechanistic details with good reliability, e.g. distinguishing activation from inhibition as well as dependent from independent regulation. Our models also performed well on new experimental conditions such as double knockout mutations that were not included in the provided datasets. Conclusions The inference of biological networks substantially benefits from methods that are expressive enough to deal with diverse datasets in a unified way. At the same time, overly complex approaches could generate multiple different models that explain the data equally well. PNFL appears to strike the balance between expressive power and complexity. This also applies to the intuitive representation of PNFL models combining a straightforward graphical notation with colloquial fuzzy parameters. PMID:20862218
NASA Astrophysics Data System (ADS)
Canino, Lawrence S.; Shen, Tongye; McCammon, J. Andrew
2002-12-01
We extend the self-consistent pair contact probability method to the evaluation of the partition function for a protein complex at thermodynamic equilibrium. Specifically, we adapt the method for multichain models and introduce a parametrization for amino acid-specific pairwise interactions. This method is similar to the Gaussian network model but allows for the adjusting of the strengths of native state contacts. The method is first validated on a high resolution x-ray crystal structure of bovine Pancreatic Phospholipase A2 by comparing calculated B-factors with reported values. We then examine binding-induced changes in flexibility in protein-protein complexes, comparing computed results with those obtained from x-ray crystal structures and molecular dynamics simulations. In particular, we focus on the mouse acetylcholinesterase:fasciculin II and the human α-thrombin:thrombomodulin complexes.
A knotted complex scalar field for any knot
NASA Astrophysics Data System (ADS)
Bode, Benjamin; Dennis, Mark
Three-dimensional field configurations in which a privileged defect line is knotted or linked have experienced an upsurge in interest, with examples in fluid mechanics, quantum wavefunctions, optics, liquid crystals and skyrmions. We describe a constructive algorithm to write down complex scalar functions of three-dimensional real space with knotted nodal lines, using trigonometric parametrizations of braids. The construction is most natural for the family of lemniscate knots, which includes the torus knots and the figure-8 knot, but it extends to any knot or link. The specific forms of these functions allow various topological quantities associated with the field to be chosen, such as the helicity of a knotted flow field. We will describe some applications to physical systems such as those listed above. This work was supported by the Leverhulme Trust Programme Grant "Scientific Properties of Complex Knots".
Multi-Agent-Based Simulation of a Complex Ecosystem of Mental Health Care.
Kalton, Alan; Falconer, Erin; Docherty, John; Alevras, Dimitris; Brann, David; Johnson, Kyle
2016-02-01
This paper discusses the creation of an Agent-Based Simulation that modeled the introduction of care coordination capabilities into a complex system of care for patients with Serious and Persistent Mental Illness. The model describes the engagement between patients and the medical, social and criminal justice services they interact with in a complex ecosystem of care. We outline the challenges involved in developing the model, including process mapping and the collection and synthesis of data to support parametric estimates, and describe the controls built into the model to support analysis of potential changes to the system. We also describe the approach taken to calibrate the model to an observable level of system performance. Preliminary results from application of the simulation are provided to demonstrate how it can provide insights into potential improvements deriving from introduction of care coordination technology.
NASA Astrophysics Data System (ADS)
Zhang, Lei; Yang, Si-Gang; Wang, Xiao-Jian; Gou, Dou-Dou; Chen, Hong-Wei; Chen, Ming-Hua; Xie, Shi-Zhong
2014-01-01
We report the experimental demonstration of the optical parametric gain generation in the 1 μm regime based on a photonic crystal fiber (PCF) with a zero group velocity dispersion (GVD) wavelength of 1062 nm pumped by a homemade tunable picosecond mode-locked ytterbium-doped fiber laser. A broad parametric gain band is obtained by pumping the PCF in the anomalous GVD regime with a relatively low power. Two separated narrow parametric gain bands are observed by pumping the PCF in the normal GVD regime. The peak of the parametric gain profile can be tuned from 927 to 1038 nm and from 1099 to 1228 nm. This widely tunable parametric gain band can be used for a broad band optical parametric amplifier, large span wavelength conversion or a tunable optical parametric oscillator.
Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use
Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil
2013-01-01
The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital number histograms of image objects and to evaluate classification measures exploiting characteristic signatures of such histograms. Two histogram-matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California was used for assessing the utility of curve-matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram-matching classifiers consistently performed better than the one based on the standard nearest-neighbor-to-mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
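The abstract does not specify the curve-matching measure; one common choice for comparing normalized histograms is the chi-square distance, sketched below. The bin count, value range, and class names are assumptions for illustration.

```python
import numpy as np

def histogram(values, bins=32, value_range=(0, 255)):
    h, _ = np.histogram(values, bins=bins, range=value_range, density=True)
    return h

def chi2_distance(h1, h2, eps=1e-12):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def classify(object_pixels, class_histograms):
    """Assign an image-object to the class with the closest reference histogram."""
    h = histogram(object_pixels)
    return min(class_histograms, key=lambda c: chi2_distance(h, class_histograms[c]))

# toy usage: reference histograms built from training objects of two classes
rng = np.random.default_rng(0)
refs = {"water": histogram(rng.normal(60, 10, 5000)),
        "roof": histogram(rng.normal(180, 25, 5000))}
print(classify(rng.normal(58, 12, 400), refs))   # -> "water"
```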
NASA Astrophysics Data System (ADS)
Giuliani, Matteo; Mason, Emanuele; Castelletti, Andrea; Pianosi, Francesca
2014-05-01
The optimal operation of water resources systems is a wide and challenging problem due to non-linearities in the model and the objectives, a high-dimensional state-control space, and strong uncertainties in the hydroclimatic regimes. The application of classical optimization techniques (e.g., SDP, Q-learning, gradient descent-based algorithms) is strongly limited by the dimensionality of the system and by the presence of multiple, conflicting objectives. This study presents a novel approach which combines Direct Policy Search (DPS) and Multi-Objective Evolutionary Algorithms (MOEAs) to solve high-dimensional state and control space problems involving multiple objectives. DPS, also known as parameterization-simulation-optimization in the water resources literature, is a simulation-based approach where the reservoir operating policy is first parameterized within a given family of functions and, then, the parameters are optimized with respect to the objectives of the management problem. The selection of a suitable class of functions to which the operating policy belongs is a key step, as it might restrict the search for the optimal policy to a subspace of the decision space that does not include the optimal solution. In the water reservoir literature, a number of classes have been proposed. However, many of these rules are based largely on empirical or experimental successes, and they were designed mostly via simulation and for single-purpose reservoirs. In a multi-objective context, similar rules cannot easily be inferred from experience, and the use of universal function approximators is generally preferred. In this work, we comparatively analyze two of the most common universal approximators, artificial neural networks (ANN) and radial basis functions (RBF), under different problem settings to estimate their scalability and flexibility in dealing with increasingly complex problems. The multi-purpose HoaBinh water reservoir in Vietnam, accounting for hydropower production and flood control, is used as a case study. Preliminary results show that the RBF policy parametrization is more effective than the ANN one. In particular, the approximated Pareto front obtained with RBF control policies successfully explores the full tradeoff space between the two conflicting objectives, while most of the ANN solutions turn out to be Pareto-dominated by the RBF ones.
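As a hedged illustration of what an RBF policy parametrization looks like in direct policy search, the sketch below maps a normalized reservoir state to a release decision; a MOEA would then search over the centers, widths, and weights by simulating the reservoir under each candidate policy. The state variables and dimensions are hypothetical.

```python
import numpy as np

def rbf_policy(state, centers, widths, weights):
    """Release decision u in [0, 1] as a weighted sum of Gaussian radial
    basis functions of the normalized state, e.g. [storage, day_of_year]."""
    phi = np.exp(-np.sum(((state - centers) / widths) ** 2, axis=1))
    return float(np.clip(weights @ phi, 0.0, 1.0))

# a policy with 4 basis functions over a 2-dimensional state
rng = np.random.default_rng(0)
centers = rng.random((4, 2))
widths = 0.5 * np.ones((4, 2))
weights = rng.random(4)
u = rbf_policy(np.array([0.8, 0.25]), centers, widths, weights)
```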
Scenario based optimization of a container vessel with respect to its projected operating conditions
NASA Astrophysics Data System (ADS)
Wagner, Jonas; Binkowski, Eva; Bronsart, Robert
2014-06-01
In this paper the scenario-based optimization of the bulbous bow of the KRISO Container Ship (KCS) is presented. The optimization of the parametrically modeled vessel is based on a statistically developed operational profile, generated from noon-to-noon reports of a comparable 3600 TEU container vessel and specific development functions representing the growth of the global economy during the vessel's service time. In order to consider uncertainties, statistical fluctuations are added. An analysis of these data led to a number of most probable upcoming operating conditions (OC) the vessel will encounter in the future. According to their respective likelihoods, an objective function for the evaluation of the optimal design variant of the vessel is derived and implemented within the parametric optimization workbench FRIENDSHIP Framework. This evaluation is done with respect to the vessel's calculated effective power, computed with a potential flow code. The evaluation shows that the use of scenarios within the optimization process has a strong influence on the hull form.
Clinical Alarms in intensive care: implications of alarm fatigue for the safety of patients1
Bridi, Adriana Carla; Louro, Thiago Quinellato; da Silva, Roberto Carlos Lyra
2014-01-01
OBJECTIVES: to identify the number of electro-medical pieces of equipment in a coronary care unit, characterize their types, and analyze implications for the safety of patients from the perspective of alarm fatigue. METHOD: this quantitative, observational, descriptive, non-participatory study was conducted in a coronary care unit of a cardiology hospital with 170 beds. RESULTS: a total of 426 alarms were recorded in 40 hours of observation: 227 were triggered by multi-parametric monitors and 199 were triggered by other equipment (infusion pumps, dialysis pumps, mechanical ventilators, and intra-aortic balloons); that is an average of 10.6 alarms per hour. CONCLUSION: the results reinforce the importance of properly configuring physiological variables, the volume and parameters of alarms of multi-parametric monitors within the routine of intensive care units. The alarms of equipment intended to protect patients have increased noise within the unit, the level of distraction and interruptions in the workflow, leading to a false sense of security. PMID:25591100
A Conceptual Wing Flutter Analysis Tool for Systems Analysis and Parametric Design Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2003-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate flutter instability boundaries of a typical wing, when detailed structural and aerodynamic data are not available. Effects of change in key flutter parameters can also be estimated in order to guide the conceptual design. This user-friendly software was developed using MathCad and Matlab codes. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely the Regier number and the Flutter number, with normalization factors based on wing torsion stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch-inertia radius of gyration. These parametric plots were compiled in a Chance-Vought Corporation report from a database of past experiments and wind tunnel test results. An example was presented for conceptual flutter analysis of the outer wing of a Blended-Wing-Body aircraft.
NASA Astrophysics Data System (ADS)
Avila, Edward R.
The Electric Insertion Transfer Experiment (ELITE) is an Air Force Advanced Technology Transition Demonstration which is being executed as a cooperative Research and Development Agreement between the Phillips Lab and TRW. The objective is to build, test, and fly a solar-electric orbit transfer and orbit maneuvering vehicle, as a precursor to an operational electric orbit transfer vehicle (EOTV). This paper surveys some of the analysis tools used to do parametric studies and discusses the study results. The primary analysis tool was the Electric Vehicle Analyzer (EVA) developed by the Phillips Lab and modified by The Aerospace Corporation. It uses a simple orbit averaging approach to model low-thrust transfer performance, and runs in a PC environment. The assumptions used in deriving the EVA math model are presented. This tool and others surveyed were used to size the solar array power required for the spacecraft, and develop a baseline mission profile that meets the requirements of the ELITE mission.
System Study: Technology Assessment and Prioritizing
NASA Technical Reports Server (NTRS)
2005-01-01
The objective of this NASA funded project is to assess and prioritize advanced technologies required to achieve the goals for an "Intelligent Propulsion System" through collaboration among GEAE, NASA, and Georgia Tech. Key GEAE deliverables are parametric response surface equations (RSE's) relating technology features to system benefits (sfc, weight, fuel burn, design range, acoustics, emissions, etc.) and listings of a Technology Impact Matrix (TIM) with benefits, debits, and approximate readiness status. The TIM has been completed for GEAE and NASA proposed technologies. The combined GEAE and NASA TIM input requirement is shown in Table 1. In the course of building the RSE's and the TIM, significant improvements in parametric technology modeling and RSE accuracy were accomplished. GEAE has also done a preliminary ranking of the technologies using technology evaluation tools developed by Georgia Tech and GEAE USA. System-level impact was assessed by combining beneficial technologies with minimum conflict among the various system figures of merit to evaluate their overall benefits to the system. The shortfalls and issues with modeling the proposed technologies are identified, and recommendations for future work are also proposed.
A probabilistic strategy for parametric catastrophe insurance
NASA Astrophysics Data System (ADS)
Figueiredo, Rui; Martina, Mario; Stephenson, David; Youngman, Benjamin
2017-04-01
Economic losses due to natural hazards have shown an upward trend since 1980, which is expected to continue. Recent years have seen a growing worldwide commitment towards the reduction of disaster losses. This requires effective management of disaster risk at all levels, a part of which involves reducing financial vulnerability to disasters ex-ante, ensuring that necessary resources will be available following such events. One way to achieve this is through risk transfer instruments. These can be based on different types of triggers, which determine the conditions under which payouts are made after an event. This study focuses on parametric triggers, where payouts are determined by the occurrence of an event exceeding specified physical parameters at a given location, or at multiple locations, or over a region. This type of product offers a number of important advantages, and its adoption is increasing. The main drawback of parametric triggers is their susceptibility to basis risk, which arises when there is a mismatch between triggered payouts and the occurrence of loss events. This is unavoidable in said programmes, as their calibration is based on models containing a number of different sources of uncertainty. Thus, a deterministic definition of the loss event triggering parameters appears flawed. However, often for simplicity, this is the way in which most parametric models tend to be developed. This study therefore presents an innovative probabilistic strategy for parametric catastrophe insurance. It is advantageous as it recognizes uncertainties and minimizes basis risk while maintaining a simple and transparent procedure. A logistic regression model is constructed here to represent the occurrence of loss events based on certain loss index variables, obtained through the transformation of input environmental variables. Flood-related losses due to rainfall are studied. The resulting model is able, for any given day, to issue probabilities of occurrence of loss events. Due to the nature of parametric programmes, it is still necessary to clearly define when a payout is due or not, and so a decision threshold probability above which a loss event is considered to occur must be set, effectively converting the issued probabilities into deterministic binary outcomes. Model skill and value are evaluated over the range of possible threshold probabilities, with the objective of defining the optimal one. The predictive ability of the model is assessed. In terms of value assessment, a decision model is proposed, allowing users to quantify monetarily their expected expenses when different combinations of model event triggering and actual event occurrence take place, directly tackling the problem of basis risk.
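A minimal sketch of the modeling step described - a logistic regression issuing loss-event probabilities from a loss-index variable, then converted to binary payouts at a candidate decision threshold - using synthetic data (variable names and numbers are assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
rain3d = rng.gamma(2.0, 10.0, size=2000)                     # loss-index variable
loss_event = (rain3d + rng.normal(0, 8, 2000) > 45).astype(int)

model = LogisticRegression().fit(rain3d.reshape(-1, 1), loss_event)
p = model.predict_proba(rain3d.reshape(-1, 1))[:, 1]         # daily probabilities

# convert probabilities to binary payouts at candidate thresholds and score them
for tau in (0.3, 0.5, 0.7):
    payout = p >= tau
    mismatch = np.mean(payout != loss_event)   # crude proxy for basis risk
    print(tau, round(mismatch, 3))
```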
Rhinencephalon changes in tuberous sclerosis complex.
Manara, Renzo; Brotto, Davide; Bugin, Samuela; Pelizza, Maria Federica; Sartori, Stefano; Nosadini, Margherita; Azzolini, Sara; Iaconetta, Giorgio; Parazzini, Cecilia; Murgia, Alessandra; Peron, Angela; Canevini, Paola; Labriola, Francesca; Vignoli, Aglaia; Toldo, Irene
2018-06-17
Despite complex olfactory bulb embryogenesis, its developmental abnormalities in tuberous sclerosis complex (TSC) have been poorly investigated. Brain MRIs of 110 TSC patients (mean age 11.5 years; age range 0.5-38 years; 52 female; 26 TSC1, 68 TSC2, 8 without mutation identified in TSC1 or TSC2, 8 not tested) were retrospectively evaluated. Signal and morphological abnormalities consistent with olfactory bulb hypo/aplasia or with olfactory bulb hamartomas were recorded. Cortical tuber number was visually assessed and a neurological severity score was obtained. Patients with and without rhinencephalon abnormalities were compared using appropriate parametric and non-parametric tests. Eight of 110 (7.2%) TSC patients presented rhinencephalon MRI changes encompassing olfactory bulb bilateral aplasia (2/110), bilateral hypoplasia (2/110), unilateral hypoplasia (1/110), unilateral hamartoma (2/110), and bilateral hamartomas (1/110); olfactory bulb hypo/aplasia always displayed ipsilateral olfactory sulcus hypoplasia, while no TSC patient harboring rhinencephalon hamartomas had concomitant forebrain sulcation abnormalities. None of the patients showed overt olfactory deficits or hypogonadism, though young age and poor compliance hampered a proper evaluation in most cases. TSC patients with rhinencephalon changes had more cortical tubers (47 ± 29.1 vs 26.2 ± 19.6; p = 0.006) but did not differ in clinical severity (p = 0.45) compared to the other patients of the sample. Olfactory bulb and/or forebrain changes are not rare among TSC subjects. Future studies investigating clinical consequences in older subjects (anosmia, gonadal development, etc.) will define whether rhinencephalon changes are simply an imaging feature among the constellation of TSC-related brain changes or a feature to be searched for possible implications in the management of TSC subjects.
NASA Astrophysics Data System (ADS)
Oesterle, Jonathan; Amodeo, Lionel
2018-06-01
The current competitive situation increases the importance of realistically estimating product costs during the early phases of product and assembly line planning projects. In this article, several multi-objective algorithms using different dominance rules are proposed to solve the problem of selecting the most effective combination of products and assembly lines. The list of developed algorithms includes variants of ant colony algorithms, evolutionary algorithms and imperialist competitive algorithms. The performance of each algorithm and dominance rule is analysed by five multi-objective quality indicators and fifty problem instances. The algorithms and dominance rules are ranked using a non-parametric statistical test.
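All of the listed metaheuristics rest on some dominance rule for comparing candidate solutions; the classical Pareto rule, for minimization, can be sketched as follows (the paper compares several alternative rules, which this simple version does not reproduce):

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no worse in
    every objective and strictly better in at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated(points):
    """Return the Pareto-optimal subset of a list of objective vectors."""
    pts = [np.asarray(p) for p in points]
    return [p for p in pts if not any(dominates(q, p) for q in pts)]

print(nondominated([(1, 5), (2, 2), (3, 1), (4, 4)]))  # (4, 4) is dominated
```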
NASA Technical Reports Server (NTRS)
Feinstein, S. P.; Girard, M. A.
1979-01-01
An automated technique for measuring particle diameters and their spatial coordinates from holographic reconstructions is being developed. Preliminary tests on actual cold-flow holograms of impinging jets indicate that a suitable discriminant algorithm consists of a Fourier-Gaussian noise filter and a contour thresholding technique. This process identifies circular as well as noncircular objects. The desired objects (in this case, circular or possibly ellipsoidal) are then selected automatically from the above set and stored with their parametric representations. From these data, drop-size distributions as a function of spatial coordinates can be generated, and combustion effects due to hardware and/or physical variables studied.
Hyperbolic and semi-parametric models in finance
NASA Astrophysics Data System (ADS)
Bingham, N. H.; Kiesel, Rüdiger
2001-02-01
The benchmark Black-Scholes-Merton model of mathematical finance is parametric, based on the normal/Gaussian distribution. Its principal parametric competitor, the hyperbolic model of Barndorff-Nielsen, Eberlein and others, is briefly discussed. Our main theme is the use of semi-parametric models, incorporating the mean vector and covariance matrix as in the Markowitz approach, plus a non-parametric part, a scalar function incorporating features such as tail-decay. Implementation is also briefly discussed.
The role of banks in the Brazilian interbank market: Does bank type matter?
NASA Astrophysics Data System (ADS)
Cajueiro, Daniel O.; Tabak, Benjamin M.
2008-12-01
This paper analyzes the Brazilian interbank network structure using a complex network-based approach. Results suggest weak evidence of community structure, high heterogeneity of the network, and that this market is characterized by money centers having exposures to many banks. Furthermore, we go beyond the structure of the network, using information about the characteristics of the nodes and a non-parametric test in order to understand the role of the banks in the interbank market.
A Weak Constraint 4D-Var Assimilation System for the Navy Coastal Model Using the Representer Method
2013-01-01
… with the help of the Parametric Fortran Compiler (PFC; Erwig et al. 2007). Some general circulation models of the complexity of NCOM have seen similar … the MIT general circulation model (MITgcm; Marotzke et al. 1999), also used in the ECCO consortium assimilation experiments (Stammer et al. 2002) … using the inverse Regional Ocean Modeling System (IROMS; Di Lorenzo et al. 2007) with horizontal resolutions of 10 and 30 km. The CCS is a large …
2007-09-01
… also relatively easy to change the wind tunnel model to allow detailed parametric effects to be investigated. The main disadvantage of wind tunnel … as Magnus force and moment coefficients are difficult to obtain in a wind tunnel and require a complex physical wind tunnel model. Over the past … (7) The terms containing C_YPA constitute the Magnus air load acting at the Magnus center of pressure, while the terms containing C_X0, C_X2, and C_NA …
What can music tell us about social interaction?
D'Ausilio, Alessandro; Novembre, Giacomo; Fadiga, Luciano; Keller, Peter E
2015-03-01
Humans are innately social creatures, but cognitive neuroscience, that has traditionally focused on individual brains, is only now beginning to investigate social cognition through realistic interpersonal interaction. Music provides an ideal domain for doing so because it offers a promising solution for balancing the trade-off between ecological validity and experimental control when testing cognitive and brain functions. Musical ensembles constitute a microcosm that provides a platform for parametrically modeling the complexity of human social interaction. Copyright © 2015 Elsevier Ltd. All rights reserved.
Chaotic Motions in the Real Fuzzy Electronic Circuits (Preprint)
2012-12-01
… the research field of secure communications, the original source should be blended with other complex signals. Chaotic signals are one of the good … blending of the linear system models. Consider a continuous-time nonlinear dynamic system as follows: Rule i: IF x_1(t) is M_i1 and … x_n(t) is M_in, THEN …
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1995-01-01
Parametric cost analysis is a mathematical approach to estimating cost. Parametric cost analysis uses non-cost parameters, such as quality characteristics, to estimate the cost to bring forth, sustain, and retire a product. This paper reviews parametric cost analysis and shows how it can be used within the cost deployment process.
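A typical ingredient of parametric cost analysis is a cost estimating relationship (CER) fitted to historical data, e.g. a power law cost = a * weight^b estimated in log-log space. The sketch below uses invented numbers purely for illustration:

```python
import numpy as np

# weight (kg) and historical cost (M$) for analogous components (synthetic)
weight = np.array([120.0, 250.0, 400.0, 610.0, 980.0])
cost = np.array([2.1, 3.8, 5.5, 7.6, 11.0])

# fit cost = a * weight^b by linear least squares in log-log space
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)
estimate = a * 500.0 ** b   # cost estimate for a hypothetical 500 kg design
print(f"cost ~= {a:.3f} * W^{b:.3f}; estimate at 500 kg: {estimate:.2f} M$")
```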
Vasilyev, M; Choi, S K; Kumar, P; D'Ariano, G M
1998-09-01
Photon-number distributions for parametric fluorescence from a nondegenerate optical parametric amplifier are measured with a novel self-homodyne technique. These distributions exhibit the thermal-state character predicted by theory. However, a difference between the fluorescence gain and the signal gain of the parametric amplifier is observed. We attribute this difference to a change in the signal-beam profile during the traveling-wave pulsed amplification process.
A Cartesian parametrization for the numerical analysis of material instability
Mota, Alejandro; Chen, Qiushi; Foulk, III, James W.; ...
2016-02-25
We examine four parametrizations of the unit sphere in the context of material stability analysis by means of the singularity of the acoustic tensor. We then propose a Cartesian parametrization for vectors that lie on a cube of side length two and use these vectors in lieu of unit normals to test for the loss of the ellipticity condition. This parametrization is then used to construct a tensor akin to the acoustic tensor. It is shown that both of these tensors become singular at the same time and in the same planes in the presence of a material instability. Furthermore, the performance of the Cartesian parametrization is compared against the other parametrizations, with the results of these comparisons showing that in general, the Cartesian parametrization is more robust and more numerically efficient than the others.
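A sketch of the idea, under the usual definition of the acoustic tensor Q_ik = C_ijkl n_j n_l: since det Q(λn) = λ^6 det Q(n) for λ > 0, scanning vectors on the faces of the cube [-1, 1]^3 detects a sign change of the determinant just as scanning unit normals does, without trigonometric parametrizations. The grid resolution and tensor layout are assumptions:

```python
import numpy as np

def acoustic_tensor(C, n):
    """Q_ik = C_ijkl n_j n_l for a fourth-order elasticity tensor C (3x3x3x3)."""
    return np.einsum("ijkl,j,l->ik", C, n, n)

def min_det_on_cube(C, m=25):
    """Scan vectors on the faces of the cube [-1, 1]^3 in lieu of unit
    normals; ellipticity is lost when det Q reaches zero (or changes sign)
    for some direction."""
    s = np.linspace(-1.0, 1.0, m)
    best = np.inf
    for a in s:
        for b in s:
            for face in ([1.0, a, b], [a, 1.0, b], [a, b, 1.0]):
                n = np.asarray(face)
                best = min(best, np.linalg.det(acoustic_tensor(C, n)))
    return best
```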
An interactive local flattening operator to support digital investigations on artwork surfaces.
Pietroni, Nico; Massimiliano, Corsini; Cignoni, Paolo; Scopigno, Roberto
2011-12-01
Analyzing either high-frequency shape detail or any other 2D fields (scalar or vector) embedded over a 3D geometry is a complex task, since detaching the detail from the overall shape can be tricky. An alternative approach is to move to the 2D space, reducing shape reasoning to easier image processing techniques. In this paper we propose a novel framework for the analysis of 2D information distributed over 3D geometry, based on a locally smooth parametrization technique that allows us to treat local 3D data in terms of image content. The proposed approach has been implemented as a sketch-based system that allows the user to design, with a few gestures, a set of (possibly overlapping) parametrizations of rectangular portions of the surface. We demonstrate that, due to the locality of the parametrization, the distortion is under an acceptable threshold, while discontinuities can be avoided since the parametrized geometry is always homeomorphic to a disk. We show the effectiveness of the proposed technique in solving specific Cultural Heritage (CH) tasks: the analysis of chisel marks over the surface of an unfinished sculpture and the local comparison of multiple photographs mapped over the surface of an artwork. For this very difficult task, we believe that our framework and the corresponding tool are the first steps toward a computer-based shape reasoning system, able to support CH scholars with a medium they are more used to. © 2011 IEEE
Ji, Jiadong; He, Di; Feng, Yang; He, Yong; Xue, Fuzhong; Xie, Lei
2017-10-01
A complex disease is usually driven by a number of genes interwoven into networks, rather than a single gene product. Network comparison or differential network analysis has become an important means of revealing the underlying mechanism of pathogenesis and identifying clinical biomarkers for disease classification. Most studies, however, are limited to network correlations that mainly capture the linear relationship among genes, or rely on the assumption of a parametric probability distribution of gene measurements. They are restrictive in real application. We propose a new Joint density based non-parametric Differential Interaction Network Analysis and Classification (JDINAC) method to identify differential interaction patterns of network activation between two groups. At the same time, JDINAC uses the network biomarkers to build a classification model. The novelty of JDINAC lies in its potential to capture non-linear relations between molecular interactions using high-dimensional sparse data as well as to adjust confounding factors, without the need of the assumption of a parametric probability distribution of gene measurements. Simulation studies demonstrate that JDINAC provides more accurate differential network estimation and lower classification error than that achieved by other state-of-the-art methods. We apply JDINAC to a Breast Invasive Carcinoma dataset, which includes 114 patients who have both tumor and matched normal samples. The hub genes and differential interaction patterns identified were consistent with existing experimental studies. Furthermore, JDINAC discriminated the tumor and normal sample with high accuracy by virtue of the identified biomarkers. JDINAC provides a general framework for feature selection and classification using high-dimensional sparse omics data. R scripts available at https://github.com/jijiadong/JDINAC. lxie@iscb.org. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
NASA Technical Reports Server (NTRS)
Olds, John Robert; Walberg, Gerald D.
1993-01-01
Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design-of-experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium-sized payloads into low earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of applying Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. Near-optimum minimum dry weight solutions are determined for the vehicle. A summary and evaluation of the various parametric MDO methods employed in the research are included. Recommendations for additional research are provided.
Bantis, Leonidas E; Nakas, Christos T; Reiser, Benjamin; Myall, Daniel; Dalrymple-Alford, John C
2017-06-01
The three-class approach is used for progressive disorders when clinicians and researchers want to diagnose or classify subjects as members of one of three ordered categories based on a continuous diagnostic marker. The decision thresholds or optimal cut-off points required for this classification are often chosen to maximize the generalized Youden index (Nakas et al., Stat Med 2013; 32: 995-1003). The effectiveness of these chosen cut-off points can be evaluated by estimating their corresponding true class fractions and their associated confidence regions. Recently, in the two-class case, parametric and non-parametric methods were investigated for the construction of confidence regions for the pair of the Youden-index-based optimal sensitivity and specificity fractions that can take into account the correlation introduced between sensitivity and specificity when the optimal cut-off point is estimated from the data (Bantis et al., Biomet 2014; 70: 212-223). A parametric approach based on the Box-Cox transformation to normality often works well while for markers having more complex distributions a non-parametric procedure using logspline density estimation can be used instead. The true class fractions that correspond to the optimal cut-off points estimated by the generalized Youden index are correlated similarly to the two-class case. In this article, we generalize these methods to the three- and to the general k-class case which involves the classification of subjects into three or more ordered categories, where ROC surface or ROC manifold methodology, respectively, is typically employed for the evaluation of the discriminatory capacity of a diagnostic marker. We obtain three- and multi-dimensional joint confidence regions for the optimal true class fractions. We illustrate this with an application to the Trail Making Test Part A that has been used to characterize cognitive impairment in patients with Parkinson's disease.
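In the three-class case, the generalized Youden index picks the cut-off pair (c1, c2) maximizing the sum of the three true class fractions; a brute-force sketch (assuming the marker tends to increase across the ordered classes):

```python
import numpy as np

def generalized_youden(x1, x2, x3, grid=200):
    """Grid search for the two cut-offs maximizing TCF1 + TCF2 + TCF3 for
    three ordered classes with marker samples x1, x2, x3."""
    cs = np.linspace(min(x1.min(), x2.min(), x3.min()),
                     max(x1.max(), x2.max(), x3.max()), grid)
    best = (-np.inf, None, None)
    for i, c1 in enumerate(cs):
        for c2 in cs[i + 1:]:
            j = (np.mean(x1 <= c1)                     # TCF1
                 + np.mean((x2 > c1) & (x2 <= c2))     # TCF2
                 + np.mean(x3 > c2))                   # TCF3
            if j > best[0]:
                best = (j, c1, c2)
    return best

rng = np.random.default_rng(1)
print(generalized_youden(rng.normal(0, 1, 300),
                         rng.normal(1.5, 1, 300),
                         rng.normal(3, 1, 300)))
```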
Local kernel nonparametric discriminant analysis for adaptive extraction of complex structures
NASA Astrophysics Data System (ADS)
Li, Quanbao; Wei, Fajie; Zhou, Shenghan
2017-05-01
The linear discriminant analysis (LDA) is one of the most popular means of linear feature extraction. It usually performs well when the global data structure is consistent with the local data structure. Other frequently used approaches to feature extraction usually require linearity, independence, or large-sample conditions. However, in real-world applications, these assumptions are not always satisfied or cannot be tested. In this paper, we introduce an adaptive method, local kernel nonparametric discriminant analysis (LKNDA), which integrates conventional discriminant analysis with nonparametric statistics. LKNDA is adept at identifying both complex nonlinear structures and ad hoc rules. Six simulation cases demonstrate that LKNDA has the advantages of both parametric and nonparametric algorithms and higher classification accuracy. A quartic unilateral kernel function may provide better robustness of prediction than other functions. LKNDA gives an alternative solution for discriminant cases of complex nonlinear feature extraction or unknown feature extraction. Finally, the application of LKNDA to the complex feature extraction of financial market activities is proposed.
Reduction of a linear complex model for respiratory system during Airflow Interruption.
Jablonski, Ireneusz; Mroczka, Janusz
2010-01-01
The paper presents a methodology for reducing a complex model to its simpler version - an identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of the interrupter experiment. The final result - the reduced analog for the interrupter technique - is especially worthy of notice, as it fills a major gap in occlusional measurements, which typically use simple, one- or two-element physical representations. The proposed reduced electrical circuit, being a structural combination of resistive, inertial and elastic properties, can be perceived as a candidate for reliable reconstruction and quantification (in the time and frequency domains) of the dynamical behavior of the respiratory system in response to a quasi-step excitation by valve closure.
NASA Astrophysics Data System (ADS)
Nordebo, Sven; Dalarsson, Mariana; Khodadad, Davood; Müller, Beat; Waldmann, Andreas D.; Becher, Tobias; Frerichs, Inez; Sophocleous, Louiza; Sjöberg, Daniel; Seifnaraghi, Nima; Bayford, Richard
2018-05-01
Classical homogenization theory based on the Hashin–Shtrikman coated ellipsoids is used to model the changes in the complex valued conductivity (or admittivity) of a lung during tidal breathing. Here, the lung is modeled as a two-phase composite material where the alveolar air-filling corresponds to the inclusion phase. The theory predicts a linear relationship between the real and the imaginary parts of the change in the complex valued conductivity of a lung during tidal breathing, and where the loss cotangent of the change is approximately the same as of the effective background conductivity and hence easy to estimate. The theory is illustrated with numerical examples based on realistic parameter values and frequency ranges used with electrical impedance tomography (EIT). The theory may be potentially useful for imaging and clinical evaluations in connection with lung EIT for respiratory management and control.
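For spherical air inclusions, the Hashin-Shtrikman coated-sphere assemblage coincides with the Maxwell-Garnett mixing rule, which is enough to reproduce the predicted near-linear Re/Im relationship numerically. The tissue and air admittivity values and the frequency below are rough assumptions for illustration, not the paper's parameters:

```python
import numpy as np

eps0 = 8.854e-12
f_hz = 100e3   # a typical EIT working frequency (assumed)
sigma_b = 0.3 + 1j * 2 * np.pi * f_hz * eps0 * 50.0    # background tissue (assumed)
sigma_air = 1e-9 + 1j * 2 * np.pi * f_hz * eps0 * 1.0  # air inclusions

def maxwell_garnett(sb, si, f):
    """Effective admittivity of spherical inclusions at volume fraction f."""
    beta = (si - sb) / (si + 2 * sb)
    return sb * (1 + 2 * f * beta) / (1 - f * beta)

f_air = np.linspace(0.3, 0.6, 7)   # air filling fraction over a tidal cycle
d = (maxwell_garnett(sigma_b, sigma_air, f_air)
     - maxwell_garnett(sigma_b, sigma_air, f_air[0]))
print(np.polyfit(d.real[1:], d.imag[1:], 1))   # near-linear Re/Im relationship
```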
Caie, Peter D; Harrison, David J
2016-01-01
The field of pathology is rapidly transforming from a semiquantitative and empirical science toward a big data discipline. Large data sets from across multiple omics fields may now be extracted from a patient's tissue sample. Tissue is, however, complex, heterogeneous, and prone to artifact. A reductionist view of tissue and disease progression, which does not take this complexity into account, may lead to single biomarkers failing in clinical trials. The integration of standardized multi-omics big data and the retention of valuable information on spatial heterogeneity are imperative to model complex disease mechanisms. Mathematical modeling through systems pathology approaches is the ideal medium to distill the significant information from these large, multi-parametric, and hierarchical data sets. Systems pathology may also predict the dynamical response of disease progression or response to therapy regimens from a static tissue sample. Next-generation pathology will incorporate big data with systems medicine in order to personalize clinical practice for both prognostic and predictive patient care.
Rajwa, Bartek; Wallace, Paul K.; Griffiths, Elizabeth A.; Dundar, Murat
2017-01-01
Objective Flow cytometry (FC) is a widely acknowledged technology in diagnosis of acute myeloid leukemia (AML) and has been indispensable in determining progression of the disease. Although FC plays a key role as a post-therapy prognosticator and evaluator of therapeutic efficacy, the manual analysis of cytometry data is a barrier to optimization of reproducibility and objectivity. This study investigates the utility of our recently introduced non-parametric Bayesian framework in accurately predicting the direction of change in disease progression in AML patients using FC data. Methods The highly flexible non-parametric Bayesian model based on the infinite mixture of infinite Gaussian mixtures is used for jointly modeling data from multiple FC samples to automatically identify functionally distinct cell populations and their local realizations. Phenotype vectors are obtained by characterizing each sample by the proportions of recovered cell populations, which are in turn used to predict the direction of change in disease progression for each patient. Results We used 200 diseased and non-diseased immunophenotypic panels for training and tested the system with 36 additional AML cases collected at multiple time points. The proposed framework identified the change in direction of disease progression with accuracies of 90% (9 out of 10) for relapsing cases and 100% (26 out of 26) for the remaining cases. Conclusions We believe that these promising results are an important first step towards the development of automated predictive systems for disease monitoring and continuous response evaluation. Significance Automated measurement and monitoring of therapeutic response is critical not only for objective evaluation of disease status prognosis but also for timely assessment of treatment strategies. PMID:27416585
Characterization of time series via Rényi complexity-entropy curves
NASA Astrophysics Data System (ADS)
Jauregui, M.; Zunino, L.; Lenzi, E. K.; Mendes, R. S.; Ribeiro, H. V.
2018-05-01
One of the most useful tools for distinguishing between chaotic and stochastic time series is the so-called complexity-entropy causality plane. This diagram involves two complexity measures: the Shannon entropy and the statistical complexity. Recently, this idea has been generalized by considering the Tsallis monoparametric generalization of the Shannon entropy, yielding complexity-entropy curves. These curves have proven to enhance the discrimination among different time series related to stochastic and chaotic processes of numerical and experimental nature. Here we further explore these complexity-entropy curves in the context of the Rényi entropy, which is another monoparametric generalization of the Shannon entropy. By combining the Rényi entropy with the proper generalization of the statistical complexity, we associate a parametric curve (the Rényi complexity-entropy curve) with a given time series. We explore this approach in a series of numerical and experimental applications, demonstrating the usefulness of this new technique for time series analysis. We show that the Rényi complexity-entropy curves enable the differentiation among time series of chaotic, stochastic, and periodic nature. In particular, time series of stochastic nature are associated with curves displaying positive curvature in a neighborhood of their initial points, whereas curves related to chaotic phenomena have a negative curvature; finally, periodic time series are represented by vertical straight lines.
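A sketch of the construction: estimate the Bandt-Pompe ordinal-pattern distribution of the series, then trace (normalized Rényi entropy, complexity) as the order alpha varies. For brevity the disequilibrium below is a Jensen-Shannon-style divergence built from the Rényi entropy, a simplification of the paper's properly normalized complexity measure; the embedding dimension is an arbitrary choice:

```python
import itertools
import numpy as np

def ordinal_probs(x, d=4):
    """Bandt-Pompe distribution of ordinal patterns of length d."""
    counts = {p: 0 for p in itertools.permutations(range(d))}
    for i in range(len(x) - d + 1):
        counts[tuple(np.argsort(x[i:i + d]))] += 1
    p = np.array(list(counts.values()), float)
    return p / p.sum()

def renyi(p, alpha):
    p = p[p > 0]
    if np.isclose(alpha, 1.0):                 # Shannon limit
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def complexity_entropy_curve(x, alphas, d=4):
    p = ordinal_probs(x, d)
    n = len(p)
    u = np.full(n, 1.0 / n)                    # uniform reference distribution
    pts = []
    for a in alphas:
        h = renyi(p, a) / np.log(n)            # normalized Rényi entropy
        m = 0.5 * (p + u)
        disq = renyi(m, a) - 0.5 * renyi(p, a) - 0.5 * renyi(u, a)
        pts.append((h, disq * h))              # (entropy, complexity) point
    return np.array(pts)

rng = np.random.default_rng(5)
curve = complexity_entropy_curve(rng.random(5000), np.linspace(0.2, 5.0, 25))
```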
Parametric nanomechanical amplification at very high frequency.
Karabalin, R B; Feng, X L; Roukes, M L
2009-09-01
Parametric resonance and amplification are important in both fundamental physics and technological applications. Here we report very high frequency (VHF) parametric resonators and mechanical-domain amplifiers based on nanoelectromechanical systems (NEMS). Compound mechanical nanostructures patterned by multilayer, top-down nanofabrication are read out by a novel scheme that parametrically modulates longitudinal stress in doubly clamped beam NEMS resonators. Parametric pumping and signal amplification are demonstrated for VHF resonators up to approximately 130 MHz and provide useful enhancement of both resonance signal amplitude and quality factor. We find that Joule heating and reduced thermal conductance in these nanostructures ultimately impose an upper limit to device performance. We develop a theoretical model to account for both the parametric response and nonequilibrium thermal transport in these composite nanostructures. The results closely conform to our experimental observations, elucidate the frequency and threshold-voltage scaling in parametric VHF NEMS resonators and sensors, and establish the ultimate sensitivity limits of this approach.
Parametric amplification in MoS2 drum resonator.
Prasad, Parmeshwar; Arora, Nishta; Naik, A K
2017-11-30
Parametric amplification is widely used in diverse areas from optics to electronic circuits to enhance low level signals by varying relevant system parameters. Parametric amplification has also been performed in several micro-nano resonators including nano-electromechanical system (NEMS) resonators based on a two-dimensional (2D) material. Here, we report the enhancement of mechanical response in a MoS 2 drum resonator using degenerate parametric amplification. We use parametric pumping to modulate the spring constant of the MoS 2 resonator and achieve a 10 dB amplitude gain. We also demonstrate quality factor enhancement in the resonator with parametric amplification. We investigate the effect of cubic nonlinearity on parametric amplification and show that it limits the gain of the mechanical resonator. Amplifying ultra-small displacements at room temperature and understanding the limitations of the amplification in these devices is key for using these devices for practical applications.
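Degenerate parametric amplification can be illustrated by numerically integrating a damped oscillator whose spring constant is pumped at twice the drive frequency: the steady-state amplitude then depends on the pump-drive phase, amplifying one quadrature and deamplifying the other. The parameter values below are arbitrary illustrative choices (the pump depth is kept below the self-oscillation threshold 2/Q), not the device parameters of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

w0, Q, h, F = 1.0, 200.0, 0.005, 1e-4   # resonance, quality factor, pump depth, drive

def eom(t, y, phi):
    x, v = y
    # spring constant modulated at 2*w0: degenerate parametric pumping
    return [v, -(w0 / Q) * v - w0**2 * (1 + h * np.sin(2 * w0 * t)) * x
            + F * np.cos(w0 * t + phi)]

for phi in (0.0, np.pi / 2):             # two drive quadratures
    sol = solve_ivp(eom, (0, 3000), [0.0, 0.0], args=(phi,), max_step=0.1)
    amp = np.abs(sol.y[0][sol.t > 2000]).max()   # steady-state amplitude
    print(f"phase {phi:.2f}: amplitude {amp:.2e}")
```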
Problems of the design of low-noise input devices. [parametric amplifiers
NASA Technical Reports Server (NTRS)
Manokhin, V. M.; Nemlikher, Y. A.; Strukov, I. A.; Sharfov, Y. A.
1974-01-01
An analysis is given of the requirements placed on the elements of parametric centimeter waveband amplifiers for achievement of minimal noise temperatures. A low-noise semiconductor parametric amplifier using germanium parametric diodes for a receiver operating in the 4 GHz band was developed and tested confirming the possibility of satisfying all requirements.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
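The decoupling the authors describe can be caricatured in a few lines: alternate a gradient step on the tomographic data fit with an inexpensive linear least-squares fit of the temporal basis coefficients. The toy system matrix, Patlak-like basis, and step size are assumptions; this loosely mirrors, rather than reproduces, the proposed nested optimization-transfer algorithms:

```python
import numpy as np

rng = np.random.default_rng(4)
npix, nt = 64, 10
A = rng.random((120, npix))                  # toy projection (system) matrix
B = np.vstack([np.linspace(0.0, 1.0, nt),    # Patlak-like temporal basis (2, nt)
               np.ones(nt)])

theta_true = rng.random((npix, 2))           # linear kinetic parameters per pixel
y = A @ (theta_true @ B)                     # noiseless dynamic data (120, nt)

theta = np.full((npix, 2), 0.5)
for outer in range(200):
    x = theta @ B                            # current dynamic image series
    x = x - 2e-4 * (A.T @ (A @ x - y))       # image-update step (gradient descent)
    theta = x @ np.linalg.pinv(B)            # nested linear kinetic fit (cheap)
print(np.abs(theta - theta_true).max())      # error shrinks over the iterations
```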
Kang, Jiqiang; Wei, Xiaoming; Li, Bowen; Wang, Xie; Yu, Luoqin; Tan, Sisi; Jinata, Chandra; Wong, Kenneth K. Y.
2016-01-01
We propose a sensitivity enhancement method for the interference-based signal detection approach and apply it to a swept-source optical coherence tomography (SS-OCT) system through an all-fiber optical parametric amplifier (FOPA) and a parametric balanced detector (BD). The parametric BD was realized by combining the signal band with the phase-conjugate idler band newly generated through the FOPA, specifically by superimposing these two bands at a photodetector. The sensitivity enhancement by the FOPA and parametric BD in SS-OCT was demonstrated experimentally. The results show that SS-OCT with FOPA and SS-OCT with parametric BD can provide more than 9 dB and 12 dB sensitivity improvement, respectively, when compared with conventional SS-OCT over a spectral bandwidth spanning 76 nm. To further verify and elaborate on the sensitivity enhancement, a bio-sample imaging experiment was conducted on loach eyes with the conventional SS-OCT setup, SS-OCT with FOPA, and SS-OCT with parametric BD at different illumination power levels. All these results proved that using the FOPA and parametric BD can improve the sensitivity significantly in SS-OCT systems. PMID:27446655
Direct statistical modeling and its implications for predictive mapping in mining exploration
NASA Astrophysics Data System (ADS)
Sterligov, Boris; Gumiaux, Charles; Barbanson, Luc; Chen, Yan; Cassard, Daniel; Cherkasov, Sergey; Zolotaya, Ludmila
2010-05-01
Recent advances in geosciences make more and more multidisciplinary data available for mining exploration. This has allowed the development of methodologies for computing forecast ore maps from the statistical combination of such different input parameters, all based on inverse problem theory. Numerous statistical methods (e.g., the algebraic method, weights of evidence, the Siris method, etc.), with varying degrees of complexity in their development and implementation, have been proposed and/or adapted for ore geology purposes. In the literature, such approaches are often presented through applications on natural examples, and the results obtained can present specificities due to local characteristics. Moreover, though crucial for statistical computations, the "minimum requirements" on input parameters (minimum number of data points, spatial distribution of objects, etc.) are often only poorly expressed. From this, problems arise when one has to choose between one and the other method for a specific question. In this study, a direct statistical modeling approach is developed in order to i) evaluate the constraints on the input parameters and ii) test the validity of different existing inversion methods. The approach particularly focuses on the analysis of spatial relationships between the locations of points and various objects (e.g., polygons and/or polylines), which is particularly well adapted to constraining the influence of intrusive bodies - such as a granite - and faults or ductile shear zones on the spatial location of ore deposits (point objects). The method is designed to ensure nondimensionality with respect to scale. In this approach, both the spatial distribution and the topology of objects (polygons and polylines) can be parametrized by the user (e.g., density of objects, length, surface, orientation, clustering). Then, the distance of points with respect to a given type of objects (polygons or polylines) is given using a probability distribution. The location of points is computed assuming either independence or different grades of dependency between the two probability distributions. The results show that i) the mean polygon surface, the mean polyline length, the number of objects and their clustering are critical, and ii) the validity of the different tested inversion methods strongly depends on the relative importance of, and the dependency between, the parameters used. In addition, this combined approach of direct and inverse modeling offers an opportunity to test the robustness of the inferred point distribution laws with respect to the quality of the input data set.
NASA Technical Reports Server (NTRS)
Egbert, James Allen
2016-01-01
In support of ground system development for the Space Launch System (SLS), engineers are tasked with building immense engineering models of extreme complexity. The various systems require rigorous analysis of pneumatic, hydraulic, cryogenic, and hypergolic systems. There are certain standards that each of these systems must meet, in the form of pressure vessel system (PVS) certification reports. These reports can be hundreds of pages long and require many hours to compile. Traditionally, each component is analyzed individually, often utilizing hand calculations in the design process. The objective of this opportunity is to perform these analyses in an integrated fashion within the parametric CAD/CAE environment. This allows systems to be analyzed on an assembly level in a semi-automated fashion, which greatly improves accuracy and efficiency. To accomplish this, component-specific parameters based on spec control drawings were stored in the Windchill database and attached to individual Creo Parametric models. These parameters were then accessed using the Prime Analysis within Creo Parametric. MathCAD Prime spreadsheets were created that automatically extracted these parameters, performed calculations, and generated reports. The reports described component compatibility based on local conditions such as pressure, temperature, density, and flow rates. The reports also determined component pairing compatibility, such as properly sizing relief valves with regulators. The reports stored the input conditions that were used to determine compatibility, to increase the traceability of component selection. The desired workflow for using this tool would begin with a Creo Schematics diagram of a PVS. This schematic would store the local conditions and locations of components. The schematic would then populate an assembly within Creo Parametric, using Windchill database parts. These parts would have their attributes already assigned, and the MathCAD spreadsheets could begin running through database parts to determine which components would be suited for specific locations within the assembly. This eliminates a significant amount of time from the design process and makes initial analysis assessments more accurate. Each component checked for a location within the assembly would generate a report showing whether the component was compatible. These reports could be used to generate the PVS report without the need to perform the same analysis multiple times. This process also has the potential to be expanded upon to further automate PVS reports. The integration of software codes or macros could be used to automatically check through hundreds of parts for each location on the schematic. If the software could recognize which type of component is necessary for each location, it is possible that simply starting the macro could completely choose all the components needed for the schematic, and in turn the system. This would save many hours of work initially selecting components, which could end up saving money. Overall, this process helps to automate initial component selections for PVS systems to fit local design specifications. These selections automatically generate reports showing how the design criteria are met by the specific component that was chosen. These reports will contribute to easier compilation of the PVS certification reports, which currently take a great amount of time and effort to produce.
Moss, Brian G; Yeaton, William H
2013-10-01
Annually, American colleges and universities provide developmental education (DE) to millions of underprepared students; however, evaluation estimates of DE benefits have been mixed. Using a prototypic exemplar of DE, our primary objective was to investigate the utility of a replicative evaluative framework for assessing program effectiveness. Within the context of the regression discontinuity (RD) design, this research examined the effectiveness of a DE program for five sequential cohorts of first-time college students. Discontinuity estimates were generated for individual terms and cumulatively, across terms. Participants were 3,589 first-time community college students. DE program effects were measured by contrasting both college-level English grades and a dichotomous measure of pass/fail for DE and non-DE students. Parametric and nonparametric estimates of overall effect were positive for both continuous and dichotomous measures of achievement (grade and pass/fail). The variability of program effects over time was determined by tracking results within individual terms and cumulatively, across terms. Applying this replication strategy, DE's overall impact was modest (an effect size of approximately .20) but quite consistent across parametric and nonparametric estimation approaches. A meta-analysis of the five RD results yielded virtually the same estimate as the overall, parametric findings. Subset analysis, though tentative, suggested that males benefited more than females, while academic gains were comparable across ethnicities. The cumulative, within-study comparison, replication approach offers considerable potential for the evaluation of new and existing policies, particularly when effects are relatively small, as is often the case in applied settings.
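A minimal parametric RD fit of the kind described can be sketched as follows; the data here are simulated (with an assumed true effect of 0.20, matching the reported effect size) rather than the authors' records:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 3589
score = rng.uniform(-10.0, 10.0, n)   # centered placement score (running variable)
de = (score < 0).astype(int)          # below the cutoff -> assigned to DE
grade = 2.0 + 0.05 * score + 0.20 * de + rng.normal(0.0, 1.0, n)

df = pd.DataFrame({"grade": grade, "score": score, "de": de})
# Parametric RD: regress the outcome on treatment and the running variable,
# allowing a different slope on each side of the cutoff.
fit = smf.ols("grade ~ de + score + de:score", data=df).fit()
print(fit.params["de"], fit.conf_int().loc["de"].tolist())
```

The coefficient on `de` estimates the discontinuity at the cutoff; fitting the same model term by term and then pooling, as in the paper's replication strategy, shows how stable that estimate is across cohorts.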
Adelian, R.; Jamali, J.; Zare, N.; Ayatollahi, S. M. T.; Pooladfar, G. R.; Roustaei, N.
2015-01-01
Background: Identification of prognostic factors for survival in patients undergoing liver transplantation is challenging. Various methods of survival analysis have produced different, sometimes contradictory, results from the same data. Objective: To compare Cox’s regression model with parametric models for determining the independent factors predicting adult and pediatric survival after liver transplantation. Method: This study was conducted on 183 pediatric patients and 346 adults who underwent liver transplantation in Namazi Hospital, Shiraz, southern Iran. The study population included all patients undergoing liver transplantation from 2000 to 2012. The prognostic factors sex, age, Child class, initial diagnosis of the liver disease, PELD/MELD score, and pre-operative laboratory markers were selected for survival analysis. Result: Among 529 patients, 346 (65.4%) were adults and 183 (34.6%) were pediatric cases. Overall, the lognormal distribution was the best-fitting model for both adult and pediatric patients. Age in adults (HR=1.16, p<0.05), and weight (HR=2.68, p<0.01) and Child class B (HR=2.12, p<0.05) in pediatric patients, were the most important factors for prediction of survival after liver transplantation. Adult patients younger than the mean age, and pediatric patients weighing above the mean and with Child class A (compared to classes B or C), had better survival. Conclusion: Parametric regression models are a good alternative to Cox’s regression model. PMID:26306158
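For readers who want to try a lognormal parametric fit of this type, the lifelines library provides an accelerated failure time fitter; the sketch below uses a public illustrative dataset as a stand-in for the transplant data:

```python
from lifelines import LogNormalAFTFitter
from lifelines.datasets import load_rossi

# Public illustrative dataset (recidivism), standing in for the transplant data.
df = load_rossi()                  # 'week' = duration, 'arrest' = event indicator
aft = LogNormalAFTFitter()
aft.fit(df, duration_col="week", event_col="arrest")
aft.print_summary()                # coefficient table per covariate
```

Comparing the information criteria of several candidate distributions (lognormal, Weibull, log-logistic) against the Cox model is the usual way to justify the choice of a parametric family, as the authors do here.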
Variable selection in a flexible parametric mixture cure model with interval-censored data.
Scolas, Sylvie; El Ghouch, Anouar; Legrand, Catherine; Oulhaj, Abderrahim
2016-03-30
In standard survival analysis, it is generally assumed that every individual will someday experience the event of interest. However, this is not always the case, as some individuals may not be susceptible to the event. Also, in medical studies, patients frequently come to scheduled interviews, so the time to the event is only known to fall between two visits; that is, the data are interval-censored with a cure fraction. Variable selection in such a setting is of particular interest, since the covariates affecting survival are not necessarily the same as those affecting the probability of experiencing the event. The objective of this paper is to develop a parametric but flexible statistical model for data that are interval-censored and include a fraction of cured individuals when the number of potential covariates may be large. We use the parametric mixture cure model with an accelerated failure time regression model for the survival, along with the extended generalized gamma distribution for the error term. To overcome the issue of non-stable and non-continuous variable selection procedures, we extend the adaptive LASSO to our model. By means of simulation studies, we show good performance of our method and discuss the behavior of the estimates under varying cure and censoring proportions. Lastly, the proposed method is illustrated with a real dataset studying the time until conversion to mild cognitive impairment, a possible precursor of Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
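In its standard form (standard notation, not necessarily the authors' exact parametrization), the mixture cure model splits into an incidence part and a latency part, with the adaptive LASSO applied to the penalized likelihood:

```latex
S_{\mathrm{pop}}(t \mid \mathbf{x}, \mathbf{z})
  = \pi(\mathbf{z}) + \bigl(1 - \pi(\mathbf{z})\bigr)\, S_u(t \mid \mathbf{x}),
\qquad
\log T = \mathbf{x}^{\top}\boldsymbol{\beta} + \sigma\,\varepsilon,
```
```latex
\hat{\boldsymbol{\theta}}
  = \arg\max_{\boldsymbol{\theta}}
    \Bigl\{ \ell(\boldsymbol{\theta}) - \lambda \sum_{j} w_j \lvert \theta_j \rvert \Bigr\},
\qquad
w_j = 1 / \lvert \tilde{\theta}_j \rvert,
```

where π(z) is the cure probability (typically a logistic link in z), S_u is the survival function of the uncured with ε here following the extended generalized gamma distribution, and the data-driven weights w_j come from an unpenalized fit, so that weakly supported coefficients are shrunk to exactly zero.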
NASA Astrophysics Data System (ADS)
Wang, Dengfeng; Cai, Kefang
2018-04-01
This article presents a hybrid method combining a modified non-dominated sorting genetic algorithm (MNSGA-II) with grey relational analysis (GRA) to improve the static and dynamic performance of a body-in-white (BIW). First, an implicit parametric model of the BIW was built using SFE-CONCEPT software, and its validity was verified by physical testing. Eight shape design variables were defined for the BIW beam structures based on the implicit parametric technology. Subsequently, MNSGA-II was used to determine the combination of design parameters that improves the bending stiffness, torsion stiffness and low-order natural frequencies of the BIW without a considerable increase in mass, yielding a set of non-dominated solutions for the multi-objective optimization design. Finally, grey entropy theory and GRA were applied to rank all non-dominated solutions from best to worst and determine the best trade-off solution. A comparison between GRA and the technique for order of preference by similarity to ideal solution (TOPSIS) illustrated the reliability and rationality of GRA. Moreover, the effectiveness of the hybrid method was verified by the optimal results: the bending stiffness, torsion stiffness, first-order bending and first-order torsion natural frequencies were improved by 5.46%, 9.30%, 7.32% and 5.73%, respectively, while the mass of the BIW increased by only 1.30%.
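Grey relational ranking of a Pareto set is straightforward to sketch. In the Python example below the design matrix is made up for illustration (four performance objectives to maximize, mass to minimize), and ζ = 0.5 is the commonly used distinguishing coefficient:

```python
import numpy as np

# Hypothetical non-dominated designs (rows) x objectives (columns):
# [bending stiffness, torsion stiffness, 1st bending freq, 1st torsion freq, mass]
F = np.array([[11.9e3, 13.1e3, 46.2, 39.8, 352.1],
              [12.3e3, 12.8e3, 45.7, 40.3, 355.4],
              [12.1e3, 13.4e3, 46.8, 39.5, 353.8]])
larger_better = np.array([True, True, True, True, False])

# Normalize each objective to [0, 1] so that 1 is always best.
span = np.ptp(F, axis=0)
X = np.where(larger_better, (F - F.min(0)) / span, (F.max(0) - F) / span)

delta = 1.0 - X            # deviation from the ideal sequence (all ones)
zeta = 0.5                 # distinguishing coefficient (common default)
coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = coef.mean(axis=1)  # grey relational grade of each design
print("best trade-off design:", int(grade.argmax()))
```

In the paper, the per-objective weights come from grey entropy theory rather than the plain mean used here, but the ranking principle is the same.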
Critical analysis of consecutive unilateral cleft lip repairs: determining ideal sample size.
Power, Stephanie M; Matic, Damir B
2013-03-01
Objective: Cleft surgeons often show 10 consecutive lip repairs to reduce presentation bias; however, the validity of this practice remains unknown. The purpose of this study is to determine the number of consecutive cases that represents average outcomes. Secondary objectives are to determine whether outcomes correlate with cleft severity and to calculate interrater reliability. Design: Consecutive preoperative and 2-year postoperative photographs of the unilateral cleft lip-nose complex were randomized and evaluated by cleft surgeons. Parametric analysis was performed according to chronologic, consecutive order. The mean standard deviation over all raters enabled calculation of expected 95% confidence intervals around a mean for various sample sizes. Setting: Meeting of the American Cleft Palate-Craniofacial Association in 2009. Patients, Participants: Ten senior cleft surgeons evaluated 39 consecutive lip repairs. Main Outcome Measures: Preoperative severity and postoperative outcomes were evaluated using descriptive and quantitative scales. Results: Intraclass correlation coefficients for cleft severity and postoperative evaluations were 0.65 and 0.21, respectively. Outcomes did not correlate with cleft severity (P = .28). Calculations for 10 consecutive cases demonstrated wide 95% confidence intervals, spanning two points on both postoperative grading scales. The 95% confidence intervals narrowed to within one qualitative grade (±0.30) and one point (±0.50) on the 10-point scale for 27 consecutive cases. Conclusions: Larger numbers of consecutive cases (n > 27) are increasingly representative of average results, but less practical in a presentation format. Ten consecutive cases lack statistical support. Cleft surgeons showed low interrater reliability for postoperative assessments, which may reflect personal bias when evaluating another surgeon's results.
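The sample-size reasoning is a simple normal-approximation calculation. With a rater standard deviation of about 1.3 on the 10-point scale (back-calculated from the reported ±0.50 at n = 27, so an assumption rather than a published figure), the half-widths behave as sketched here:

```python
import numpy as np

sd = 1.3  # assumed rater SD on the 10-point scale (illustrative, back-calculated)
for n in (10, 27, 39):
    half_width = 1.96 * sd / np.sqrt(n)  # 95% CI half-width around the mean
    print(f"n = {n:2d}: mean +/- {half_width:.2f}")
```

This reproduces the pattern in the abstract: roughly ±0.8 (a two-point span) at 10 cases, tightening to about ±0.5 at 27 cases.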
Frömer, Romy; Maier, Martin; Abdel Rahman, Rasha
2018-01-01
Here we present an application of an EEG processing pipeline customizing EEGLAB and FieldTrip functions, specifically optimized to flexibly analyze EEG data based on single-trial information. The key component of our approach is a comprehensive 3-D EEG data structure including all trials and all participants, maintaining the original order of recording. This allows straightforward access to subsets of the data based on any information available in a behavioral data structure matched with the EEG data (experimental conditions, but also performance indicators such as the accuracy or RTs of single trials). In the present study we exploit this structure to compute linear mixed models (LMMs, using lmer in R) including random intercepts and slopes for items. This information can easily be read out from the matched behavioral data, whereas it might not be accessible in traditional ERP approaches without substantial effort. We further provide easily adaptable scripts for performing cluster-based permutation tests (as implemented in FieldTrip) as a more robust alternative to traditional omnibus ANOVAs. Our approach is particularly advantageous for data with parametric within-subject covariates (e.g., performance) and/or multiple complex stimuli (such as words, faces or objects) that vary in features affecting cognitive processes and ERPs (such as word frequency, salience or familiarity), which are sometimes hard to control experimentally or may themselves constitute variables of interest. The present dataset was recorded from 40 participants who performed a visual search task on previously unfamiliar objects, presented either visually intact or blurred. MATLAB and R scripts are provided that can be adapted to different datasets.
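The 3-D structure itself is easy to emulate outside MATLAB. The following Python sketch (with hypothetical dimensions and behavioral columns) shows the single-trial subsetting the pipeline is built around:

```python
import numpy as np
import pandas as pd

# Hypothetical dimensions: all participants' trials concatenated in the
# original order of recording (the pipeline's key data structure).
n_trials, n_chan, n_time = 4000, 32, 250
eeg = np.random.randn(n_trials, n_chan, n_time).astype(np.float32)

# Matched behavioral table: one row per EEG trial, in the same order.
beh = pd.DataFrame({
    "subject": np.repeat(np.arange(40), n_trials // 40),
    "rt": np.random.gamma(2.0, 250.0, n_trials),
    "accuracy": np.random.binomial(1, 0.9, n_trials),
    "blurred": np.tile([0, 1], n_trials // 2),
})

# Single-trial subsetting by any behavioral criterion:
mask = ((beh["accuracy"] == 1) & (beh["rt"] < 1000)).to_numpy()
erp = eeg[mask].mean(axis=0)   # channels x time average of the selected trials
print(erp.shape)               # -> (32, 250)
```

Because trial order is preserved, any column of the behavioral table (condition, RT, item identity) can index the EEG array directly, which is exactly what makes single-trial LMMs convenient in this setup.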
Paraboloid-aspheric lenses free of spherical aberration
NASA Astrophysics Data System (ADS)
Lozano-Rincón, Ninfa del C.; Valencia-Estrada, Juan Camilo
2017-07-01
A method to design singlet paraboloid-aspheric lenses free of all orders of spherical aberration with maximum aperture is described. This work includes all parametric formulas describing paraboloid-aspheric or aspheric-paraboloid lenses for any finite conjugate planes. It also includes the Schwarzschild approximations (which can be used to calculate a rigorous propagation of light waves in physical optics) to design convex paraboloid-aspheric lenses for imaging an object at infinity, with explicit formulas to calculate thicknesses easily. The results were verified by ray tracing in software.
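For reference (these are standard optics relations, not the authors' specific formulas), the paraboloid surface is the conic sag profile with conic constant k = -1:

```latex
z(r) = \frac{c\,r^{2}}{1 + \sqrt{1 - (1 + k)\,c^{2} r^{2}}}
\;\xrightarrow{\;k = -1\;}\;
z(r) = \frac{c\,r^{2}}{2} = \frac{r^{2}}{2R},
```

where c = 1/R is the vertex curvature. In designs of this type, the companion aspheric surface is then determined point by point so that every ray from the chosen conjugate meets its paraxial image point, which is what eliminates spherical aberration to all orders rather than order by order.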