Sample records for expected moments algorithm

  1. Listing triangles in expected linear time on a class of power law graphs.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordman, Daniel J.; Wilson, Alyson G.; Phillips, Cynthia Ann

    Enumerating triangles (3-cycles) in graphs is a kernel operation for social network analysis. For example, many community detection methods depend upon finding common neighbors of two related entities. We consider Cohen's simple and elegant solution for listing triangles: give each node a 'bucket.' Place each edge into the bucket of its endpoint of lowest degree, breaking ties consistently. Each node then checks each pair of edges in its bucket, testing for the adjacency that would complete that triangle. Cohen presents an informal argument that his algorithm should run well on real graphs. We formalize this argument by providing an analysis for the expected running time on a class of random graphs, including power law graphs. We consider a rigorously defined method for generating a random simple graph, the erased configuration model (ECM). In the ECM each node draws a degree independently from a marginal degree distribution, endpoints pair randomly, and we erase self loops and multiedges. If the marginal degree distribution has a finite second moment, it follows immediately that Cohen's algorithm runs in expected linear time. Furthermore, it can still run in expected linear time even when the degree distribution has such a heavy tail that the second moment is not finite. We prove that Cohen's algorithm runs in expected linear time when the marginal degree distribution has finite 4/3 moment and no vertex has degree larger than √n. In fact we give the precise asymptotic value of the expected number of edge pairs per bucket. A finite 4/3 moment is required; if it is unbounded, then so is the number of pairs. The marginal degree distribution of a power law graph has bounded 4/3 moment when its exponent α is more than 7/3. Thus for this class of power law graphs, with degree at most √n, Cohen's algorithm runs in expected linear time. This is precisely the value of α for which the clustering coefficient tends to zero asymptotically, and it is in the range that is relevant for the degree distribution of the World-Wide Web.
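Cohen's bucketing scheme described above is compact enough to sketch directly. The following is an illustrative Python implementation, not the paper's code; tie-breaking by vertex label is one of several consistent choices:

```python
from collections import defaultdict

def list_triangles(edges):
    """List all triangles in a simple undirected graph using Cohen's
    bucket algorithm. `edges` is an iterable of (u, v) pairs; returns
    a set of sorted vertex triples."""
    degree = defaultdict(int)
    adjacency = defaultdict(set)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        adjacency[u].add(v)
        adjacency[v].add(u)
    # Place each edge in the bucket of its lower-degree endpoint,
    # breaking ties consistently by vertex label.
    bucket = defaultdict(list)
    for u, v in edges:
        owner = u if (degree[u], u) < (degree[v], v) else v
        bucket[owner].append((u, v))
    # Each node checks every pair of edges in its bucket for the
    # adjacency that would close a triangle.
    triangles = set()
    for node, bucket_edges in bucket.items():
        for i in range(len(bucket_edges)):
            for j in range(i + 1, len(bucket_edges)):
                a = next(x for x in bucket_edges[i] if x != node)
                b = next(x for x in bucket_edges[j] if x != node)
                if b in adjacency[a]:
                    triangles.add(tuple(sorted((node, a, b))))
    return triangles
```

The pair-checking loop over each bucket is exactly the quantity the paper's finite-4/3-moment analysis bounds.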

  2. An algorithm for computing moments-based flood quantile estimates when historical flood information is available

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Baier, W.G.

    1997-01-01

    This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
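The iterative procedure described above can be sketched for a simpler case. The toy implementation below applies the EMA idea to a normal distribution rather than the log-Pearson Type III the paper fits, so the expected moments of the below-threshold floods stay in closed form (truncated-normal expectations); the function and argument names are our own:

```python
import math

def _phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ema_normal(systematic, n_below, threshold, historical_peaks=(),
               tol=1e-10, max_iter=500):
    """Toy Expected Moments Algorithm for a normal distribution.
    `systematic`: gaged peaks; `historical_peaks`: measured historical
    floods; `n_below`: historical-period years known only to be below
    `threshold`."""
    observed = list(systematic) + list(historical_peaks)
    n = len(observed) + n_below
    # Step 1: initial estimates from systematic data only.
    mu = sum(systematic) / len(systematic)
    sigma = (sum((x - mu) ** 2 for x in systematic) / len(systematic)) ** 0.5
    for _ in range(max_iter):
        # Step 2: expected moments of the censored (below-threshold)
        # floods given the current parameters.
        beta = (threshold - mu) / sigma
        lam = _phi(beta) / _Phi(beta)          # inverse Mills ratio
        e1 = mu - sigma * lam                  # E[X | X < T]
        var_trunc = sigma ** 2 * (1.0 - beta * lam - lam ** 2)
        e2 = var_trunc + e1 ** 2               # E[X^2 | X < T]
        # Step 3: update the moments with observed plus expected
        # contributions, then re-estimate the parameters.
        m1 = (sum(observed) + n_below * e1) / n
        m2 = (sum(x * x for x in observed) + n_below * e2) / n
        mu_new = m1
        sigma_new = max(m2 - m1 * m1, 1e-12) ** 0.5
        converged = abs(mu_new - mu) < tol and abs(sigma_new - sigma) < tol
        mu, sigma = mu_new, sigma_new
        if converged:
            break
    return mu, sigma
```

With no censored years the loop converges immediately to the ordinary method-of-moments estimates, as expected.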

  3. An algorithm for computing moments-based flood quantile estimates when historical flood information is available

    NASA Astrophysics Data System (ADS)

    Cohn, T. A.; Lane, W. L.; Baier, W. G.

    This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.

  4. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, Timothy A.; Lane, William L.; Stedinger, Jery R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient “weighting” procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed‐form method has been available for quantifying the uncertainty of EMA‐based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood‐quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25‐ to 100‐year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

  5. Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.

    PubMed

    Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F

    2011-03-01

    This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, it can only be applied for the computation of geometric moments of homogeneous objects. This advantage and restriction is shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N with respect to previously proposed exact algorithms, from N⁹ to N⁶. The approximate series algorithm appears as a power series on the ratio between triangle size and object size, which can be truncated at any desired degree. The higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N³. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity of N⁴, while the previously proposed algorithm is of order N⁶. The error introduced by the proposed approximate algorithms is evaluated on different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
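The surface-based idea is easiest to see at low order. The sketch below computes only the order-0 and order-1 geometric moments (volume and first moments) of a closed, consistently outward-oriented triangle mesh by summing signed tetrahedra to the origin; the paper's exact algorithm generalizes this to arbitrary order N. This is an illustration, not the authors' implementation:

```python
def mesh_moments_01(vertices, faces):
    """Volume and first-order geometric moments of a closed triangle
    mesh (homogeneous solid), via signed tetrahedra to the origin."""
    volume = 0.0
    mx = my = mz = 0.0
    for i, j, k in faces:
        p, q, r = vertices[i], vertices[j], vertices[k]
        # Signed volume of the tetrahedron (origin, p, q, r).
        det = (p[0] * (q[1] * r[2] - q[2] * r[1])
               - p[1] * (q[0] * r[2] - q[2] * r[0])
               + p[2] * (q[0] * r[1] - q[1] * r[0]))
        v_tet = det / 6.0
        volume += v_tet
        # The tetrahedron centroid is (0 + p + q + r) / 4, so its
        # first moment is v_tet times that centroid.
        mx += v_tet * (p[0] + q[0] + r[0]) / 4.0
        my += v_tet * (p[1] + q[1] + r[1]) / 4.0
        mz += v_tet * (p[2] + q[2] + r[2]) / 4.0
    return volume, (mx, my, mz)
```

For the unit tetrahedron this recovers the analytic volume 1/6 and centroid (1/4, 1/4, 1/4).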

  6. Parameter estimation for the 4-parameter Asymmetric Exponential Power distribution by the method of L-moments using R

    USGS Publications Warehouse

    Asquith, William H.

    2014-01-01

    The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.
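The study is carried out in R, but the sample L-moments at the heart of any MLM fit are easy to state in code. The sketch below (in Python, for consistency with the other examples here) computes the first four sample L-moments via the standard probability-weighted-moment estimators:

```python
from math import comb

def sample_l_moments(data):
    """First four sample L-moments via probability-weighted moments
    b_r; l3/l2 and l4/l2 give the sample L-skew and L-kurtosis."""
    x = sorted(data)
    n = len(x)
    # b_r = (1/n) * sum_i C(i, r) / C(n-1, r) * x[i] over sorted data,
    # 0-based i; comb(i, r) is 0 for i < r, which drops those terms.
    b = [sum(comb(i, r) * x[i] for i in range(n)) / (n * comb(n - 1, r))
         for r in range(4)]
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3, l4
```

An MLM fit of a distribution such as the AEP4 then solves for parameters whose theoretical L-moments match these sample values.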

  7. Recursive approach to the moment-based phase unwrapping method.

    PubMed

    Langley, Jason A; Brice, Robert G; Zhao, Qun

    2010-06-01

    The moment-based phase unwrapping algorithm approximates the phase map as a product of Gegenbauer polynomials, but the weight function for the Gegenbauer polynomials generates artificial singularities along the edge of the phase map. A method is presented to remove the singularities inherent to the moment-based phase unwrapping algorithm by approximating the phase map as a product of two one-dimensional Legendre polynomials and applying a recursive property of derivatives of Legendre polynomials. The proposed phase unwrapping algorithm is tested on simulated and experimental data sets. The results are then compared to those of PRELUDE 2D, a widely used phase unwrapping algorithm, and a Chebyshev-polynomial-based phase unwrapping algorithm. It was found that the proposed phase unwrapping algorithm provides results that are comparable to those obtained by using PRELUDE 2D and the Chebyshev phase unwrapping algorithm.

  8. A moment projection method for population balance dynamics with a shrinkage term

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Shaohua; Yapp, Edward K.Y.; Akroyd, Jethro

    A new method of moments for solving the population balance equation is developed and presented. The moment projection method (MPM) is numerically simple and easy to implement and attempts to address the challenge of particle shrinkage due to processes such as oxidation, evaporation or dissolution. It directly solves the moment transport equation for the moments and tracks the number of the smallest particles using the algorithm by Blumstein and Wheeler (1973). The performance of the new method is measured against the method of moments (MOM) and the hybrid method of moments (HMOM). The results suggest that MPM performs much better than MOM and HMOM where shrinkage is dominant. The new method predicts mean quantities which are almost as accurate as a high-precision stochastic method calculated using the established direct simulation algorithm (DSA).

  9. Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images

    NASA Astrophysics Data System (ADS)

    Yao, Shoukui; Qin, Xiaojuan

    2018-02-01

    Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable. The issue of how to recognize ships with fuzzy features is an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated, and the GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is assigned to the trained GMMs for recognition. Because of the scale, rotation, and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMMs-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large simulated image set show that our approach is effective in distinguishing different ship types and obtains satisfactory ship recognition performance.
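The Hu-moment feature extraction step can be sketched in a few lines. The toy function below computes only the first two Hu invariants from raw image moments; a real pipeline would compute all seven and then train the GMMs on the resulting feature vectors:

```python
def hu_first_two(image):
    """First two Hu moment invariants of a 2D grayscale image given
    as a list of rows (lists of pixel intensities)."""
    # Raw moments m_pq = sum over pixels of x^p * y^q * intensity.
    m = {}
    for p in range(3):
        for q in range(3):
            m[p, q] = sum(x ** p * y ** q * val
                          for y, row in enumerate(image)
                          for x, val in enumerate(row))
    cx, cy = m[1, 0] / m[0, 0], m[0, 1] / m[0, 0]
    # Central moments (translation invariant).
    mu = {}
    for p, q in [(2, 0), (0, 2), (1, 1)]:
        mu[p, q] = sum((x - cx) ** p * (y - cy) ** q * val
                       for y, row in enumerate(image)
                       for x, val in enumerate(row))
    # Normalized central moments (adds scale invariance).
    eta = {k: v / m[0, 0] ** ((k[0] + k[1]) / 2 + 1) for k, v in mu.items()}
    h1 = eta[2, 0] + eta[0, 2]
    h2 = (eta[2, 0] - eta[0, 2]) ** 2 + 4 * eta[1, 1] ** 2
    return h1, h2
```

Translating the image leaves h1 and h2 unchanged, which is the invariance property the recognition step relies on.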

  10. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Bader, Jon B.

    2009-01-01

    Calibration data of a wind tunnel sting balance was processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations of the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation when regression models of balance calibration data can directly be derived from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.
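The selected model for the gage-output difference is an ordinary least-squares fit on an intercept and the normal force. A minimal sketch with made-up data (the quadratic model for the gage-output sum proceeds the same way with an extra regressor for the squared pitching moment):

```python
def fit_line(x, y):
    """Closed-form ordinary least squares for y ~ a + b*x, e.g. gage
    output difference against normal force."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    a = ybar - b * xbar
    return a, b
```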

  11. Connected and leading disconnected hadronic light-by-light contribution to the muon anomalous magnetic moment with a physical pion mass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blum, Thomas; Christ, Norman; Hayakawa, Masashi

    We report a lattice QCD calculation of the hadronic light-by-light contribution to the muon anomalous magnetic moment at a physical pion mass. The calculation includes the connected diagrams and the leading, quark-line-disconnected diagrams. We incorporate algorithmic improvements developed in our previous work. The calculation was performed on the 48³ × 96 ensemble generated with a physical pion mass and a 5.5 fm spatial extent by the RBC and UKQCD Collaborations using the chiral, domain wall fermion formulation. We find a_μ^HLbL = 5.35(1.35) × 10⁻¹⁰, where the error is statistical only. The finite-volume and finite lattice-spacing errors could be quite large and are the subject of ongoing research. Finally, the omitted disconnected graphs, while expected to give a correction of order 10%, also need to be computed.

  12. Connected and leading disconnected hadronic light-by-light contribution to the muon anomalous magnetic moment with a physical pion mass

    DOE PAGES

    Blum, Thomas; Christ, Norman; Hayakawa, Masashi; ...

    2017-01-11

    We report a lattice QCD calculation of the hadronic light-by-light contribution to the muon anomalous magnetic moment at a physical pion mass. The calculation includes the connected diagrams and the leading, quark-line-disconnected diagrams. We incorporate algorithmic improvements developed in our previous work. The calculation was performed on the 48³ × 96 ensemble generated with a physical pion mass and a 5.5 fm spatial extent by the RBC and UKQCD Collaborations using the chiral, domain wall fermion formulation. We find a_μ^HLbL = 5.35(1.35) × 10⁻¹⁰, where the error is statistical only. The finite-volume and finite lattice-spacing errors could be quite large and are the subject of ongoing research. Finally, the omitted disconnected graphs, while expected to give a correction of order 10%, also need to be computed.

  13. Iris recognition using image moments and k-means algorithm.

    PubMed

    Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed

    2014-01-01

    This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Moments are then extracted from the grayscale image, which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of Euclidean distance. The described model exhibits an accuracy of 98.5%.
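The clustering and assignment steps are standard k-means with nearest-centroid classification. A minimal sketch (toy implementation with toy 2D points; real feature vectors would come from the moment extraction described above):

```python
import random

def nearest(pt, centroids):
    """Index of the centroid closest to pt in Euclidean distance."""
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(pt, centroids[i])))

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd-style k-means over feature vectors."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for pt in points:
            clusters[nearest(pt, centroids)].append(pt)
        # Recompute each centroid as its cluster mean (keep the old
        # centroid if a cluster is empty).
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids
```

An unseen feature vector is then classified by `nearest(vector, centroids)`, exactly as in the assignment step above.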

  14. Iris Recognition Using Image Moments and k-Means Algorithm

    PubMed Central

    Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed

    2014-01-01

    This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Moments are then extracted from the grayscale image, which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of Euclidean distance. The described model exhibits an accuracy of 98.5%. PMID:24977221

  15. Parallelization strategies for continuum-generalized method of moments on the multi-thread systems

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Handhika, T.; Ernastuti; Kerami, D.

    2017-07-01

    Continuum-Generalized Method of Moments (C-GMM) covers the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum set of moment conditions in a GMM framework. However, this computation takes a very long time because of the optimization of the regularization parameter. Unfortunately, these calculations are processed sequentially, whereas all modern computers are now supported by hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the calculation process of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. There are two parallel regions in the original C-GMM algorithm that contribute significantly to the reduction of computational time: the outer loop and the inner loop. This parallel algorithm is implemented with the standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
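The outer-loop strategy can be illustrated with a stand-in objective. The sketch below uses Python's `ThreadPoolExecutor` in place of the paper's OpenMP pragmas; the loop structure carries over, though not the performance characteristics, and `objective` is a hypothetical placeholder for one C-GMM evaluation at a given regularization parameter:

```python
from concurrent.futures import ThreadPoolExecutor

def objective(reg_param):
    """Hypothetical stand-in for one C-GMM criterion evaluation."""
    return sum((reg_param - 0.3) ** 2 for _ in range(1000))

def grid_search(params, parallel=True):
    """Evaluate the objective over the regularization grid; the outer
    loop over grid points is embarrassingly parallel."""
    if parallel:
        with ThreadPoolExecutor() as pool:
            values = list(pool.map(objective, params))
    else:
        values = [objective(p) for p in params]
    # Return the grid point with the smallest objective value.
    return params[min(range(len(params)), key=values.__getitem__)]
```

In OpenMP the same structure is a `#pragma omp parallel for` over the outer loop; a compiled implementation avoids the interpreter-level serialization that limits Python threads.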

  16. Research on allocation efficiency of the daisy chain allocation algorithm

    NASA Astrophysics Data System (ADS)

    Shi, Jingping; Zhang, Weiguo

    2013-03-01

    With the improvement of aircraft performance in reliability, maneuverability and survivability, the number of control effectors increases considerably. How to distribute the three-axis moments among the control surfaces reasonably becomes an important problem. The daisy chain method is simple and easy to carry out in the design of the allocation system, but it cannot solve the allocation problem for the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be directly measured by the area of its subset of attainable moments. Because of the non-linear allocation characteristic, the subset of attainable moments of the daisy-chain method is a complex non-convex polygon, which is difficult to solve directly. By analyzing the two-dimensional allocation problem with a "micro-element" idea, a numerical calculation algorithm is proposed to compute the area of the non-convex polygon. In order to improve the allocation efficiency of the algorithm, a genetic algorithm with the allocation efficiency chosen as the fitness function is proposed to find the best pseudo-inverse matrix.
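When the non-convex polygon is available as an ordered vertex list, its area also has a closed form via the shoelace formula, which makes a useful cross-check for a micro-element discretization. Illustrative sketch (not the paper's algorithm, which works from the allocation mapping rather than a known vertex list):

```python
def polygon_area(vertices):
    """Area of a simple (possibly non-convex) polygon given as an
    ordered list of (x, y) vertices, by the shoelace formula."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```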

  17. The Strong Lensing Time Delay Challenge (2014)

    NASA Astrophysics Data System (ADS)

    Liao, Kai; Dobler, G.; Fassnacht, C. D.; Treu, T.; Marshall, P. J.; Rumbaugh, N.; Linder, E.; Hojjati, A.

    2014-01-01

    Time delays between multiple images in strong lensing systems are a powerful probe of cosmology. At the moment the application of this technique is limited by the number of lensed quasars with measured time delays. However, the number of such systems is expected to increase dramatically in the next few years. Hundreds of such systems are expected within this decade, while the Large Synoptic Survey Telescope (LSST) is expected to deliver of order 1000 time delays in the 2020s. In order to exploit this bounty of lenses we need to make sure the time delay determination algorithms have sufficiently high precision and accuracy. As a first step to test current algorithms and identify potential areas for improvement we have started a "Time Delay Challenge" (TDC). An "evil" team has created realistic simulated light curves, to be analyzed blindly by "good" teams. The challenge is open to all interested parties. The initial challenge consists of two steps (TDC0 and TDC1). TDC0 consists of a small number of datasets to be used as a training template. The non-mandatory deadline is December 1, 2013. The "good" teams that complete TDC0 will be given access to TDC1. TDC1 consists of thousands of lightcurves, a number sufficient to test precision and accuracy at the subpercent level, necessary for time-delay cosmography. The deadline for responding to TDC1 is July 1, 2014. Submissions will be analyzed and compared in terms of predefined metrics to establish the goodness-of-fit, efficiency, precision and accuracy of current algorithms. This poster describes the challenge in detail and gives instructions for participation.

  18. Numerically stable, scalable formulas for parallel and online computation of higher-order multivariate central moments with arbitrary weights

    DOE PAGES

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth; ...

    2016-03-29

    Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate of the above-mentioned formulas, utilizing compound moments, for a practical large-scale scientific application.
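The order-2 case of the pairwise update formulas the survey builds on is short enough to show. The sketch below combines the (count, mean, M2) statistics of two data partitions in one step, which also yields a numerically stable one-pass accumulator; the paper's contribution extends this pattern to arbitrary order, multiple variables, and weights:

```python
def combine(n_a, mean_a, m2_a, n_b, mean_b, m2_b):
    """Pairwise update: merge the count, mean, and second central
    moment sum M2 of two partitions into one set of statistics."""
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    m2 = m2_a + m2_b + delta * delta * n_a * n_b / n
    return n, mean, m2

def stats_one_pass(xs):
    """One-pass accumulation: fold each element in as a singleton
    partition with mean x and M2 = 0."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n, mean, m2 = combine(n, mean, m2, 1, x, 0.0)
    return n, mean, m2
```

Because `combine` is associative up to rounding, partitions can be merged in any tree order, which is what makes the formulas suitable for parallel and distributed reduction.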

  19. Moment Inversion of the DPRK Nuclear Tests Using Finite-Difference Three-dimensional Strain Green's Tensors

    NASA Astrophysics Data System (ADS)

    Bao, X.; Shen, Y.; Wang, N.

    2017-12-01

    Accurate estimation of the source moment is important for discriminating underground explosions from earthquakes and other seismic sources. In this study, we invert for the full moment tensors of the recent seismic events (since 2016) at the Democratic People's Republic of Korea (DPRK) Punggye-ri test site. We use waveform data from broadband seismic stations located in China, Korea, and Japan in the inversion. Using a non-staggered-grid, finite-difference algorithm, we calculate the strain Green's tensors (SGT) based on one-dimensional (1D) and three-dimensional (3D) Earth models. Taking advantage of the source-receiver reciprocity, a SGT database pre-calculated and stored for the Punggye-ri test site is used in inversion for the source mechanism of each event. With the source locations estimated from cross-correlation using regional Pn and Pn-coda waveforms, we obtain the optimal source mechanism that best fits synthetics to the observed waveforms of both body and surface waves. The moment solutions of the first three events (2016-01-06, 2016-09-09, and 2017-09-03) show dominant isotropic components, as expected from explosions, though there are also notable non-isotropic components. The last event (8 minutes after the mb 6.3 explosion in 2017) contained a mainly implosive component, suggesting a collapse following the explosion. The solutions from the 3D model can better fit observed waveforms than the corresponding solutions from the 1D model. The uncertainty in the resulting moment solution is influenced by heterogeneities not resolved by the Earth model, as reflected in the waveform misfit. Using the moment solutions, we predict the peak ground acceleration at the Punggye-ri test site and compare the prediction with corresponding InSAR and other satellite images.

  20. Simulation of an expanding plasma using the Boris algorithm

    NASA Astrophysics Data System (ADS)

    Neal, Luke; Aguirre, Evan; Steinberger, Thomas; Good, Timothy; Scime, Earl

    2017-10-01

    We present a Boris algorithm simulation in a cylindrical geometry of charged particle motion in a helicon plasma confined by a diverging magnetic field. Laboratory measurements of ion velocity distribution functions (ivdfs) provide evidence for acceleration of ions into the divergent field region in the center of the discharge. The increase in ion velocity is inconsistent with expectations for simple magnetic moment conservation given the magnetic field mirror ratio and is therefore attributed to the presence of a double layer in the literature. Using measured electric fields and ivdfs (at different radial locations across the entire plasma column) upstream and downstream of the divergent magnetic field region, we compare predictions for the downstream ivdfs to measurements. We also present predictions for the evolution of the electron velocity distribution function downstream of the divergent magnetic field. This work was supported by U.S. National Science Foundation Grant No. PHY-1360278.
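The velocity update at the core of a Boris-algorithm particle pusher is a half electric kick, an exact magnetic rotation, and another half electric kick. A minimal non-relativistic sketch (illustrative, not the cylindrical-geometry simulation code described above):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def boris_step(v, E, B, qm, dt):
    """One Boris velocity update for charge-to-mass ratio qm:
    half electric kick, magnetic rotation, half electric kick."""
    h = 0.5 * qm * dt
    # First half of the electric impulse.
    v_minus = tuple(vi + h * Ei for vi, Ei in zip(v, E))
    # Rotation vectors t and s built from the magnetic field.
    t = tuple(h * Bi for Bi in B)
    t2 = sum(ti * ti for ti in t)
    s = tuple(2.0 * ti / (1.0 + t2) for ti in t)
    v_prime = tuple(vm + c for vm, c in zip(v_minus, cross(v_minus, t)))
    v_plus = tuple(vm + c for vm, c in zip(v_minus, cross(v_prime, s)))
    # Second half of the electric impulse.
    return tuple(vp + h * Ei for vp, Ei in zip(v_plus, E))
```

Because the magnetic step is a pure rotation, the particle speed is conserved exactly in a static magnetic field with E = 0, which is why the Boris pusher is the standard choice for long particle-tracking runs like the one described above.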

  1. Production facility layout by comparing moment displacement using BLOCPLAN and ALDEP Algorithms

    NASA Astrophysics Data System (ADS)

    Tambunan, M.; Ginting, E.; Sari, R. M.

    2018-02-01

    Production floor layout settings include the organizing of machinery, materials, and all the equipment used in the production process in the available area. PT. XYZ is a company that manufactures rubber and rubber compounds for retreading threaded tires with hot and cold cure systems. Production at PT. XYZ is divided into three interrelated departments, namely the Masterbatch Department, the Compound Department, and the Procured Thread Line Department. PT. XYZ's production process has an irregular material flow and a complicated machine arrangement, and the layout needs to be redesigned. The purpose of this study is to compare moment displacement under the BLOCPLAN and ALDEP algorithms in order to redesign the existing layout. Redesigning the layout of the production floor is done by applying the BLOCPLAN and ALDEP algorithms, which are used to find the best layout design by comparing moment displacement and flow pattern. The moment displacement of the company's current production floor layout amounts to 2,090,578.5 meters per year, and the material flow pattern is irregular. Based on the calculation, the moment displacement for BLOCPLAN is 1,551,344.82 meters per year and for ALDEP is 1,600,179 meters per year. The resulting material flow is in the form of a straight line.
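The figure of merit being compared is simple to state: total moment displacement is the material flow between each pair of departments times the distance traveled. A minimal sketch with hypothetical department names and rectilinear distances (the distance metric and data are illustrative assumptions, not taken from the paper):

```python
def moment_displacement(centroids, flows):
    """Total moment displacement of a layout: sum over department
    pairs of flow frequency times rectilinear (Manhattan) distance
    between department centroids."""
    total = 0.0
    for (a, b), trips in flows.items():
        (xa, ya), (xb, yb) = centroids[a], centroids[b]
        total += trips * (abs(xa - xb) + abs(ya - yb))
    return total
```

Layout algorithms such as BLOCPLAN and ALDEP generate candidate department arrangements; the arrangement minimizing this total is preferred, which is how the figures quoted above were compared.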

  2. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.
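The singles-through-pents quantities mentioned above are reduced factorial moments of the measured multiplicity distribution. A minimal sketch of their computation from a multiplicity histogram (the dead-time correction, which is the hard part the report addresses, is not shown):

```python
from math import comb

def factorial_moments(histogram, max_order=5):
    """Reduced factorial moments of a multiplicity histogram
    {multiplicity n: event count}: order r is sum over n of
    C(n, r) * P(n). Orders 1-5 correspond to singles, doubles,
    triples, quads, and pents."""
    total = sum(histogram.values())
    return [sum(comb(n, r) * count for n, count in histogram.items()) / total
            for r in range(1, max_order + 1)]
```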

  3. Regional skew for California, and flood frequency for selected sites in the Sacramento-San Joaquin River Basin, based on data through water year 2006

    USGS Publications Warehouse

    Parrett, Charles; Veilleux, Andrea; Stedinger, J.R.; Barth, N.A.; Knifong, Donna L.; Ferris, J.C.

    2011-01-01

    Improved flood-frequency information is important throughout California in general and in the Sacramento-San Joaquin River Basin in particular, because of an extensive network of flood-control levees and the risk of catastrophic flooding. A key first step in updating flood-frequency information is determining regional skew. A Bayesian generalized least squares (GLS) regression method was used to derive a regional-skew model based on annual peak-discharge data for 158 long-term (30 or more years of record) stations throughout most of California. The desert areas in southeastern California had too few long-term stations to reliably determine regional skew for that hydrologically distinct region; therefore, the desert areas were excluded from the regional skew analysis for California. Of the 158 long-term stations used to determine regional skew, 145 have minimally regulated annual-peak discharges, and 13 stations are dam sites for which unregulated peak discharges were estimated from unregulated daily maximum discharge data furnished by the U.S. Army Corps of Engineers. Station skew was determined by using an expected moments algorithm (EMA) program for fitting the Pearson Type 3 flood-frequency distribution to the logarithms of annual peak-discharge data. The Bayesian GLS regression method previously developed was modified because of the large cross correlations among concurrent recorded peak discharges in California and the use of censored data and historical flood information with the new expected moments algorithm. In particular, to properly account for these cross-correlation problems and develop a suitable regression model and regression diagnostics, a combination of Bayesian weighted least squares and generalized least squares regression was adopted. This new methodology identified a nonlinear function relating regional skew to mean basin elevation. 
The regional skew values ranged from -0.62 for a mean basin elevation of zero to 0.61 for a mean basin elevation of 11,000 feet. This relation between skew and elevation reflects the interaction of snow with rain, which increases with increased elevation. The equivalent record length for the new regional skew ranges from 52 to 65 years of record, depending upon mean basin elevation. The old regional skew map in Bulletin 17B, published by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data (1982), reported an equivalent record length of only 17 years. The newly developed regional skew relation for California was used to update flood frequency for the 158 sites used in the regional skew analysis as well as 206 selected sites in the Sacramento-San Joaquin River Basin. For these sites, annual-peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years were determined on the basis of data through water year 2006. The expected moments algorithm was used for determining the magnitude and frequency of floods at gaged sites by using regional skew values and using the basic approach outlined in Bulletin 17B.
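
    The station-skew step above can be illustrated with a simplified method-of-moments computation on the logarithms of annual peaks. This is only a sketch: the expected moments algorithm (EMA) additionally incorporates censored data and historical flood information, which this plain-sample version does not.

```python
import math

def station_log_moments(peaks):
    """Sample mean, standard deviation, and skew of log10 annual peak
    discharges -- a simplified stand-in for the EMA fit of a Pearson
    Type 3 distribution to log-peaks (EMA also handles censored and
    historical data, which is omitted here)."""
    logs = [math.log10(q) for q in peaks]
    n = len(logs)
    mean = sum(logs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    # Bias-corrected sample skew coefficient (Bulletin 17B station skew)
    skew = n * sum((x - mean) ** 3 for x in logs) / ((n - 1) * (n - 2) * std ** 3)
    return mean, std, skew
```

    In the procedure described above, this station skew is then weighted with the regional skew in proportion to their respective equivalent record lengths.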

  4. Relationships between expected, online and remembered enjoyment for food products.

    PubMed

    Robinson, Eric

    2014-03-01

    How enjoyable a food product is remembered to be is likely to shape future choice. The present study tested the influence that expectations and specific moments during consumption experiences have on remembered enjoyment for food products. Sixty-four participants consumed three snack foods (savoury, sweet and savoury-sweet) and rated expected and online enjoyment for each product. Twenty-four hours later participants rated remembered enjoyment and future expected enjoyment for each product. Remembered enjoyment differed from online enjoyment for two of the three products, resulting in the foods being remembered as less enjoyable than they actually were. Both expected enjoyment and specific moments during the consumption experience (e.g. the least enjoyable mouthful) influenced remembered enjoyment. However, the factors that shaped remembered enjoyment were not consistent across the different food products. Remembered enjoyment was also shown to be a better predictor of future expected enjoyment than online enjoyment. Remembered enjoyment is likely to influence choice behaviour and can be discrepant from actual enjoyment. Specific moments during a consumption experience can have a disproportionately large influence on remembered enjoyment (whilst others are neglected), but the factors that determine which moments influence remembered enjoyment are unclear. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth

    Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate among the above mentioned formulas, with the utilization of the compound moments for a practical large-scale scientific application.
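
    The order-2 special case of the pairwise update formulas discussed above is the classic rule for combining the count, mean, and second central moment sum (M2) of two data partitions:

```python
def merge_moments(n_a, mean_a, m2_a, n_b, mean_b, m2_b):
    """Combine moment summaries of two data partitions without revisiting
    the data. M2 is the sum of squared deviations from the mean, so the
    unbiased variance of the merged set is m2 / (n - 1). This is the
    order-2 case; the surveyed formulas generalize it to arbitrary order."""
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    m2 = m2_a + m2_b + delta * delta * n_a * n_b / n
    return n, mean, m2
```

    Applying this rule pairwise across processors yields the same result as a single pass over the concatenated data, which is what makes such formulas attractive when communication costs are high.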

  7. Application of temporal moments and other signal processing algorithms to analysis of ultrasonic signals through melting wax

    DOE PAGES

    Lau, Sarah J.; Moore, David G.; Stair, Sarah L.; ...

    2016-01-01

    Ultrasonic analysis is being explored as a way to capture events during melting of highly dispersive wax. Typical events include temperature changes in the material, phase transition of the material, surface flows and reformations, and void filling as the material melts. Melt tests are performed with wax to evaluate the usefulness of different signal processing algorithms in capturing event data. Several algorithm paths are being pursued. The first looks at changes in the velocity of the signal through the material. This is only appropriate when the changes from one ultrasonic signal to the next can be represented by a linear relationship, which is not always the case. The second tracks changes in the frequency content of the signal. The third algorithm tracks changes in the temporal moments of a signal over a full test. This method does not require that the changes in the signal be represented by a linear relationship, but attaching changes in the temporal moments to physical events can be difficult. This study describes the algorithm paths applied to experimental data from ultrasonic signals as wax melts and explores different ways to display the results.

  8. Optimum Actuator Selection with a Genetic Algorithm for Aircraft Control

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    2004-01-01

    The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. For example, the desired actuators produce a pure roll moment without at the same time causing much pitch or yaw. For a typical wing, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements and mission constraints. A genetic algorithm has been developed for finding the best placement for four actuators to produce an uncoupled pitch moment. The genetic algorithm has been extended to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control. A simplified, untapered, unswept wing is the model for each application.

  9. Estimation of full moment tensors, including uncertainties, for earthquakes, volcanic events, and nuclear explosions

    NASA Astrophysics Data System (ADS)

    Alvizuri, Celso; Silwal, Vipul; Krischer, Lion; Tape, Carl

    2017-04-01

    A seismic moment tensor is a 3 × 3 symmetric matrix that provides a compact representation of seismic events within Earth's crust. We develop an algorithm to estimate moment tensors and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment tensors by generating synthetic waveforms at each grid point and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment tensor M for the event is then the moment tensor with minimum misfit. To describe the uncertainty associated with M, we first convert the misfit function to a probability function. The uncertainty, or rather the confidence, is then given by the 'confidence curve' P(V), where P(V) is the probability that the true moment tensor for the event lies within the neighborhood of M that has fractional volume V. The area under the confidence curve provides a single, abbreviated 'confidence parameter' for M. We apply the method to data from events in different regions and tectonic settings: small (Mw < 2.5) events at Uturuncu volcano in Bolivia, moderate (Mw > 4) earthquakes in the southern Alaska subduction zone, and natural and man-made events at the Nevada Test Site. Moment tensor uncertainties allow us to better discriminate among moment tensor source types and to assign physical processes to the events.
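
    A schematic of the grid-search-with-uncertainty idea, with the misfit-to-probability conversion reduced to a simple exponential weighting (the paper's actual misfit function and scaling are not reproduced here):

```python
import math

def grid_search_with_probabilities(candidates, misfit):
    """Evaluate a misfit function over a grid of candidate models; return
    the minimum-misfit model and a normalized probability per candidate.
    The exp(-misfit) likelihood conversion is an illustrative assumption."""
    misfits = [misfit(c) for c in candidates]
    best = candidates[misfits.index(min(misfits))]
    weights = [math.exp(-m) for m in misfits]
    total = sum(weights)
    return best, [w / total for w in weights]
```

    From the per-candidate probabilities one can then accumulate P(V), the probability that the true model lies within the best-fitting fractional volume V of the search space.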

  10. Toward an autonomous brain machine interface: integrating sensorimotor reward modulation and reinforcement learning.

    PubMed

    Marsh, Brandi T; Tarigoppula, Venkata S Aditya; Chen, Chen; Francis, Joseph T

    2015-05-13

    For decades, neurophysiologists have worked on elucidating the function of the cortical sensorimotor control system from the standpoint of kinematics or dynamics. Recently, computational neuroscientists have developed models that can emulate changes seen in the primary motor cortex during learning. However, these simulations rely on the existence of a reward-like signal in the primary sensorimotor cortex. Reward modulation of the primary sensorimotor cortex has yet to be characterized at the level of neural units. Here we demonstrate that single units/multiunits and local field potentials in the primary motor (M1) cortex of nonhuman primates (Macaca radiata) are modulated by reward expectation during reaching movements and that this modulation is present even while subjects passively view cursor motions that are predictive of either reward or nonreward. After establishing this reward modulation, we set out to determine whether we could correctly classify rewarding versus nonrewarding trials, on a moment-to-moment basis. This reward information could then be used in collaboration with reinforcement learning principles toward an autonomous brain-machine interface. The autonomous brain-machine interface would use M1 for both decoding movement intention and extraction of reward expectation information as evaluative feedback, which would then update the decoding algorithm as necessary. In the work presented here, we show that this, in theory, is possible. Copyright © 2015 the authors 0270-6474/15/357374-14$15.00/0.

  11. Sensitivity Analysis of Linear Programming and Quadratic Programming Algorithms for Control Allocation

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Bodson, Marc; Acosta, Diana M.

    2009-01-01

    The Next Generation (NextGen) transport aircraft configurations being investigated as part of the NASA Aeronautics Subsonic Fixed Wing Project have more control surfaces, or control effectors, than existing transport aircraft configurations. Conventional flight control is achieved through two symmetric elevators, two antisymmetric ailerons, and a rudder. The five effectors, reduced to three command variables, produce moments along the three main axes of the aircraft and enable the pilot to control the attitude and flight path of the aircraft. The NextGen aircraft will have additional redundant control effectors to control the three moments, creating a situation where the aircraft is over-actuated and where a simple relationship no longer exists between the required effector deflections and the desired moments. NextGen flight controllers will incorporate control allocation algorithms to determine the optimal effector commands and attain the desired moments, taking into account the effector limits. Approaches to solving the problem using linear programming and quadratic programming algorithms have been proposed and tested. It is of great interest to understand their relative advantages and disadvantages and how design parameters may affect their properties. In this paper, we investigate the sensitivity of the effector commands with respect to the desired moments and show on some examples that the solutions provided using the l2 norm of quadratic programming are less sensitive than those using the l1 norm of linear programming.
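
    For a single over-actuated axis, the l2 (quadratic programming) flavor of the allocation problem has a closed form: the minimum-norm command vector satisfying b · u = m. This one-axis, unconstrained sketch is illustrative only; practical allocators handle all three moment axes and enforce effector limits.

```python
def min_norm_allocation(b, m):
    """Minimum-l2-norm effector commands u solving b . u = m, where b
    holds the effectiveness of each effector about one axis and m is
    the desired moment. Closed form: u = b * m / (b . b)."""
    bb = sum(bi * bi for bi in b)
    return [bi * m / bb for bi in b]

# Two equally effective effectors sharing a desired moment of 2:
u = min_norm_allocation([1.0, 1.0], 2.0)  # [1.0, 1.0]
```

    Spreading the command across effectors ([1, 1] rather than [2, 0]) reflects the lower-sensitivity behavior of the l2 solution noted above.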

  12. A tyre slip-based integrated chassis control of front/rear traction distribution and four-wheel independent brake from moderate driving to limit handling

    NASA Astrophysics Data System (ADS)

    Joa, Eunhyek; Park, Kwanwoo; Koh, Youngil; Yi, Kyongsu; Kim, Kilsoo

    2018-04-01

    This paper presents a tyre slip-based integrated chassis control of front/rear traction distribution and four-wheel braking for enhanced performance from moderate driving to limit handling. The proposed algorithm adopts a hierarchical structure: supervisor - desired motion tracking controller - optimisation-based control allocation. In the supervisor, desired vehicle motion is calculated by considering transient cornering characteristics. In the desired motion tracking controller, virtual control input is determined in the manner of sliding mode control in order to track the desired vehicle motion. In the control allocation, the virtual control input is allocated so as to minimise a cost function. The cost function consists of two major parts. The first part is a slip-based quantification of tyre friction utilisation, which does not need a tyre force estimation. The second part is an allocation guideline, which guides the optimally allocated inputs toward a predefined solution. The proposed algorithm has been investigated via simulation from moderate driving to a limit handling scenario. Compared to the Base and direct yaw moment control systems, the proposed algorithm can effectively reduce tyre dissipation energy in the moderate driving situation. Moreover, the proposed algorithm enhances limit handling performance compared to the Base and direct yaw moment control systems. In addition to the comparison with the Base and direct yaw moment control systems, the proposed algorithm has been compared with a control algorithm based on known tyre force information. The results show that the performance of the proposed algorithm is similar to that of the control algorithm with known tyre force information.

  13. Performance of the split-symbol moments SNR estimator in the presence of inter-symbol interference

    NASA Technical Reports Server (NTRS)

    Shah, B.; Hinedi, S.

    1989-01-01

    The Split-Symbol Moments Estimator (SSME) is an algorithm that is designed to estimate symbol signal-to-noise ratio (SNR) in the presence of additive white Gaussian noise (AWGN). The performance of the SSME algorithm in band-limited channels is examined. The effects of the resulting inter-symbol interference (ISI) are quantified. All results obtained are in closed form and can be easily evaluated numerically for performance prediction purposes. Furthermore, they are validated through digital simulations.
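
    One common formulation of the split-symbol idea: integrate each symbol over its two halves; the mean product of the halves estimates signal power (the independent noise averages out), while the mean squared sum estimates signal-plus-noise power. The model below (half-symbol value s/2 plus independent zero-mean noise) is a simplified sketch of the SSME, not the paper's exact derivation.

```python
def split_symbol_snr(half1, half2):
    """SNR estimate from split-symbol moments. half1[i] and half2[i] are
    the integrated first and second halves of symbol i, modeled as
    s/2 + noise with independent zero-mean noise in each half."""
    n = len(half1)
    mp = sum(a * b for a, b in zip(half1, half2)) / n          # ~ s^2 / 4
    msq = sum((a + b) ** 2 for a, b in zip(half1, half2)) / n  # ~ s^2 + sigma^2
    return 4.0 * mp / (msq - 4.0 * mp)                         # ~ s^2 / sigma^2
```

    Inter-symbol interference biases both moments, which is precisely the effect the paper quantifies in closed form for band-limited channels.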

  14. Multi-fidelity stochastic collocation method for computation of statistical moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu; Linebarger, Erin M., E-mail: aerinline@sci.utah.edu; Xiu, Dongbin, E-mail: xiu.16@osu.edu

    We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems, in the presence of models with different fidelities. The method extends a previously developed multi-fidelity approximation method. By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound for the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.

  15. Elementary quantum mechanics of the neutron with an electric dipole moment

    PubMed Central

    Baym, Gordon; Beck, D. H.

    2016-01-01

    The neutron, in addition to possibly having a permanent electric dipole moment as a consequence of violation of time-reversal invariance, develops an induced electric dipole moment in the presence of an external electric field. We present here a unified nonrelativistic description of these two phenomena, in which the dipole moment operator, D⃗, is not constrained to lie along the spin operator. Although the expectation value of D⃗ in the neutron is less than 10⁻¹³ of the neutron radius, r_n, the expectation value of D⃗² is of order r_n². We determine the spin motion in external electric and magnetic fields, as used in past and future searches for a permanent dipole moment, and show that the neutron electric polarizability, although entering the neutron energy in an external electric field, does not affect the spin motion. In a simple nonrelativistic model we show that the expectation value of the permanent dipole is, to lowest order, proportional to the product of the time-reversal-violating coupling strength and the electric polarizability of the neutron. PMID:27325765

  16. Elementary quantum mechanics of the neutron with an electric dipole moment.

    PubMed

    Baym, Gordon; Beck, D H

    2016-07-05

    The neutron, in addition to possibly having a permanent electric dipole moment as a consequence of violation of time-reversal invariance, develops an induced electric dipole moment in the presence of an external electric field. We present here a unified nonrelativistic description of these two phenomena, in which the dipole moment operator, D⃗, is not constrained to lie along the spin operator. Although the expectation value of D⃗ in the neutron is less than 10⁻¹³ of the neutron radius, r_n, the expectation value of D⃗² is of order r_n². We determine the spin motion in external electric and magnetic fields, as used in past and future searches for a permanent dipole moment, and show that the neutron electric polarizability, although entering the neutron energy in an external electric field, does not affect the spin motion. In a simple nonrelativistic model we show that the expectation value of the permanent dipole is, to lowest order, proportional to the product of the time-reversal-violating coupling strength and the electric polarizability of the neutron.

  17. On the pth moment estimates of solutions to stochastic functional differential equations in the G-framework.

    PubMed

    Faizullah, Faiz

    2016-01-01

    The aim of the current paper is to present path-wise and moment estimates for solutions to stochastic functional differential equations (SFDEs) with a non-linear growth condition in the framework of G-expectation and G-Brownian motion. Under the non-linear growth condition, the pth moment estimates for solutions to SFDEs driven by G-Brownian motion are proved. The properties of G-expectation, Hölder's inequality, Bihari's inequality, Gronwall's inequality and the Burkholder-Davis-Gundy inequalities are used to develop this theory. In addition, path-wise asymptotic estimates and continuity of the pth moment for solutions to SFDEs in the G-framework with a non-linear growth condition are shown.

  18. Novel structures for Discrete Hartley Transform based on first-order moments

    NASA Astrophysics Data System (ADS)

    Xiong, Jun; Zheng, Wenjuan; Wang, Hao; Liu, Jianguo

    2018-03-01

    Discrete Hartley Transform (DHT) is an important tool in digital signal processing. In the present paper, the DHT is firstly transformed into the first-order moments-based form, then a new fast algorithm is proposed to calculate the first-order moments without multiplication. Based on the algorithm theory, the corresponding hardware architecture for DHT is proposed, which only contains shift operations and additions with no need for multipliers and large memory. To verify the availability and effectiveness, the proposed design is implemented with hardware description language and synthesized by Synopsys Design Compiler with 0.18-μm SMIC library. A series of experiments have proved that the proposed architecture has better performance in terms of the product of the hardware consumption and computation time.
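
    For reference, the transform being computed is H[k] = Σ_n x[n]·cas(2πnk/N) with cas(t) = cos(t) + sin(t). The direct O(N²) version below uses multiplications freely; the paper's contribution is to restructure this computation around first-order moments so that no multipliers are needed in hardware.

```python
import math

def dht(x):
    """Direct O(N^2) Discrete Hartley Transform.
    H[k] = sum_n x[n] * cas(2*pi*n*k/N), cas(t) = cos(t) + sin(t).
    The DHT is an involution up to scaling: dht(dht(x)) == N * x."""
    n = len(x)
    return [sum(x[m] * (math.cos(2 * math.pi * m * k / n)
                        + math.sin(2 * math.pi * m * k / n))
                for m in range(n))
            for k in range(n)]
```

    The involution property gives a quick correctness check: transforming twice returns the input scaled by N.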

  19. Multifractal detrending moving-average cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2011-07-01

    There are a number of situations in which several signals are simultaneously recorded in complex systems that exhibit long-term power-law cross correlations. Multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on detrended fluctuation analysis (the MFXDFA method). We develop in this work a class of MFDCCA algorithms based on detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of their multifractal nature. In all cases, the scaling exponents h_xy extracted by the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between the two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performances, outperforming the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q < 0 and underperforms when q > 0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of h_xy(q) since its h_xy(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and MFXDFA algorithms fail to extract the rational multifractal nature.

  20. Damping Rotor Nutation Oscillations in a Gyroscope with Magnetic Suspension

    NASA Technical Reports Server (NTRS)

    Komarov, Valentine N.

    1996-01-01

    A possibility of an effective damping of rotor nutations by modulating the field of the moment transducers in synchronism with the nutation frequency is considered. The algorithms for forming the control moments are proposed and their application is discussed.

  1. Steering law design for redundant single-gimbal control moment gyroscopes. [for spacecraft attitude control

    NASA Technical Reports Server (NTRS)

    Bedrossian, Nazareth S.; Paradiso, Joseph; Bergmann, Edward V.; Rowell, Derek

    1990-01-01

    Two steering laws are presented for single-gimbal control moment gyroscopes. An approach using the Moore-Penrose pseudoinverse with a nondirectional null-motion algorithm is shown by example to avoid internal singularities for unidirectional torque commands, for which existing algorithms fail. Because this is still a tangent-based approach, however, singularity avoidance cannot be guaranteed. The singularity robust inverse is introduced as an alternative to the pseudoinverse for computing torque-producing gimbal rates near singular states. This approach, coupled with the nondirectional null algorithm, is shown by example to provide better steering law performance by allowing torque errors to be produced in the vicinity of singular states.

  2. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.

    PubMed

    Dhar, Amrit; Minin, Vladimir N

    2017-05-01

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.

  3. Determining residual reduction algorithm kinematic tracking weights for a sidestep cut via numerical optimization.

    PubMed

    Samaan, Michael A; Weinhandl, Joshua T; Bawab, Sebastian Y; Ringleb, Stacie I

    2016-12-01

    Musculoskeletal modeling allows for the determination of various parameters during dynamic maneuvers by using in vivo kinematic and ground reaction force (GRF) data as inputs. Differences between experimental and model marker data and inconsistencies in the GRFs applied to these musculoskeletal models may not produce accurate simulations. Therefore, residual forces and moments are applied to these models in order to reduce these differences. Numerical optimization techniques can be used to determine optimal tracking weights of each degree of freedom of a musculoskeletal model in order to reduce differences between the experimental and model marker data as well as residual forces and moments. In this study, the particle swarm optimization (PSO) and simplex simulated annealing (SIMPSA) algorithms were used to determine optimal tracking weights for the simulation of a sidestep cut. The PSO and SIMPSA algorithms were able to produce model kinematics that were within 1.4° of experimental kinematics with residual forces and moments of less than 10 N and 18 Nm, respectively. The PSO algorithm was able to replicate the experimental kinematic data more closely and produce more dynamically consistent kinematic data for a sidestep cut compared to the SIMPSA algorithm. Future studies should use external optimization routines to determine dynamically consistent kinematic data and report the differences between experimental and model data for these musculoskeletal simulations.
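
    A minimal particle swarm optimizer conveys the search mechanism used above for tuning tracking weights; the inertia and acceleration coefficients below are common textbook defaults, not the study's settings, and the objective here is a toy function rather than a residual-reduction cost.

```python
import random

def pso(f, bounds, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization: each particle moves under
    inertia plus attraction toward its personal best and the global best.
    `bounds` is a (lower, upper) pair of per-dimension limit lists."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = len(lo)
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # Clamp the new position to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo[d]), hi[d])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy usage: minimize a 2-D sphere function
best, best_val = pso(lambda p: p[0] ** 2 + p[1] ** 2, ([-5.0, -5.0], [5.0, 5.0]))
```

    In the study above the decision variables would be the per-degree-of-freedom tracking weights and the objective a combination of marker errors and residual forces and moments.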

  4. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time

    PubMed Central

    Dhar, Amrit

    2017-01-01

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780

  5. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Bader, Jon B.

    2010-01-01

Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation when terms of a regression model of a balance can directly be derived from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.

  6. Moment-tensor solutions for the 24 November 1987 Superstition Hills, California, earthquakes

    USGS Publications Warehouse

    Sipkin, S.A.

    1989-01-01

The teleseismic long-period waveforms recorded by the Global Digital Seismograph Network from the two largest Superstition Hills earthquakes are inverted using an algorithm based on optimal filter theory. These solutions differ slightly from those published in the Preliminary Determination of Epicenters Monthly Listing because a somewhat different, improved data set was used in the inversions and a time-dependent moment-tensor algorithm was used to investigate the complexity of the main shock. The foreshock (origin time 01:54:14.5, mb 5.7, Ms 6.2) had a scalar moment of 2.3 × 10^25 dyne-cm, a depth of 8 km, and a mechanism of strike 217°, dip 79°, rake 4°. The main shock (origin time 13:15:56.4, mb 6.0, Ms 6.6) was a complex event, consisting of at least two subevents, with a combined scalar moment of 1.0 × 10^26 dyne-cm, a depth of 10 km, and a mechanism of strike 303°, dip 89°, rake -180°. -Authors

  7. Small sample estimation of the reliability function for technical products

    NASA Astrophysics Data System (ADS)

    Lyamets, L. L.; Yakimenko, I. V.; Kanishchev, O. A.; Bliznyuk, O. A.

    2017-12-01

    It is demonstrated that, in the absence of big statistic samples obtained as a result of testing complex technical products for failure, statistic estimation of the reliability function of initial elements can be made by the moments method. A formal description of the moments method is given and its advantages in the analysis of small censored samples are discussed. A modified algorithm is proposed for the implementation of the moments method with the use of only the moments at which the failures of initial elements occur.
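The moments method can be illustrated with a minimal sketch. Assuming exponentially distributed lifetimes (an illustrative assumption for this note, not the paper's modified algorithm for censored samples), equating the sample mean of the observed failure moments to the first theoretical moment E[T] = 1/rate yields the failure rate, from which the reliability function follows:

```python
import math

def exponential_mom_rate(failure_times):
    """Method-of-moments estimate of the failure rate: rate = 1 / sample mean."""
    mean = sum(failure_times) / len(failure_times)
    return 1.0 / mean

def reliability(t, rate):
    """Reliability function R(t) = P(T > t) = exp(-rate * t)."""
    return math.exp(-rate * t)
```

With failure times 100, 200 and 300 hours the estimated rate is 1/200 per hour, and the reliability function decays accordingly.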

  8. Extension of moment projection method to the fragmentation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Shaohua; Yapp, Edward K.Y.; Akroyd, Jethro

    2017-04-15

The method of moments is a simple but efficient method of solving the population balance equation which describes particle dynamics. Recently, the moment projection method (MPM) was proposed and validated for particle inception, coagulation, growth and, more importantly, shrinkage; here the method is extended to include the fragmentation process. The performance of MPM is tested for 13 different test cases for different fragmentation kernels, fragment distribution functions and initial conditions. Comparisons are made with the quadrature method of moments (QMOM), the hybrid method of moments (HMOM) and a high-precision stochastic solution calculated using the established direct simulation algorithm (DSA), and the advantages of MPM are drawn.

  9. Biologically Inspired Strategies, Algorithms and Hardware for Visual Guidance of Autonomous Helicopters

    DTIC Science & Technology

    2011-05-02

Dacke, J. Reinhard and M.V. Srinivasan (2010) The moment before touchdown: landing manoeuvres of the honeybee Apis mellifera. Journal of Experimental Biology 213, 262-270.

  10. Fluid preconditioning for Newton–Krylov-based, fully implicit, electrostatic particle-in-cell simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G., E-mail: gchen@lanl.gov; Chacón, L.; Leibs, C.A.

    2014-02-01

A recent proof-of-principle study proposes an energy- and charge-conserving, nonlinearly implicit electrostatic particle-in-cell (PIC) algorithm in one dimension [9]. The algorithm in the reference employs an unpreconditioned Jacobian-free Newton–Krylov method, which ensures nonlinear convergence at every timestep (resolving the dynamical timescale of interest). Kinetic enslavement, which is one key component of the algorithm, not only enables fully implicit PIC as a practical approach, but also allows preconditioning the kinetic solver with a fluid approximation. This study proposes such a preconditioner, in which the linearized moment equations are closed with moments computed from particles. Effective acceleration of the linear GMRES solve is demonstrated, on both uniform and non-uniform meshes. The algorithm performance is largely insensitive to the electron–ion mass ratio. Numerical experiments are performed on a 1D multi-scale ion acoustic wave test problem.

  11. Radiation and scattering from bodies of translation. Volume 2: User's manual, computer program documentation

    NASA Astrophysics Data System (ADS)

    Medgyesi-Mitschang, L. N.; Putnam, J. M.

    1980-04-01

A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation and scattering from finite-length open cylinders of arbitrary cross section as well as the near fields and aperture-coupled fields for rectangular apertures on such bodies. The theoretical development underlying the algorithm is described in Volume 1. The structure of the computer algorithm is such that neither a priori knowledge of the method of moments technique nor detailed FORTRAN experience is presupposed for the user. A set of carefully drawn example problems illustrates all the options of the algorithm. For more detailed understanding of the workings of the codes, special cross referencing to the equations in Volume 1 is provided. For additional clarity, comment statements are liberally interspersed in the code listings, summarized in the present volume.

  12. Optimization and large scale computation of an entropy-based moment closure

    NASA Astrophysics Data System (ADS)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  13. Extended AIC model based on high order moments and its application in the financial market

    NASA Astrophysics Data System (ADS)

    Mao, Xuegeng; Shang, Pengjian

    2018-07-01

    In this paper, an extended method of traditional Akaike Information Criteria(AIC) is proposed to detect the volatility of time series by combining it with higher order moments, such as skewness and kurtosis. Since measures considering higher order moments are powerful in many aspects, the properties of asymmetry and flatness can be observed. Furthermore, in order to reduce the effect of noise and other incoherent features, we combine the extended AIC algorithm with multiscale wavelet analysis, in which the newly extended AIC algorithm is applied to wavelet coefficients at several scales and the time series are reconstructed by wavelet transform. After that, we create AIC planes to derive the relationship among AIC values using variance, skewness and kurtosis respectively. When we test this technique on the financial market, the aim is to analyze the trend and volatility of the closing price of stock indices and classify them. And we also adapt multiscale analysis to measure complexity of time series over a range of scales. Empirical results show that the singularity of time series in stock market can be detected via extended AIC algorithm.
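The higher-order moments the extended AIC builds on are the standard standardized sample moments: skewness (third moment, asymmetry) and kurtosis (fourth moment, flatness). A minimal sketch, using plain population-style estimators (the paper's exact conventions may differ):

```python
def central_moment(xs, k):
    """k-th central moment of a sample: mean of (x - mean)^k."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** k for x in xs) / len(xs)

def skewness(xs):
    """Third standardized moment: mu_3 / mu_2^(3/2); zero for symmetric data."""
    return central_moment(xs, 3) / central_moment(xs, 2) ** 1.5

def kurtosis(xs):
    """Fourth standardized moment: mu_4 / mu_2^2 (3 for a normal distribution)."""
    return central_moment(xs, 4) / central_moment(xs, 2) ** 2
```

A symmetric sample such as 1..5 has zero skewness, while its kurtosis reflects how flat the distribution is relative to a Gaussian.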

  14. Optimization and large scale computation of an entropy-based moment closure

    DOE PAGES

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

We present computational advances and results in the implementation of an entropy-based moment closure, M N, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as P N, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the M N algorithm that do not appear for the P N algorithm. We also observe that in weak scaling tests, the ratio in time to solution of M N to P N decreases.

  15. Algorithms for Determining Physical Responses of Structures Under Load

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance; Ko, William L.

    2012-01-01

Ultra-efficient real-time structural monitoring algorithms have been developed to provide extensive information about the physical response of structures under load. These algorithms are driven by actual strain data to measure accurately local strains at multiple locations on the surface of a structure. Through a single point load calibration test, these structural strains are then used to calculate key physical properties of the structure at each measurement location. Such properties include the structure's flexural rigidity (the product of the structure's modulus of elasticity and its moment of inertia) and the section modulus (the moment of inertia divided by the structure's half-depth). The resulting structural properties at each location can be used to determine the structure's bending moment, shear, and structural loads in real time while the structure is in service. The amount of structural information can be maximized through the use of highly multiplexed fiber Bragg grating technology using optical time domain reflectometry and optical frequency domain reflectometry, which can provide a local strain measurement every 10 mm on a single hair-sized optical fiber. Since local strain is used as input to the algorithms, this system serves multiple purposes of measuring strains and displacements, as well as determining structural bending moment, shear, and loads for assessing real-time structural health. The first step is to install a series of strain sensors on the structure's surface in such a way as to measure bending strains at desired locations. The next step is to perform a simple ground test calibration. For a beam of length l (see example), discretized into n sections and subjected to a tip load of P that places the beam in bending, the flexural rigidity of the beam can be experimentally determined at each measurement location x. The bending moment at each station can then be determined for any general set of loads applied during operation.
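The calibration step can be sketched under textbook Euler-Bernoulli bending assumptions (a hedged illustration, not the authors' flight algorithms): for a cantilever of length l under a tip load P, the bending moment at station x is M = P(l - x), so a measured surface strain eps at half-depth c gives the flexural rigidity EI = M*c/eps, which is then inverted in service to recover the bending moment from strain alone.

```python
def calibrate_flexural_rigidity(P, l, x, c, strain):
    """Ground-test calibration: for a cantilever of length l under tip load P,
    the bending moment at station x is M = P*(l - x); with measured surface
    strain and half-depth c, Euler-Bernoulli bending gives EI = M*c/strain."""
    M = P * (l - x)
    return M * c / strain

def operational_bending_moment(EI, c, strain):
    """In service, invert the same relation: M = EI*strain/c."""
    return EI * strain / c
```

For a beam with true EI = 1000 N·m², the calibration recovers EI exactly from the strain produced by a known tip load, and the operational formula then reproduces the applied bending moment.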

  16. Time-frequency analysis-based time-windowing algorithm for the inverse synthetic aperture radar imaging of ships

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Zhang, Xi; Sun, Weifeng; Dai, Yongshou; Wan, Yong

    2018-01-01

An algorithm based on time-frequency analysis is proposed to select an imaging time window for the inverse synthetic aperture radar imaging of ships. An appropriate range bin is selected to perform the time-frequency analysis after radial motion compensation. The selected range bin is that with the maximum mean amplitude among the range bins whose echoes are confirmed to be contributed by a dominant scatter. The criterion for judging whether the echoes of a range bin are contributed by a dominant scatter is key to the proposed algorithm and is therefore described in detail. When the first range bin that satisfies the judgment criterion is found, a sequence composed of the frequencies that have the largest amplitudes in every moment's time-frequency spectrum corresponding to this range bin is employed to calculate the length and the center moment of the optimal imaging time window. Experiments performed with simulation data and real data show the effectiveness of the proposed algorithm, and comparisons between the proposed algorithm and the image contrast-based algorithm (ICBA) are provided. The proposed algorithm achieves image contrast similar to that of the ICBA, with lower entropy.

  17. Infrared Ship Classification Using A New Moment Pattern Recognition Concept

    NASA Astrophysics Data System (ADS)

    Casasent, David; Pauly, John; Fetterly, Donald

    1982-03-01

An analysis of the statistics of the moments and the conventional invariant moments shows that the variance of the latter becomes quite large as the order of the moments and the degree of invariance increase. Moreover, the need to whiten the error volume increases with the order and degree, but so does the computational load associated with computing the whitening operator. We thus advance a new estimation approach to the use of moments in pattern recognition that overcomes these problems. This work is supported by experimental verification and demonstration on an infrared ship pattern recognition problem. The computational load associated with our new algorithm is also shown to be very low.

  18. Control of systematic uncertainties in the storage ring search for an electric dipole moment by measuring the electric quadrupole moment

    NASA Astrophysics Data System (ADS)

    Magiera, Andrzej

    2017-09-01

Measurements of the electric dipole moment (EDM) of light hadrons using a storage ring have been proposed. The expected effect is very small, so various subtle effects need to be considered. In particular, the interaction of a particle's magnetic dipole moment and electric quadrupole moment with electromagnetic field gradients can produce an effect of a similar order of magnitude to that expected for the EDM. This paper describes a very promising method employing an rf Wien filter that allows that contribution to be disentangled from the genuine EDM effect. It is shown that the two effects can be separated by a proper setting of the rf Wien filter frequency and phase. In an EDM measurement the magnitude of the systematic uncertainties plays a key role, and they should be under strict control. It is shown that the particles' interaction with field gradients also offers the possibility of estimating global systematic uncertainties with the precision necessary for an EDM measurement of the planned accuracy.

  19. Parameterization of cloud lidar backscattering profiles by means of asymmetrical Gaussians

    NASA Astrophysics Data System (ADS)

    del Guasta, Massimo; Morandi, Marco; Stefanutti, Leopoldo

    1995-06-01

A fitting procedure for cloud lidar data processing is presented that is based on the computation of the first three moments of the vertical-backscattering (or -extinction) profile. Single-peak clouds or single cloud layers are approximated by asymmetrical Gaussians. The algorithm is particularly stable with respect to noise and processing errors, and it is much faster than the equivalent least-squares approach. Multilayer clouds can easily be treated as a sum of single asymmetrical Gaussian peaks. The method is suitable for cloud-shape parametrization in noisy lidar signatures (like those expected from satellite lidars). It also permits an improvement of cloud radiative-property computations based on huge lidar data sets for which storage and careful examination of single lidar profiles cannot be carried out.
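The first three signal-weighted moments of a profile, the quantities on which such a fit rests, can be sketched as follows. This is a minimal illustration on a background-subtracted profile (centroid, RMS width, and skewness); the paper's exact normalization and the mapping onto asymmetric-Gaussian parameters may differ:

```python
def profile_moments(heights, signal):
    """Centroid, RMS width, and skewness of a backscatter profile,
    using the (background-subtracted) signal as the weight."""
    w = sum(signal)
    mean = sum(h * s for h, s in zip(heights, signal)) / w
    var = sum((h - mean) ** 2 * s for h, s in zip(heights, signal)) / w
    skew = (sum((h - mean) ** 3 * s for h, s in zip(heights, signal)) / w) / var ** 1.5
    return mean, var ** 0.5, skew
```

A symmetric single-peak profile yields zero skewness; a nonzero third moment is what drives the asymmetry of the fitted Gaussian.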

  20. Data-based comparisons of moments estimators using historical and paleoflood data

    USGS Publications Warehouse

    England, J.F.; Jarrett, R.D.; Salas, J.D.

    2003-01-01

This paper presents the first systematic comparison, using historical and paleoflood data, of moments-based flood frequency methods. Peak flow estimates were compiled from streamflow-gaging stations with historical and/or paleoflood data at 36 sites located in the United States, Argentina, United Kingdom and China, covering a diverse range of hydrologic conditions. The Expected Moments Algorithm (EMA) and the Bulletin 17B historical weighting procedure (B17H) were compared in terms of goodness of fit using 25 of the data sets. Results from this comparison indicate that EMA is a viable alternative to current B17H procedures from an operational perspective, and performed equal to or better than B17H for the data analyzed. We demonstrate satisfactory EMA performance for the remaining 11 sites with multiple thresholds and binomial censoring, which B17H cannot accommodate. It is shown that the EMA estimator readily incorporates these types of information and the LP-III distribution provided an adequate fit to the data in most cases. The results shown here are consistent with Monte Carlo simulation studies, and demonstrate that EMA is preferred overall to B17H. The Bulletin 17B document could be revised to include an option for EMA as an alternative to the existing historical weighting approach. These results are of practical relevance to hydrologists and water resources managers for applications in floodplain management, design of hydraulic structures, and risk analysis for dams. © 2003 Elsevier Science B.V. All rights reserved.

  1. Data-based comparisons of moments estimators using historical and paleoflood data

    NASA Astrophysics Data System (ADS)

    England, John F.; Jarrett, Robert D.; Salas, José D.

    2003-07-01

    This paper presents the first systematic comparison, using historical and paleoflood data, of moments-based flood frequency methods. Peak flow estimates were compiled from streamflow-gaging stations with historical and/or paleoflood data at 36 sites located in the United States, Argentina, United Kingdom and China, covering a diverse range of hydrologic conditions. The Expected Moments Algorithm (EMA) and the Bulletin 17B historical weighting procedure (B17H) were compared in terms of goodness of fit using 25 of the data sets. Results from this comparison indicate that EMA is a viable alternative to current B17H procedures from an operational perspective, and performed equal to or better than B17H for the data analyzed. We demonstrate satisfactory EMA performance for the remaining 11 sites with multiple thresholds and binomial censoring, which B17H cannot accommodate. It is shown that the EMA estimator readily incorporates these types of information and the LP-III distribution provided an adequate fit to the data in most cases. The results shown here are consistent with Monte Carlo simulation studies, and demonstrate that EMA is preferred overall to B17H. The Bulletin 17B document could be revised to include an option for EMA as an alternative to the existing historical weighting approach. These results are of practical relevance to hydrologists and water resources managers for applications in floodplain management, design of hydraulic structures, and risk analysis for dams.

  2. Hardware-efficient implementation of digital FIR filter using fast first-order moment algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Li; Liu, Jianguo; Xiong, Jun; Zhang, Jing

    2018-03-01

As the digital finite impulse response (FIR) filter can be transformed into the shift-add form of multiple small-sized first-order moments, based on the existing fast first-order moment algorithm, this paper presents a novel multiplier-less structure to calculate any number of sequential filtering results in parallel. The theoretical analysis on its hardware and time-complexities reveals that by appropriately setting the degree of parallelism and the decomposition factor of a fixed word width, the proposed structure may achieve better area-time efficiency than the existing two-dimensional (2-D) memoryless-based filter. To evaluate the performance concretely, the proposed designs for different taps along with the existing 2-D memoryless-based filters, are synthesized by Synopsys Design Compiler with 0.18-μm SMIC library. The comparisons show that the proposed design has less area-time complexity and power consumption when the number of filter taps is larger than 48.
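The underlying identity, that an inner product over quantized inputs can be recast as a first-order moment and then evaluated multiplier-free by suffix-sum accumulation, can be sketched in software. This is an illustration of the idea only, not the proposed hardware structure; input samples are assumed to be integers in 0..levels-1:

```python
def fir_direct(h, x):
    """Reference FIR inner product: y = sum_k h[k] * x[k]."""
    return sum(hk * xk for hk, xk in zip(h, x))

def fir_first_order_moment(h, x, levels):
    """Same inner product, reorganized as a first-order moment.
    Step 1: bucket coefficients by the input value they multiply,
            giving counts N[v] = sum of h[k] where x[k] == v.
    Step 2: evaluate sum_v v * N[v] without multiplications, using
            sum_v v*N[v] == sum over v>=1 of (N[v] + N[v+1] + ...)."""
    N = [0] * levels
    for hk, xk in zip(h, x):
        N[xk] += hk
    acc, suffix = 0, 0
    for v in range(levels - 1, 0, -1):  # suffix sums: additions only
        suffix += N[v]
        acc += suffix
    return acc
```

Both routines return identical results; the reorganized form trades the per-tap multiplications for bucketing and accumulation, which is what makes a multiplier-less hardware realization possible.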

  3. Quadratures with multiple nodes, power orthogonality, and moment-preserving spline approximation

    NASA Astrophysics Data System (ADS)

    Milovanovic, Gradimir V.

    2001-01-01

Quadrature formulas with multiple nodes, power orthogonality, and some applications of such quadratures to moment-preserving approximation by defective splines are considered. An account of power orthogonality (s- and σ-orthogonal polynomials) and generalized Gaussian quadratures with multiple nodes, including stable algorithms for the numerical construction of the corresponding polynomials and Cotes numbers, is given. In particular, the important case of the Chebyshev weight is analyzed. Finally, some applications to moment-preserving approximation of functions by defective splines are discussed.

  4. Comparison of recent S-wave indicating methods

    NASA Astrophysics Data System (ADS)

    Hubicka, Katarzyna; Sokolowski, Jakub

    2018-01-01

Seismic events consist of surface waves and body waves. Because the body waves are faster (P-waves) and more energetic (S-waves), their analysis is addressed more often in the literature. The most universal piece of information obtained from a recorded wave is its moment of arrival. When this information is available from at least four seismometers in different locations, the epicentre of the event can be estimated [1]. Since the recorded body waves may overlap in the signal, the problem of the wave onset moment is considered more often for the faster P-wave than for the S-wave. This does not mean, however, that the issue of S-wave arrival time is not addressed at all. As manual picking is time-consuming, methods of automatic detection are recommended (though these may be less accurate). In this paper, four recently developed methods for estimating the S-wave arrival are compared: a method operating on empirical mode decomposition and the Teager-Kaiser operator [2], a modification of the STA/LTA algorithm [3], a method using a nearest neighbour-based approach [4], and an algorithm operating on characteristics of the signals' second moments. The methods are also compared to a well-known algorithm based on the autoregressive model [5]. The algorithms are tested in terms of their S-wave arrival identification accuracy on real data originating from the Incorporated Research Institutions for Seismology (IRIS) database.

  5. Target recognition of ladar range images using slice image: comparison of four improved algorithms

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Cao, Jingya; Wang, Liang; Zhai, Yu; Cheng, Yang

    2017-07-01

Compared with traditional 3-D shape data, ladar range images possess strong noise, shape degeneracy, and sparsity, which make feature extraction and representation difficult. The slice image is an effective feature descriptor for resolving this problem. We propose four improved algorithms for target recognition of ladar range images using the slice image. To improve the resolution invariance of the slice image, mean value detection instead of maximum value detection is applied in all four improved algorithms. To improve the rotation invariance of the slice image, three new feature descriptors (the feature slice image, slice-Zernike moments, and slice-Fourier moments) are applied in the last three improved algorithms, respectively. Backpropagation neural networks are used as feature classifiers in the last two improved algorithms. The performance of these four improved recognition systems is analyzed comprehensively with respect to the three invariances, recognition rate, and execution time. The final experimental results show that the improvements in these four algorithms achieve the desired effect, that the three invariances of the feature descriptors are not directly related to the final recognition performance of the recognition systems, and that the four improved recognition systems perform differently under different conditions.

  6. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

This paper proposes an improved image registration method combining Hu-invariant-moment contour information with feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as a time-consuming feature point extraction process, an overload of redundant invalid information, and inefficiency. First, the neighborhood of each pixel is used to extract contour information, with the Hu invariant moments serving as a similarity measure for extracting SIFT feature points in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel function to improve the initial matching efficiency and obtain fewer mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm is used to fuse the mosaic images and realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the method proposed in this paper.
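Hu invariant moments of the kind used here as a similarity measure are standard image moments. A minimal sketch of the first invariant, phi_1 = eta_20 + eta_02, computed from raw and central moments of a small grayscale image (an illustration of the invariant itself, not the paper's contour pipeline):

```python
def raw_moment(img, p, q):
    """m_pq = sum over pixels of x^p * y^q * I(x, y)."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def hu_phi1(img):
    """First Hu invariant phi_1 = eta_20 + eta_02, where eta_pq is the
    central moment mu_pq normalized by m00^((p+q)/2 + 1)."""
    m00 = raw_moment(img, 0, 0)
    xc = raw_moment(img, 1, 0) / m00
    yc = raw_moment(img, 0, 1) / m00
    mu20 = sum(((x - xc) ** 2) * v
               for y, row in enumerate(img) for x, v in enumerate(row))
    mu02 = sum(((y - yc) ** 2) * v
               for y, row in enumerate(img) for x, v in enumerate(row))
    return (mu20 + mu02) / m00 ** 2
```

Because the central moments are taken about the centroid, phi_1 is unchanged when the shape is translated within the image, which is what makes it usable as a contour similarity measure.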

  7. The calculation of the mass moment of inertia of a fluid in a rotating rectangular tank

    NASA Technical Reports Server (NTRS)

    1977-01-01

This analysis calculated the mass moment of inertia of a nonviscous fluid in a slowly rotating rectangular tank. Given the dimensions of the tank in the x, y, and z coordinates, the axis of rotation, the percentage of the tank occupied by the fluid, and the angle of rotation, an algorithm was written that could calculate the mass moment of inertia of the fluid. While not included in this paper, the change in the mass moment of inertia of the fluid could then be used to calculate the force exerted by the fluid on the container wall.
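The computation can be sketched for the simplest case of a flat (non-rotated) free surface, where the fluid forms a rectangular block whose moment of inertia about the vertical center axis has the closed form m(a² + b²)/12. This is a hedged illustration of the integral rho * (x² + y²) dV, not the paper's algorithm, which also handles the tilted-surface geometry:

```python
def fluid_inertia_z(a, b, depth, rho, n=40):
    """Mass moment of inertia about the vertical axis through the tank center
    for an a x b x depth rectangular fluid block (flat free surface),
    by midpoint-rule integration of rho * (x^2 + y^2) over the volume."""
    dx, dy = a / n, b / n
    dV = dx * dy * depth  # z integrates out for a flat surface
    I = 0.0
    for i in range(n):
        x = -a / 2 + (i + 0.5) * dx
        for j in range(n):
            y = -b / 2 + (j + 0.5) * dy
            I += rho * (x * x + y * y) * dV
    return I
```

For a 2 m x 1 m tank filled with water to 0.5 m, the numerical result agrees with the closed form m(a² + b²)/12 to well under one percent at the default resolution.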

  8. Compressible, multiphase semi-implicit method with moment of fluid interface representation

    DOE PAGES

    Jemison, Matthew; Sussman, Mark; Arienti, Marco

    2014-09-16

A unified method for simulating multiphase flows using an exactly mass, momentum, and energy conserving Cell-Integrated Semi-Lagrangian advection algorithm is presented. The deforming material boundaries are represented using the moment-of-fluid method. Our new algorithm uses a semi-implicit pressure update scheme that asymptotically preserves the standard incompressible pressure projection method in the limit of infinite sound speed. The asymptotically preserving attribute makes the new method applicable to compressible and incompressible flows, including stiff materials, enabling large time steps characteristic of incompressible flow algorithms rather than the small time steps required by explicit methods. Moreover, shocks are captured and material discontinuities are tracked, without the aid of any approximate or exact Riemann solvers. Simulations of underwater explosions and fluid jetting in one, two, and three dimensions are presented which illustrate the effectiveness of the new algorithm at efficiently computing multiphase flows containing shock waves and material discontinuities with large "impedance mismatch."

  9. Advancements of in-flight mass moment of inertia and structural deflection algorithms for satellite attitude simulators

    NASA Astrophysics Data System (ADS)

    Wright, Jonathan W.

Experimental satellite attitude simulators have long been used to test and analyze control algorithms in order to drive down risk before implementation on an operational satellite. Ideally, the dynamic response of a terrestrial-based experimental satellite attitude simulator would be similar to that of an on-orbit satellite. Unfortunately, gravitational disturbance torques and poorly characterized moments of inertia introduce uncertainty into the system dynamics, leading to questionable attitude control algorithm experimental results. This research consists of three distinct, but related contributions to the field of developing robust satellite attitude simulators. In the first part of this research, existing approaches to estimate mass moments and products of inertia are evaluated, followed by a proposition and evaluation of a new approach that increases both the accuracy and precision of these estimates using typical on-board satellite sensors. Next, in order to better simulate the micro-torque environment of space, a new approach to mass balancing a satellite attitude simulator is presented, experimentally evaluated, and verified. Finally, in the third area of research, we capitalize on the platform improvements to analyze a control moment gyroscope (CMG) singularity avoidance steering law. Several successful experiments were conducted with the CMG array at near-singular configurations. An evaluation process was implemented to verify that the platform remained near the desired test momentum, showing that the first two components of this research were effective in allowing us to conduct singularity avoidance experiments in a representative space-like test environment.

  10. Monte Carlo closure for moment-based transport schemes in general relativistic radiation hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Foucart, Francois

    2018-04-01

    General relativistic radiation hydrodynamic simulations are necessary to accurately model a number of astrophysical systems involving black holes and neutron stars. Photon transport plays a crucial role in radiatively dominated accretion discs, while neutrino transport is critical to core-collapse supernovae and to the modelling of electromagnetic transients and nucleosynthesis in neutron star mergers. However, evolving the full Boltzmann equations of radiative transport is extremely expensive. Here, we describe the implementation in the general relativistic SPEC code of a cheaper radiation hydrodynamic method that theoretically converges to a solution of Boltzmann's equation in the limit of infinite numerical resources. The algorithm is based on a grey two-moment scheme, in which we evolve the energy density and momentum density of the radiation. Two-moment schemes require a closure that fills in missing information about the energy spectrum and higher order moments of the radiation. Instead of the approximate analytical closure currently used in core-collapse and merger simulations, we complement the two-moment scheme with a low-accuracy Monte Carlo evolution. The Monte Carlo results can provide any or all of the missing information in the evolution of the moments, as desired by the user. As a first test of our methods, we study a set of idealized problems demonstrating that our algorithm performs significantly better than existing analytical closures. We also discuss the current limitations of our method, in particular open questions regarding the stability of the fully coupled scheme.

  11. L-moments and TL-moments of the generalized lambda distribution

    USGS Publications Warehouse

    Asquith, W.H.

    2007-01-01

    The 4-parameter generalized lambda distribution (GLD) is a flexible distribution capable of mimicking the shapes of many distributions and data samples including those with heavy tails. The method of L-moments and the recently developed method of trimmed L-moments (TL-moments) are attractive techniques for parameter estimation for heavy-tailed distributions for which the L- and TL-moments have been defined. Analytical solutions for the first five L- and TL-moments in terms of GLD parameters are derived. Unfortunately, numerical methods are needed to compute the parameters from the L- or TL-moments. Algorithms are suggested for parameter estimation. Application of the GLD using both L- and TL-moment parameter estimates from example data is demonstrated, and a comparison with the L-moment fit of the 4-parameter kappa distribution is made. A small simulation study of the 98th percentile (far-right tail) is conducted for a heavy-tail GLD with high-outlier contamination. The simulations show, with respect to estimation of the 98th-percentile quantile, that TL-moments are less biased (more robust) in the presence of high-outlier contamination. However, the robustness comes at the expense of considerably more sampling variability. © 2006 Elsevier B.V. All rights reserved.
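    The numerical parameter-estimation step starts from sample L-moments, which are linear combinations of probability-weighted moments of the ordered data. As a minimal, generic sketch (not the paper's GLD-specific algorithm), the first three sample L-moments can be computed as:

```python
def sample_l_moments(data):
    """First three sample L-moments via unbiased probability-weighted moments."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * xi for i, xi in enumerate(x)) / (n * (n - 1) * (n - 2))
    l1 = b0                      # location (the sample mean)
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0    # numerator of L-skewness t3 = l3 / l2
    return l1, l2, l3
```

    Fitting the GLD would then reduce to numerically solving the paper's analytical L-moment expressions for the distribution parameters, given these sample values.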

  12. Development of an algorithm for controlling a multilevel three-phase converter

    NASA Astrophysics Data System (ADS)

    Taissariyeva, Kyrmyzy; Ilipbaeva, Lyazzat

    2017-08-01

    This work is devoted to the development of an algorithm for controlling the transistors in a three-phase multilevel conversion system. The developed algorithm organizes correct operation and describes the state of each transistor at every moment in time when constructing a computer model of a three-phase multilevel converter. The algorithm also ensures in-phase operation of the three-phase converter and yields a sinusoidal voltage curve at the converter output.

  13. Computing moment to moment BOLD activation for real-time neurofeedback

    PubMed Central

    Hinds, Oliver; Ghosh, Satrajit; Thompson, Todd W.; Yoo, Julie J.; Whitfield-Gabrieli, Susan; Triantafyllou, Christina; Gabrieli, John D.E.

    2013-01-01

    Estimating moment to moment changes in blood oxygenation level dependent (BOLD) activation levels from functional magnetic resonance imaging (fMRI) data has applications for learned regulation of regional activation, brain state monitoring, and brain-machine interfaces. In each of these contexts, accurate estimation of the BOLD signal in as little time as possible is desired. This is a challenging problem due to the low signal-to-noise ratio of fMRI data. Previous methods for real-time fMRI analysis have either sacrificed the ability to compute moment to moment activation changes by averaging several acquisitions into a single activation estimate or have sacrificed accuracy by failing to account for prominent sources of noise in the fMRI signal. Here we present a new method for computing the amount of activation present in a single fMRI acquisition that separates moment to moment changes in the fMRI signal intensity attributable to neural sources from those due to noise, resulting in a feedback signal more reflective of neural activation. This method computes an incremental general linear model fit to the fMRI timeseries, which is used to calculate the expected signal intensity at each new acquisition. The difference between the measured intensity and the expected intensity is scaled by the variance of the estimator in order to transform this residual difference into a statistic. Both synthetic and real data were used to validate this method and compare it to the only other published real-time fMRI method. PMID:20682350
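    The incremental estimation step described above can be sketched as follows. The regressor set (a constant plus a linear drift) and the function name are illustrative assumptions, not the paper's implementation, which models additional noise sources:

```python
import statistics

def neurofeedback_stats(ts):
    """Scaled residual ("feedback statistic") for each new acquisition.

    Minimal sketch of the incremental-GLM idea: fit a constant-plus-drift
    model to the prior samples, predict the new sample, and scale the
    prediction error by the residual spread to get a z-like statistic.
    """
    out = []
    for t in range(len(ts)):
        if t < 3:                          # need a few samples before fitting
            out.append(0.0)
            continue
        xs, ys = list(range(t)), ts[:t]
        xbar, ybar = sum(xs) / t, sum(ys) / t
        sxx = sum((x - xbar) ** 2 for x in xs)
        slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
        intercept = ybar - slope * xbar
        resid = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
        sigma = statistics.pstdev(resid) or 1.0    # guard against a perfect fit
        out.append((ts[t] - (intercept + slope * t)) / sigma)
    return out
```

    A pure drift produces near-zero feedback, while an abrupt activation-like jump produces a large statistic, which is the separation of signal from nuisance trend the method aims for.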

  14. A robust two-node, 13 moment quadrature method of moments for dilute particle flows including wall bouncing

    NASA Astrophysics Data System (ADS)

    Sun, Dan; Garmory, Andrew; Page, Gary J.

    2017-02-01

    For flows where the particle number density is low and the Stokes number is relatively high, as found when sand or ice is ingested into aircraft gas turbine engines, streams of particles can cross each other's path or bounce from a solid surface without being influenced by inter-particle collisions. The aim of this work is to develop an Eulerian method to simulate these types of flow. To this end, a two-node quadrature-based moment method using 13 moments is proposed. In the proposed algorithm thirteen moments of particle velocity, including cross-moments of second order, are used to determine the weights and abscissas of the two nodes and to set up the association between the velocity components in each node. Previous Quadrature Method of Moments (QMOM) algorithms either use more than two nodes, leading to increased computational expense, or are shown here to give incorrect results under some circumstances. This method gives the computational efficiency advantages of only needing two particle phase velocity fields whilst ensuring that a correct combination of weights and abscissas is returned for any arbitrary combination of particle trajectories without the need for any further assumptions. Particle crossing and wall bouncing with arbitrary combinations of angles are demonstrated using the method in a two-dimensional scheme. The ability of the scheme to include the presence of drag from a carrier phase is also demonstrated, as is bouncing off surfaces with inelastic collisions. The method is also applied to the Taylor-Green vortex flow test case and is found to give results superior to the existing two-node QMOM method and is in good agreement with results from Lagrangian modelling of this case.
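    The paper's scheme uses 13 moments, including cross-moments, in multiple dimensions. The core idea of recovering two weights and abscissas from moments can be illustrated in one dimension, where four raw moments determine a unique two-node quadrature (a textbook-style sketch, not the authors' algorithm):

```python
import math

def two_node_quadrature(m0, m1, m2, m3):
    """Two (weight, abscissa) nodes matching the raw moments m0..m3 in 1D.

    Assumes realizable moments with strictly positive variance c2; a full
    QMOM implementation must also handle the degenerate single-node case.
    """
    mu = m1 / m0
    c2 = m2 / m0 - mu ** 2                            # central variance
    c3 = m3 / m0 - 3 * mu * m2 / m0 + 2 * mu ** 3     # third central moment
    disc = math.sqrt(c3 ** 2 + 4 * c2 ** 3)
    d_plus = (c3 + disc) / (2 * c2)                   # abscissa offsets from mu
    d_minus = (c3 - disc) / (2 * c2)
    w_plus = -m0 * d_minus / (d_plus - d_minus)
    w_minus = m0 * d_plus / (d_plus - d_minus)
    return [(w_minus, mu + d_minus), (w_plus, mu + d_plus)]
```

    For two equal-weight particle streams at velocities 1 and 3, the raw moments are (1, 2, 5, 14), and the construction recovers both streams exactly, which is the crossing-jets behaviour an Eulerian two-node method must reproduce.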

  15. Study of photon correlation techniques for processing of laser velocimeter signals

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1977-01-01

    The objective was to provide the theory and a system design for a new type of photon counting processor for low level dual scatter laser velocimeter (LV) signals which would be capable of both the first order measurements of mean flow and turbulence intensity and also the second order time statistics: cross correlation, auto correlation, and related spectra. A general Poisson process model for low level LV signals and noise which is valid from the photon-resolved regime all the way to the limiting case of nonstationary Gaussian noise was used. Computer simulation algorithms and higher order statistical moment analysis of Poisson processes were derived and applied to the analysis of photon correlation techniques. A system design using a unique dual correlate and subtract frequency discriminator technique is postulated and analyzed. Expectation analysis indicates that the objective measurements are feasible.

  16. A hybrid multi-objective evolutionary algorithm for wind-turbine blade optimization

    NASA Astrophysics Data System (ADS)

    Sessarego, M.; Dixon, K. R.; Rival, D. E.; Wood, D. H.

    2015-08-01

    A concurrent-hybrid non-dominated sorting genetic algorithm (hybrid NSGA-II) has been developed and applied to the simultaneous optimization of the annual energy production, flapwise root-bending moment and mass of the NREL 5 MW wind-turbine blade. By hybridizing a multi-objective evolutionary algorithm (MOEA) with gradient-based local search, it is believed that the optimal set of blade designs could be achieved in lower computational cost than for a conventional MOEA. To measure the convergence between the hybrid and non-hybrid NSGA-II on a wind-turbine blade optimization problem, a computationally intensive case was performed using the non-hybrid NSGA-II. From this particular case, a three-dimensional surface representing the optimal trade-off between the annual energy production, flapwise root-bending moment and blade mass was achieved. The inclusion of local gradients in the blade optimization, however, shows no improvement in the convergence for this three-objective problem.

  17. X-29A Lateral-Directional Stability and Control Derivatives Extracted From High-Angle-of-Attack Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.; Wang, Kon-Sheng Charles Wang

    1996-01-01

    The lateral-directional stability and control derivatives of the X-29A number 2 are extracted from flight data over an angle-of-attack range of 4 degrees to 53 degrees using a parameter identification algorithm. The algorithm uses the linearized aircraft equations of motion and a maximum likelihood estimator in the presence of state and measurement noise. State noise is used to model the uncommanded forcing function caused by unsteady aerodynamics over the aircraft at angles of attack above 15 degrees. The results supported the flight-envelope-expansion phase of the X-29A number 2 by helping to update the aerodynamic mathematical model, to improve the real-time simulator, and to revise flight control system laws. Effects of the aircraft high gain flight control system on maneuver quality and the estimated derivatives are also discussed. The derivatives are plotted as functions of angle of attack and compared with the predicted aerodynamic database. Agreement between predicted and flight values is quite good for some derivatives such as the lateral force due to sideslip, the lateral force due to rudder deflection, and the rolling moment due to roll rate. The results also show significant differences in several important derivatives such as the rolling moment due to sideslip, the yawing moment due to sideslip, the yawing moment due to aileron deflection, and the yawing moment due to rudder deflection.

  18. Comparison of PDF and Moment Closure Methods in the Modeling of Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Norris, Andrew T.; Hsu, Andrew T.

    1994-01-01

    In modeling turbulent reactive flows, Probability Density Function (PDF) methods have an advantage over the more traditional moment closure schemes in that the PDF formulation treats the chemical reaction source terms exactly, while moment closure methods are required to model the mean reaction rate. The common model used is the laminar chemistry approximation, where the effects of turbulence on the reaction are assumed negligible. For flows with low turbulence levels and fast chemistry, the difference between the two methods can be expected to be small. However for flows with finite rate chemistry and high turbulence levels, significant errors can be expected in the moment closure method. In this paper, the ability of the PDF method and the moment closure scheme to accurately model a turbulent reacting flow is tested. To accomplish this, both schemes were used to model a CO/H2/N2- air piloted diffusion flame near extinction. Identical thermochemistry, turbulence models, initial conditions and boundary conditions are employed to ensure a consistent comparison can be made. The results of the two methods are compared to experimental data as well as to each other. The comparison reveals that the PDF method provides good agreement with the experimental data, while the moment closure scheme incorrectly shows a broad, laminar-like flame structure.

  19. Gauge-free cluster variational method by maximal messages and moment matching.

    PubMed

    Domínguez, Eduardo; Lage-Castellanos, Alejandro; Mulet, Roberto; Ricci-Tersenghi, Federico

    2017-04-01

    We present an implementation of the cluster variational method (CVM) as a message passing algorithm. The kind of message passing algorithm used for CVM, usually named generalized belief propagation (GBP), is a generalization of the belief propagation algorithm in the same way that CVM is a generalization of the Bethe approximation for estimating the partition function. However, the connection between fixed points of GBP and the extremal points of the CVM free energy is usually not a one-to-one correspondence because of the existence of a gauge transformation involving the GBP messages. Our contribution is twofold. First, we propose a way of defining messages (fields) in a generic CVM approximation, such that messages arrive on a given region from all its ancestors, and not only from its direct parents, as in the standard parent-to-child GBP. We call this approach maximal messages. Second, we focus on the case of binary variables, reinterpreting the messages as fields enforcing the consistency between the moments of the local (marginal) probability distributions. We provide a precise rule to enforce all consistencies, avoiding any redundancy, that would otherwise lead to a gauge transformation on the messages. This moment matching method is gauge free, i.e., it guarantees that the resulting GBP is not gauge invariant. We apply our maximal messages and moment matching GBP to obtain an analytical expression for the critical temperature of the Ising model in general dimensions at the level of plaquette CVM. The values obtained outperform Bethe estimates, and are comparable with loop corrected belief propagation equations. The method allows for a straightforward generalization to disordered systems.

  20. Gauge-free cluster variational method by maximal messages and moment matching

    NASA Astrophysics Data System (ADS)

    Domínguez, Eduardo; Lage-Castellanos, Alejandro; Mulet, Roberto; Ricci-Tersenghi, Federico

    2017-04-01

    We present an implementation of the cluster variational method (CVM) as a message passing algorithm. The kind of message passing algorithm used for CVM, usually named generalized belief propagation (GBP), is a generalization of the belief propagation algorithm in the same way that CVM is a generalization of the Bethe approximation for estimating the partition function. However, the connection between fixed points of GBP and the extremal points of the CVM free energy is usually not a one-to-one correspondence because of the existence of a gauge transformation involving the GBP messages. Our contribution is twofold. First, we propose a way of defining messages (fields) in a generic CVM approximation, such that messages arrive on a given region from all its ancestors, and not only from its direct parents, as in the standard parent-to-child GBP. We call this approach maximal messages. Second, we focus on the case of binary variables, reinterpreting the messages as fields enforcing the consistency between the moments of the local (marginal) probability distributions. We provide a precise rule to enforce all consistencies, avoiding any redundancy, that would otherwise lead to a gauge transformation on the messages. This moment matching method is gauge free, i.e., it guarantees that the resulting GBP is not gauge invariant. We apply our maximal messages and moment matching GBP to obtain an analytical expression for the critical temperature of the Ising model in general dimensions at the level of plaquette CVM. The values obtained outperform Bethe estimates, and are comparable with loop corrected belief propagation equations. The method allows for a straightforward generalization to disordered systems.

  1. Three Dimensional Cross-Sectional Properties From Bone Densitometry

    NASA Technical Reports Server (NTRS)

    Cleek, Tammy M.; Whalen, Robert T.; Dalton, Bonnie P. (Technical Monitor)

    2001-01-01

    Bone densitometry has previously been used to obtain cross-sectional properties of bone in a single scan plane. Using three non-coplanar scans, we have extended the method to obtain the principal area moments of inertia and orientations of the principal axes at each cross-section along the length of the scan. Various aluminum phantoms were used to examine scanner characteristics to develop the highest accuracy possible for in vitro non-invasive analysis of mass distribution. Factors considered included X-ray photon energy, initial scan orientation, the included angle of the 3 scans, and Imin/Imax ratios. Principal moments of inertia were accurate to within 3.1% and principal angles were within 1 deg. of the expected value for phantoms scanned with included angles of 60 deg. and 90 deg. at the higher X-ray photon energy. Low standard deviations in error also indicate high precision of calculated measurements with these included angles. Accuracy and precision decreased slightly when the included angle was reduced to 30 deg. The method was then successfully applied to a pair of excised cadaveric tibiae. The accuracy and insensitivity of the algorithms to cross-sectional shape and changing isotropy (Imin/Imax) values when various included angles are used make this technique viable for future in vivo studies.
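    The reconstruction step can be illustrated for the special case of scans at 0, 45, and 90 degrees (a 90-degree included angle). The formulas below are the standard moment-of-inertia rotation transform, not the authors' exact procedure:

```python
import math

def principal_moments(i0, i45, i90):
    """Recover Ixx, Iyy, Ixy from area moments measured about axes at
    0, 45, and 90 degrees, then return the principal moments and angle.

    Hypothetical 90-degree included-angle arrangement for illustration;
    the study also used 60- and 30-degree included angles.
    """
    ixx, iyy = i0, i90
    ixy = (ixx + iyy) / 2 - i45          # from I(45) = (Ixx + Iyy)/2 - Ixy
    avg = (ixx + iyy) / 2
    r = math.hypot((ixx - iyy) / 2, ixy)
    imax, imin = avg + r, avg - r        # principal area moments of inertia
    theta = 0.5 * math.atan2(-2 * ixy, ixx - iyy)   # principal axis angle (rad)
    return imax, imin, theta
```

    With three such measurements per cross-section, the principal moments and axis orientation follow from a small closed-form solve at each position along the scan.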

  2. Upper limb joint forces and moments during underwater cyclical movements.

    PubMed

    Lauer, Jessy; Rouard, Annie Hélène; Vilas-Boas, João Paulo

    2016-10-03

    Sound inverse dynamics modeling is lacking in aquatic locomotion research because of the difficulty in measuring hydrodynamic forces in dynamic conditions. Here we report the successful implementation and validation of an innovative methodology crossing new computational fluid dynamics and inverse dynamics techniques to quantify upper limb joint forces and moments while moving in water. Upper limb kinematics of seven male swimmers sculling while ballasted with 4 kg was recorded through underwater motion capture. Together with body scans, segment inertial properties, and hydrodynamic resistances computed from a unique dynamic mesh algorithm capable to handle large body deformations, these data were fed into an inverse dynamics model to solve for joint kinetics. Simulation validity was assessed by comparing the impulse produced by the arms, calculated by integrating vertical forces over a stroke period, to the net theoretical impulse of buoyancy and ballast forces. A resulting gap of 1.2±3.5% provided confidence in the results. Upper limb joint load was within 5% of swimmer's body weight, which tends to support the use of low-load aquatic exercises to reduce joint stress. We expect this significant methodological improvement to pave the way towards deeper insights into the mechanics of aquatic movement and the establishment of practice guidelines in rehabilitation, fitness or swimming performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
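    The validation idea, comparing the integrated vertical force over one stroke to the theoretical impulse of the net static (buoyancy minus ballast) force, can be sketched as follows; variable names are assumptions:

```python
def impulse_gap(times, fz, net_static_force):
    """Percent gap between the integrated vertical force over one stroke
    (trapezoid rule) and the theoretical impulse of the net static force.
    Illustrative sketch of the validation check, not the authors' code."""
    period = times[-1] - times[0]
    impulse = sum((fz[i] + fz[i + 1]) / 2 * (times[i + 1] - times[i])
                  for i in range(len(times) - 1))
    expected = net_static_force * period
    return 100 * (impulse - expected) / expected
```

    A gap near zero, as the reported 1.2±3.5%, indicates that the simulated hydrodynamic forces balance the known static loads over a stroke cycle.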

  3. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
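    The moments-to-quadrature step at the heart of this approach can be sketched with a few matrix operations on the Hankel matrix of moments, in the spirit of the Golub-Welsch algorithm. This minimal version assumes exact raw moments and a strictly positive-definite moment matrix, as the abstract requires:

```python
import numpy as np

def quadrature_from_moments(moments, n):
    """n-point Gaussian quadrature nodes/weights from raw moments m_0..m_{2n}.

    Sketch of the Hankel/Cholesky route: recurrence coefficients come from
    the Cholesky factor of the moment matrix, and nodes/weights from the
    eigen-decomposition of the resulting Jacobi matrix (Golub-Welsch).
    """
    H = np.array([[moments[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T          # fails if the moment matrix is not SPD
    alpha = np.zeros(n)
    beta = np.zeros(n)
    for j in range(n):
        alpha[j] = R[j, j + 1] / R[j, j]
        if j > 0:
            alpha[j] -= R[j - 1, j] / R[j - 1, j - 1]
            beta[j] = (R[j, j] / R[j - 1, j - 1]) ** 2
    J = (np.diag(alpha)
         + np.diag(np.sqrt(beta[1:]), 1)
         + np.diag(np.sqrt(beta[1:]), -1))
    nodes, vecs = np.linalg.eigh(J)
    weights = moments[0] * vecs[0, :] ** 2
    return nodes, weights
```

    Feeding in the moments 1, 1/2, 1/3, ... of a uniform density on [0, 1], for example, reproduces the Gauss-Legendre rule on that interval without ever specifying the density itself, which is the data-driven property aPC exploits.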

  4. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    NASA Astrophysics Data System (ADS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.

  5. Magnetic moment of solar plasma and the Kelvin force: -The driving force of plasma up-flow -

    NASA Astrophysics Data System (ADS)

    Shibasaki, Kiyoto

    2017-04-01

    Thermal plasma in the solar atmosphere is magnetized (diamagnetic). The magnetic moment does not disappear through collisions, because complete gyration is not a necessary condition for having a magnetic moment. A magnetized fluid is subject to the Kelvin force in a non-uniform magnetic field. Generally, magnetic field strength decreases upwards in the solar atmosphere, hence the Kelvin force is directed upwards along the field. This force is not included in the fluid treatment of MHD. By adding the Kelvin force to the MHD equation of motion, we can expect temperature-dependent plasma flows along the field, which are reported by many observations. The temperature dependence of the flow speed is explained by the temperature dependence of the magnetic moment. From the observed parameters, we can infer physical parameters in the solar atmosphere such as the scale length of the magnetic field strength and the friction force acting on the flowing plasma. In the case of closed magnetic field lines, a loop-top concentration of hot plasma is expected, which is frequently observed.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lau, Sarah J.; Moore, David G.; Stair, Sarah L.

    Ultrasonic analysis is being explored as a way to capture events during melting of highly dispersive wax. Typical events include temperature changes in the material, phase transition of the material, surface flows and reformations, and void filling as the material melts. Melt tests are performed with wax to evaluate the usefulness of different signal processing algorithms in capturing event data. Several algorithm paths are being pursued. The first looks at changes in the velocity of the signal through the material. This is only appropriate when the changes from one ultrasonic signal to the next can be represented by a linear relationship, which is not always the case. The second tracks changes in the frequency content of the signal. The third algorithm tracks changes in the temporal moments of a signal over a full test. This method does not require that the changes in the signal be represented by a linear relationship, but attaching changes in the temporal moments to physical events can be difficult. This study describes the algorithm paths applied to experimental data from ultrasonic signals as wax melts and explores different ways to display the results.
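    The third algorithm path can be illustrated by computing low-order temporal moments of a signal's energy envelope; a shift in arrival centroid or a broadening in RMS duration between tests then flags an event. This is a generic sketch, not the authors' code:

```python
def temporal_moments(signal, dt=1.0):
    """Zeroth moment (total energy), centroid time, and temporal variance
    of a sampled waveform's energy envelope (a minimal sketch)."""
    energy = [s * s for s in signal]
    e0 = sum(energy) * dt                                    # total energy
    t = [i * dt for i in range(len(signal))]
    t1 = sum(ti * e for ti, e in zip(t, energy)) * dt / e0   # energy centroid
    t2 = sum((ti - t1) ** 2 * e for ti, e in zip(t, energy)) * dt / e0
    return e0, t1, t2
```

    Because these moments summarize the whole waveform, they change smoothly even when successive signals are not linearly related, which is the advantage noted above.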

  7. Algorithm for Surface of Translation Attached Radiators (A-STAR). Volume 2. Users manual

    NASA Astrophysics Data System (ADS)

    Medgyesi-Mitschang, L. N.; Putnam, J. M.

    1982-05-01

    A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation from off-surface and aperture antennas on finite-length open or closed bodies of arbitrary cross section. The near fields and antenna coupling on such bodies are computed. The theoretical development underlying the algorithm is described in Volume 1 of this report.

  8. Accelerating image recognition on mobile devices using GPGPU

    NASA Astrophysics Data System (ADS)

    Bordallo López, Miguel; Nykänen, Henri; Hannuksela, Jari; Silvén, Olli; Vehviläinen, Markku

    2011-01-01

    The future multi-modal user interfaces of battery-powered mobile devices are expected to require computationally costly image analysis techniques. The use of Graphic Processing Units for computing is very well suited for parallel processing, and the addition of programmable stages and high precision arithmetic provides opportunities to implement energy-efficient complete algorithms. At the moment the first mobile graphics accelerators with programmable pipelines are available, enabling the GPGPU implementation of several image processing algorithms. In this context, we consider a face tracking approach that uses efficient gray-scale invariant texture features and boosting. The solution is based on the Local Binary Pattern (LBP) features and makes use of the GPU on the pre-processing and feature extraction phase. We have implemented a series of image processing techniques in the shader language of OpenGL ES 2.0, compiled them for a mobile graphics processing unit and performed tests on a mobile application processor platform (OMAP3530). In our contribution, we describe the challenges of designing on a mobile platform, present the performance achieved and provide measurement results for the actual power consumption in comparison to using the CPU (ARM) on the same platform.
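    A CPU reference for the LBP feature that the paper maps onto the GPU shader might look like this; it is a minimal 8-neighbour sketch (the full pipeline adds boosting and tracking), and the bit ordering is an arbitrary assumption:

```python
def lbp_8(image, r, c):
    """8-neighbour local binary pattern code for pixel (r, c).

    Each neighbour at least as bright as the centre sets one bit, giving a
    gray-scale-invariant 8-bit texture code. Interior pixels only; bit order
    is clockwise from the top-left neighbour (an arbitrary convention).
    """
    center = image[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code
```

    Because each pixel's code depends only on its own neighbourhood, the computation is embarrassingly parallel, which is what makes it a natural fit for a fragment shader.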

  9. Pattern Recognition Control Design

    NASA Technical Reports Server (NTRS)

    Gambone, Elisabeth A.

    2018-01-01

    Spacecraft control algorithms must know the expected vehicle response to any command to the available control effectors, such as reaction thrusters or torque devices. Spacecraft control system design approaches have traditionally relied on the estimated vehicle mass properties to determine the desired force and moment, as well as knowledge of the effector performance to efficiently control the spacecraft. A pattern recognition approach was used to investigate the relationship between the control effector commands and spacecraft responses. Instead of supplying the approximated vehicle properties and the thruster performance characteristics, a database of information relating the thruster ring commands and the desired vehicle response was used for closed-loop control. A Monte Carlo simulation data set of the spacecraft dynamic response to effector commands was analyzed to establish the influence a command has on the behavior of the spacecraft. A tool developed at NASA Johnson Space Center to analyze flight dynamics Monte Carlo data sets through pattern recognition methods was used to perform this analysis. Once a comprehensive data set relating spacecraft responses with commands was established, it was used in place of traditional control methods and gain sets. This pattern recognition approach was compared with traditional control algorithms to determine the potential benefits and uses.

  10. Limiting neutrino magnetic moments with Borexino Phase-II solar neutrino data

    NASA Astrophysics Data System (ADS)

    Agostini, M.; Altenmüller, K.; Appel, S.; Atroshchenko, V.; Bagdasarian, Z.; Basilico, D.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Caprioli, S.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; Collica, L.; D'Angelo, D.; Davini, S.; Derbin, A.; Ding, X. F.; Di Ludovico, A.; Di Noto, L.; Drachnev, I.; Fomenko, K.; Formozov, A.; Franco, D.; Froborg, F.; Gabriele, F.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goretti, A.; Gromov, M.; Guffanti, D.; Hagner, C.; Houdy, T.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jany, A.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Lukyanchenko, L.; Machulin, I.; Manuzio, G.; Marcocci, S.; Martyn, J.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Muratova, V.; Neumair, B.; Oberauer, L.; Opitz, B.; Orekhov, V.; Ortica, F.; Pallavicini, M.; Papp, L.; Penek, Ö.; Pilipenko, N.; Pocar, A.; Porcelli, A.; Ranucci, G.; Razeto, A.; Re, A.; Redchuk, M.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Semenov, D.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stokes, L. F. F.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Vishneva, A.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.; Borexino Collaboration

    2017-11-01

    A search for the solar neutrino effective magnetic moment has been performed using data from 1291.5 days of exposure during the second phase of the Borexino experiment. No significant deviations from the expected shape of the electron recoil spectrum from solar neutrinos have been found, and a new upper limit on the effective neutrino magnetic moment of μν^eff < 2.8 × 10^-11 μB at 90% C.L. has been set using constraints on the sum of the solar neutrino fluxes implied by the radiochemical gallium experiments. Using the limit for the effective neutrino moment, new limits for the magnetic moments of the neutrino flavor states, and for the elements of the neutrino magnetic moments matrix for Dirac and Majorana neutrinos, are derived.

  11. Assessment of macroseismic intensity in the Nile basin, Egypt

    NASA Astrophysics Data System (ADS)

    Fergany, Elsayed

    2018-01-01

    This work assesses deterministic seismic hazard and risk in terms of the maximum expected intensity map of the Egyptian Nile basin sector. A seismic source zone model of Egypt was delineated based on an updated, compatible earthquake catalog compiled in 2015, focal mechanisms, and the common tectonic elements. Four effective seismic source zones were identified along the Nile basin. The observed macroseismic intensity data along the basin were used to develop an intensity prediction equation defined in terms of moment magnitude. A maximum expected intensity map was then derived from the developed intensity prediction equation, the identified effective seismic source zones, and the maximum expected magnitude for each zone along the basin. The earthquake hazard and risk were discussed and analyzed in view of the maximum expected moment magnitude and the maximum expected intensity values for each effective source zone. The moderate expected magnitudes place the Cairo and Aswan regions at high risk. The results of this study could serve as a recommendation for the planners in charge of mitigating seismic risk in these strategic zones of Egypt.

  12. Magnetic Moment of Proton Drip-Line Nucleus (9)C

    NASA Technical Reports Server (NTRS)

    Matsuta, K.; Fukuda, M.; Tanigaki, M.; Minamisono, T.; Nojiri, Y.; Mihara, M.; Onishi, T.; Yamaguchi, T.; Harada, A.; Sasaki, M.

    1994-01-01

    The magnetic moment of the proton drip-line nucleus C-9 (I^π = 3/2, T_1/2 = 126 ms) has been measured for the first time, using the beta-NMR detection technique with polarized radioactive beams. The measured value of the magnetic moment is |μ(C-9)| = 1.3914 +/- 0.0005 μN. The deduced spin expectation value of 1.44 is unusually large compared with those of all other even-odd nuclei.

  13. A generalized Grubbs-Beck test statistic for detecting multiple potentially influential low outliers in flood series

    USGS Publications Warehouse

    Cohn, T.A.; England, J.F.; Berenbrock, C.E.; Mason, R.R.; Stedinger, J.R.; Lamontagne, J.R.

    2013-01-01

    The Grubbs-Beck test is recommended by the federal guidelines for detection of low outliers in flood flow frequency computation in the United States. This paper presents a generalization of the Grubbs-Beck test for normal data (similar to the Rosner (1983) test; see also Spencer and McCuen (1996)) that can provide a consistent standard for identifying multiple potentially influential low flows. In cases where low outliers have been identified, they can be represented as “less-than” values, and a frequency distribution can be developed using censored-data statistical techniques, such as the Expected Moments Algorithm. This approach can improve the fit of the right-hand tail of a frequency distribution and provide protection from lack-of-fit due to unimportant but potentially influential low flows (PILFs) in a flood series, thus making the flood frequency analysis procedure more robust.
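
    The screening step can be sketched in a few lines. This is a minimal, single-threshold illustration rather than the paper's generalized multiple-outlier test; it uses the Bulletin 17B approximation to the 10% Grubbs-Beck critical value K_N, and the function names and sample data are hypothetical.

```python
import math

def grubbs_beck_threshold(log_flows):
    # 10%-significance low-outlier threshold on log10 flows, using the
    # Bulletin 17B approximation to the Grubbs-Beck critical value K_N.
    n = len(log_flows)
    mean = sum(log_flows) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in log_flows) / (n - 1))
    k_n = -0.9043 + 3.345 * math.sqrt(math.log10(n)) - 0.4046 * math.log10(n)
    return mean - k_n * s

def flag_low_outliers(flows):
    # Split peak flows into retained values and censored ("less-than")
    # values that a censored-data fit would treat specially.
    logs = [math.log10(q) for q in flows]
    thresh = grubbs_beck_threshold(logs)
    retained = [q for q, lq in zip(flows, logs) if lq >= thresh]
    censored = [q for q, lq in zip(flows, logs) if lq < thresh]
    return retained, censored
```

    Flows flagged as censored would then enter a censored-data frequency fit, such as the Expected Moments Algorithm, as "less-than" observations rather than being discarded.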

  14. A generalized Grubbs-Beck test statistic for detecting multiple potentially influential low outliers in flood series

    NASA Astrophysics Data System (ADS)

    Cohn, T. A.; England, J. F.; Berenbrock, C. E.; Mason, R. R.; Stedinger, J. R.; Lamontagne, J. R.

    2013-08-01

    The Grubbs-Beck test is recommended by the federal guidelines for detection of low outliers in flood flow frequency computation in the United States. This paper presents a generalization of the Grubbs-Beck test for normal data (similar to the Rosner (1983) test; see also Spencer and McCuen (1996)) that can provide a consistent standard for identifying multiple potentially influential low flows. In cases where low outliers have been identified, they can be represented as "less-than" values, and a frequency distribution can be developed using censored-data statistical techniques, such as the Expected Moments Algorithm. This approach can improve the fit of the right-hand tail of a frequency distribution and provide protection from lack-of-fit due to unimportant but potentially influential low flows (PILFs) in a flood series, thus making the flood frequency analysis procedure more robust.

  15. The Fermilab Muon g-2 experiment: laser calibration system

    DOE PAGES

    Karuza, M.; Anastasi, A.; Basti, A.; ...

    2017-08-17

    The anomalous muon dipole magnetic moment can be measured (and calculated) with great precision, thus providing insight into the Standard Model and new physics. An experiment currently under construction at Fermilab (U.S.A.) is expected to measure the anomalous muon dipole magnetic moment with unprecedented precision. One of the improvements with respect to the previous experiments is expected to come from the laser calibration system, which has been designed and constructed by the Italian part of the collaboration (INFN). This paper emphasizes the calibration system, which is in the final stages of construction, as well as the experiment, which is expected to start taking data this year.

  16. Convergence of moment expansions for expectation values with embedded random matrix ensembles and quantum chaos

    NASA Astrophysics Data System (ADS)

    Kota, V. K. B.

    2003-07-01

    Smoothed forms for expectation values ⟨K⟩_E of positive definite operators K follow from the K-density moments either directly or in many other ways, each giving a series expansion (involving polynomials in E). In large spectroscopic spaces one has to partition the many-particle spaces into subspaces. Partitioning leads to new expansions for expectation values. It is shown that all the expansions converge to compact forms depending on the nature of the operator K and the operation of embedded random matrix ensembles and quantum chaos in many-particle spaces. Explicit results are given for occupancies ⟨n_i⟩_E, spin-cutoff factors ⟨J_z^2⟩_E and strength sums ⟨O†O⟩_E, where O is a one-body transition operator.

  17. Moment measurements in dynamic and quasi-static spine segment testing using eccentric compression are susceptible to artifacts based on loading configuration.

    PubMed

    Van Toen, Carolyn; Carter, Jarrod W; Oxland, Thomas R; Cripton, Peter A

    2014-12-01

    The tolerance of the spine to bending moments, used for evaluation of injury prevention devices, is often determined through eccentric axial compression experiments using segments of the cadaver spine. Preliminary experiments in our laboratory demonstrated that eccentric axial compression resulted in "unexpected" (artifact) moments. The aim of this study was to evaluate the static and dynamic effects of test configuration on bending moments during eccentric axial compression typical in cadaver spine segment testing. Specific objectives were to create dynamic equilibrium equations for the loads measured inferior to the specimen, experimentally verify these equations, and compare moment responses from various test configurations using synthetic (rubber) and human cadaver specimens. The equilibrium equations were verified by performing quasi-static (5 mm/s) and dynamic experiments (0.4 m/s) on a rubber specimen and comparing calculated shear forces and bending moments to those measured using a six-axis load cell. Moment responses were compared for hinge joint, linear slider and hinge joint, and roller joint configurations tested at quasi-static and dynamic rates. Calculated shear force and bending moment curves had similar shapes to those measured. Calculated values in the first local minima differed from those measured by 3% and 15%, respectively, in the dynamic test, and these occurred within 1.5 ms of those measured. In the rubber specimen experiments, for the hinge joint (translation constrained), quasi-static and dynamic posterior eccentric compression resulted in flexion (unexpected) moments. For the slider and hinge joints and the roller joints (translation unconstrained), extension ("expected") moments were measured quasi-statically and initial flexion (unexpected) moments were measured dynamically. 
    In the cadaver experiments with roller joints, anterior and posterior eccentricities resulted in extension moments, which were unexpected and expected, respectively, for those configurations. The unexpected moments were due to the inertia of the superior mounting structures. This study has shown that eccentric axial compression produces unexpected moments due to translation constraints at all loading rates and due to the inertia of the superior mounting structures in dynamic experiments. It may be incorrect to assume that bending moments are equal to the product of compression force and eccentricity, particularly where the test configuration involves translational constraints and where the experiments are dynamic. In order to reduce inertial moment artifacts, the mass and moment of inertia of any loading jig structures that rotate with the specimen should be minimized. Also, the distance between these structures and the load cell should be reduced.
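
    The take-away balance can be written as a toy planar equation: the moment at the load cell equals the product of compression force and eccentricity only when the inertial terms vanish. The symbols below (jig mass, centre-of-gravity offset, angular acceleration) are illustrative assumptions, not the paper's notation.

```python
def load_cell_moment(axial_force, eccentricity,
                     jig_mass, jig_accel, jig_cg_offset,
                     jig_inertia, jig_ang_accel):
    # "Expected" eccentric-compression moment plus inertial artifact terms
    # contributed by the superior mounting structures (illustrative symbols).
    expected = axial_force * eccentricity
    inertial = jig_mass * jig_accel * jig_cg_offset + jig_inertia * jig_ang_accel
    return expected + inertial
```

    At quasi-static rates the inertial terms are negligible and the measured moment reduces to force times eccentricity; in dynamic tests the artifact terms can dominate, which is the paper's central caution.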

  18. Estimation of full moment tensors, including uncertainties, for earthquakes, volcanic events, and nuclear explosions

    NASA Astrophysics Data System (ADS)

    Alvizuri, Celso R.

    We present a catalog of full seismic moment tensors for 63 events from Uturuncu volcano in Bolivia. The events were recorded during 2011-2012 in the PLUTONS seismic array of 24 broadband stations. Most events had magnitudes between 0.5 and 2.0 and did not generate discernible surface waves; the largest event was Mw 2.8. For each event we computed the misfit between observed and synthetic waveforms, and we used first-motion polarity measurements to reduce the number of possible solutions. Each moment tensor solution was obtained using a grid search over the six-dimensional space of moment tensors. For each event we show the misfit function in eigenvalue space, represented by a lune. We identify three subsets of the catalog: (1) 6 isotropic events, (2) 5 tensional crack events, and (3) a swarm of 14 events southeast of the volcanic center that appear to be double couples. The occurrence of positively isotropic events is consistent with other published results from volcanic and geothermal regions. Several of these previous results, as well as our results, cannot be interpreted within the context of either an oblique opening crack or a crack-plus-double-couple model. Proper characterization of uncertainties for full moment tensors is critical for distinguishing among physical models of source processes. A seismic moment tensor is a 3x3 symmetric matrix that provides a compact representation of a seismic source. We develop an algorithm to estimate moment tensors and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment tensors by generating synthetic waveforms for each moment tensor and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment tensor M0 for the event is then the moment tensor with minimum misfit. To describe the uncertainty associated with M0, we first convert the misfit function to a probability function. 
    The uncertainty, or rather the confidence, is then given by the 'confidence curve' P(V), where P(V) is the probability that the true moment tensor for the event lies within the neighborhood of M0 that has fractional volume V. The area under the confidence curve provides a single, abbreviated 'confidence parameter' for M0. We apply the method to data from events in different regions and tectonic settings: 63 small (Mw < 4) earthquakes in the southern Alaska subduction zone, and 12 earthquakes and 17 nuclear explosions at the Nevada Test Site. Characterization of moment tensor uncertainties puts us in a better position to discriminate among moment tensor source types and to assign physical processes to the events.
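
    The grid-search-and-misfit loop can be sketched with a toy linear forward model in place of real Green's functions. Everything here (the matrix G, the noise level, the candidate cloud) is a hypothetical stand-in for the six-dimensional search described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model: matrix G stands in for Green's
# functions, mapping six moment-tensor components m to waveform samples.
G = rng.normal(size=(50, 6))
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.1])
d_obs = G @ m_true + 0.05 * rng.normal(size=50)

def misfit(m):
    # L2 waveform misfit between observed and synthetic data.
    r = d_obs - G @ m
    return 0.5 * float(r @ r)

# Coarse "grid": a cloud of candidate tensors around m_true plus the
# zero tensor (a real search would sweep the six-dimensional space).
candidates = [m_true + dm for dm in 0.3 * rng.normal(size=(200, 6))]
candidates.append(np.zeros(6))

phis = np.array([misfit(m) for m in candidates])
probs = np.exp(-phis)          # misfit -> unnormalised probability
probs /= probs.sum()           # normalise over the grid
best = candidates[int(np.argmin(phis))]
```

    The minimum-misfit candidate plays the role of 'the' moment tensor M0, while the normalised probabilities over the grid are what the confidence-curve construction integrates.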

  19. Prediction of seismic collapse risk of steel moment frame mid-rise structures by meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Jough, Fooad Karimi Ghaleh; Şensoy, Serhan

    2016-12-01

    Different performance levels may be obtained for sideway collapse evaluation of steel moment frames depending on the evaluation procedure used to handle uncertainties. In this article, the process of representing modelling uncertainties, record to record (RTR) variations and cognitive uncertainties for moment resisting steel frames of various heights is discussed in detail. RTR uncertainty is used by incremental dynamic analysis (IDA), modelling uncertainties are considered through backbone curves and hysteresis loops of component, and cognitive uncertainty is presented in three levels of material quality. IDA is used to evaluate RTR uncertainty based on strong ground motion records selected by the k-means algorithm, which is favoured over Monte Carlo selection due to its time saving appeal. Analytical equations of the Response Surface Method are obtained through IDA results by the Cuckoo algorithm, which predicts the mean and standard deviation of the collapse fragility curve. The Takagi-Sugeno-Kang model is used to represent material quality based on the response surface coefficients. Finally, collapse fragility curves with the various sources of uncertainties mentioned are derived through a large number of material quality values and meta variables inferred by the Takagi-Sugeno-Kang fuzzy model based on response surface method coefficients. It is concluded that a better risk management strategy in countries where material quality control is weak, is to account for cognitive uncertainties in fragility curves and the mean annual frequency.

  20. Determination of the diffusivity, dispersion, skewness and kurtosis in heterogeneous porous flow. Part I: Analytical solutions with the extended method of moments.

    NASA Astrophysics Data System (ADS)

    Ginzburg, Irina; Vikhansky, Alexander

    2018-05-01

    The extended method of moments (EMM) is elaborated in recursive algorithmic form for the prediction of the effective diffusivity, the Taylor dispersion dyadic and the associated longitudinal high-order coefficients in mean-concentration profiles and residence-time distributions. The method applies in any streamwise-periodic stationary d-dimensional velocity field resolved in the piecewise continuous heterogeneous porosity field. It is demonstrated that the EMM reduces to the method of moments and to the volume-averaging formulation in a microscopic velocity field and in homogeneous soil, respectively. The EMM simultaneously constructs two systems of moments, the spatial and the temporal, without resorting to solving the high-order upscaled PDE. At the same time, the EMM is supported by the reconstruction of the distribution from its moments, allowing one to visualize the deviation from the classical ADE solution. The EMM can be handled by any linear advection-diffusion solver with explicit mass-source and diffusive-flux jump conditions on the solid boundary and permeable interface. The prediction of the first four moments is decisive in the optimization of the dispersion, asymmetry, peakedness and heavy tails of the solute distributions, through an adequate design of the composite materials, wetlands, chemical devices or oil recovery. The symbolic solutions for dispersion, skewness and kurtosis are constructed in basic configurations: diffusion process and Darcy flow through two porous blocks in "series", straight and radial Poiseuille flow, porous flow governed by the Stokes-Brinkman-Darcy channel equation and a fracture surrounded by penetrable diffusive matrix or embedded in porous flow. We examine the dependence of the moments on porosity contrast, aspect ratio, and the Péclet and Darcy numbers, as well as their response to the effective Brinkman viscosity applied in flow modeling.
Two numerical lattice Boltzmann algorithms, a direct solver of the microscopic ADE in the heterogeneous structure and a novel scheme for the EMM numerical formulation, are used to validate the constructed analytical predictions.

  1. Rapid Characterization of Magnetic Moment of Cells for Magnetic Separation

    PubMed Central

    Ooi, Chinchun; Earhart, Christopher M.; Wilson, Robert J.; Wang, Shan X.

    2014-01-01

    NCI-H1650 lung cancer cell lines labeled with magnetic nanoparticles via the Epithelial Cell Adhesion Molecule (EpCAM) antigen were previously shown to be captured at high efficiencies by a microfabricated magnetic sifter. If fine control and optimization of the magnetic separation process are to be achieved, it is vital to be able to characterize the labeled cells’ magnetic moment rapidly. We have thus adapted a rapid prototyping method to obtain the saturation magnetic moment of these cells. This method utilizes a cross-correlation algorithm to analyze the cells’ motion in a simple fluidic channel to obtain their magnetophoretic velocity, and is effective even when the magnetic moments of cells are small. This rapid characterization has proven useful in optimizing our microfabricated magnetic sifter procedures for magnetic cell capture. PMID:24771946
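
    The core of the velocity measurement, locating the lag that maximizes the cross-correlation of two intensity profiles, and the force balance that converts velocity to moment can be sketched as follows. The 1-D geometry and the simple Stokes-drag balance m·dB/dx = 6πηrv are illustrative assumptions, not the paper's exact analysis.

```python
import numpy as np

def displacement_by_xcorr(frame_a, frame_b):
    # Pixel shift between two 1-D intensity profiles: the lag that
    # maximizes their cross-correlation.
    n = len(frame_a)
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.correlate(b, a, mode="full")   # lags -(n-1) .. (n-1)
    return int(np.argmax(corr)) - (n - 1)

def magnetic_moment(velocity, radius, viscosity, field_gradient):
    # Saturation moment from an assumed force balance: Stokes drag
    # 6*pi*eta*r*v equals the magnetic force m * dB/dx.
    return 6.0 * np.pi * viscosity * radius * velocity / field_gradient
```

    Dividing the pixel shift by the frame interval gives the magnetophoretic velocity, which the force balance then converts to a moment.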

  2. Hamiltonian approach to Ehrenfest expectation values and Gaussian quantum states

    PubMed Central

    Bonet-Luz, Esther

    2016-01-01

    The dynamics of quantum expectation values is considered in a geometric setting. First, expectation values of the canonical observables are shown to be equivariant momentum maps for the action of the Heisenberg group on quantum states. Then, the Hamiltonian structure of Ehrenfest’s theorem is shown to be Lie–Poisson for a semidirect-product Lie group, named the Ehrenfest group. The underlying Poisson structure produces classical and quantum mechanics as special limit cases. In addition, quantum dynamics is expressed in the frame of the expectation values, in which the latter undergo canonical Hamiltonian motion. In the case of Gaussian states, expectation values dynamics couples to second-order moments, which also enjoy a momentum map structure. Finally, Gaussian states are shown to possess a Lie–Poisson structure associated with another semidirect-product group, which is called the Jacobi group. This structure produces the energy-conserving variant of a class of Gaussian moment models that have previously appeared in the chemical physics literature. PMID:27279764

  3. Implementation of parallel moment equations in NIMROD

    NASA Astrophysics Data System (ADS)

    Lee, Hankyu Q.; Held, Eric D.; Ji, Jeong-Young

    2017-10-01

    As collisionality is low (the Knudsen number is large) in many plasma applications, kinetic effects become important, particularly in parallel dynamics for magnetized plasmas. Fluid models can capture some kinetic effects when integral parallel closures are adopted. The adiabatic and linear approximations are used in solving general moment equations to obtain the integral closures. In this work, we present an effort to incorporate non-adiabatic (time-dependent) and nonlinear effects into parallel closures. Instead of analytically solving the approximate moment system, we implement exact parallel moment equations in the NIMROD fluid code. The moment code is expected to provide a natural convergence scheme by increasing the number of moments. Work in collaboration with the PSI Center and supported by the U.S. DOE under Grant Nos. DE-SC0014033, DE-SC0016256, and DE-FG02-04ER54746.

  4. Improved limit on the Ra 225 electric dipole moment

    DOE PAGES

    Bishof, Michael; Parker, Richard H.; Bailey, Kevin G.; ...

    2016-08-03

    In this study, octupole-deformed nuclei, such as that of 225Ra, are expected to amplify observable atomic electric dipole moments (EDMs) that arise from time-reversal and parity-violating interactions in the nuclear medium. In 2015 we reported the first “proof-of-principle” measurement of the 225Ra atomic EDM.

  5. Improved limit on the Ra 225 electric dipole moment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bishof, Michael; Parker, Richard H.; Bailey, Kevin G.

    In this study, octupole-deformed nuclei, such as that of 225Ra, are expected to amplify observable atomic electric dipole moments (EDMs) that arise from time-reversal and parity-violating interactions in the nuclear medium. In 2015 we reported the first “proof-of-principle” measurement of the 225Ra atomic EDM.

  6. High-frequency, transient magnetic susceptibility of ferroelectrics

    NASA Astrophysics Data System (ADS)

    Grimes, Craig A.

    1996-10-01

    A significant high-frequency magnetic susceptibility was measured in both weakly polarized and nonpolarized samples of barium titanate, lead zirconate titanate, and carnauba wax. Magnetic susceptibility measurements were made from 10 to 500 MHz using a thin-film permeameter at room temperature; initial susceptibilities ranged from 0.1 to 2.5. These values are larger than expected for paramagnets and smaller than expected for ferromagnets. It was found that the magnetic susceptibility decreases rapidly with exposure to the exciting field. The magnetic susceptibility is thought to originate from the time-varying electric field applied during the susceptibility measurements. An electric field acts to rotate an electric dipole, creating a magnetic quadrupole if the two moments are balanced, and a net magnetic dipole moment if they are imbalanced. It is thought that local electrostatic fields created at ferroelectric domain discontinuities associated with grain boundaries create an imbalance in the anion rotation that results in a net, measurable magnetic moment. The magnetic aftereffect may be due to local heating of the material by the moving charges associated with the magnetic moment.

  7. Cross-sectional structural parameters from densitometry

    NASA Technical Reports Server (NTRS)

    Cleek, Tammy M.; Whalen, Robert T.

    2002-01-01

    Bone densitometry has previously been used to obtain cross-sectional properties of bone from a single X-ray projection across the bone width. Using three unique projections, we have extended the method to obtain the principal area moments of inertia and orientations of the principal axes at each scan cross-section along the length of the scan. Various aluminum phantoms were used to examine scanner characteristics to develop the highest accuracy possible for in vitro non-invasive analysis of cross-sectional properties. Factors considered included X-ray photon energy, initial scan orientation, the angle spanned by the three scans (included angle), and I(min)/I(max) ratios. Principal moments of inertia were accurate to within +/-3.1% and principal angles were within +/-1 degrees of the expected value for phantoms scanned with included angles of 60 degrees and 90 degrees at the higher X-ray photon energy (140 kVp). Low standard deviations in the error (0.68-1.84%) also indicate high precision of calculated measurements with these included angles. Accuracy and precision decreased slightly when the included angle was reduced to 30 degrees. The method was then successfully applied to a pair of excised cadaveric tibiae. The accuracy and insensitivity of the algorithms to cross-sectional shape and changing isotropy (I(min)/I(max)) values when various included angles are used make this technique viable for future in vivo studies.
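
    The reconstruction from three projections rests on the textbook axis-rotation formula for area moments of inertia, I_theta = Ixx cos^2(t) + Iyy sin^2(t) - Ixy sin(2t). A sketch of the recovery step (scanner physics omitted; names are illustrative):

```python
import numpy as np

def principal_moments(thetas, i_measured):
    # Recover Ixx, Iyy, Ixy from area moments measured about three axes at
    # angles `thetas` (radians), using the textbook rotation formula
    # I_theta = Ixx cos^2(t) + Iyy sin^2(t) - Ixy sin(2t), then return the
    # principal moments (ascending) and the principal-axis angle.
    t = np.asarray(thetas, dtype=float)
    M = np.column_stack([np.cos(t) ** 2, np.sin(t) ** 2, -np.sin(2 * t)])
    ixx, iyy, ixy = np.linalg.solve(M, np.asarray(i_measured, dtype=float))
    tensor = np.array([[ixx, -ixy], [-ixy, iyy]])
    vals, vecs = np.linalg.eigh(tensor)
    angle = float(np.arctan2(vecs[1, 0], vecs[0, 0]))
    return vals, angle
```

    Three distinct scan angles make the 3x3 system solvable; the abstract's observation that accuracy degrades for a 30-degree included angle corresponds to this system becoming poorly conditioned.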

  8. Studies of the 4-JET Rate and of Moments of Event Shape Observables Using Jade Data

    NASA Astrophysics Data System (ADS)

    Kluth, S.

    2005-04-01

    Data from e+e- annihilation into hadrons collected by the JADE experiment at centre-of-mass energies between 14 and 44 GeV were used to study the 4-jet rate using the Durham algorithm as well as the first five moments of event shape observables. The data were compared with NLO QCD predictions, augmented by resummed NLLA calculations for the 4-jet rate, in order to extract values of the strong coupling constant αS. The preliminary results are αS(MZ0) = 0.1169 ± 0.0026 (4-jet rate) and αS(MZ0) = 0.1286 ± 0.0072 (moments) consistent with the world average value. For some of the higher moments systematic deficiencies of the QCD predictions are observed.

  9. Noise-enhanced clustering and competitive learning algorithms.

    PubMed

    Osoba, Osonde; Kosko, Bart

    2013-01-01

    Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning.
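
    The mechanism is easy to illustrate: because k-means is a special case of EM, one can inject annealed noise into the centroid (M-like) update. The schedule and gains below are arbitrary illustrative choices, not the paper's provably beneficial noise.

```python
import numpy as np

def noisy_kmeans(X, k, noise_scale=0.5, decay=0.8, iters=30, seed=0):
    # k-means with annealed noise injected into the centroid update,
    # mimicking the EM noise-benefit idea described in the abstract.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for t in range(iters):
        # E-like step: assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # M-like step: move centroids to cluster means, plus decaying noise.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
        centroids += noise_scale * (decay ** t) * rng.normal(size=centroids.shape)
    return centroids, labels
```

    The noise decays geometrically so the final iterations reduce to ordinary k-means and the centroids settle at the cluster means.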

  10. Hierarchical trie packet classification algorithm based on expectation-maximization clustering.

    PubMed

    Bi, Xia-An; Zhao, Junxia

    2017-01-01

    With the development of computer network bandwidth, packet classification algorithms that can handle large-scale rule sets are urgently needed. Among existing approaches, research on packet classification algorithms based on the hierarchical trie has become an important branch because of its wide practical use. Although the hierarchical trie saves large storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, the Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses a formalization method to deal with the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, this paper proposes a hierarchical trie based on the results of the expectation-maximization clustering. Finally, this paper conducts simulation experiments and real-environment experiments to compare the performance of our algorithm with other typical algorithms, and analyzes the results of the experiments. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm.
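
    The clustering stage can be sketched as EM for a spherical-Gaussian mixture over rules mapped to 2-D points. The deterministic initialization and all parameter choices are simplifications for illustration, not HTEMC's actual procedure.

```python
import numpy as np

def em_gmm(points, k, iters=50):
    # EM for a mixture of spherical Gaussians over 2-D "rule" points.
    n, d = points.shape
    idx = np.linspace(0, n - 1, k).astype(int)   # simple deterministic init
    mu = points[idx].astype(float)
    var = np.full(k, points.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each cluster for each point.
        sq = ((points[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        log_r = np.log(pi) - 0.5 * d * np.log(2 * np.pi * var) - sq / (2 * var)
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and spherical variances.
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ points) / nk[:, None]
        sq = ((points[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        var = (r * sq).sum(axis=0) / (d * nk) + 1e-9
    return r.argmax(axis=1), mu
```

    The hard cluster assignments returned by `argmax` would then seed one hierarchical-trie subtree per cluster.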

  11. Vehicle handling and stability control by the cooperative control of 4WS and DYC

    NASA Astrophysics Data System (ADS)

    Shen, Huan; Tan, Yun-Sheng

    2017-07-01

    This paper proposes an integrated control system in which four-wheel steering (4WS) and direct yaw moment control (DYC) cooperate to improve vehicle handling and stability. Both the four-wheel steering and DYC controllers are designed using sliding mode control. The integrated control system produces a suitable 4WS angle and corrective yaw moment so that the vehicle tracks the desired yaw rate and sideslip angle. To account for changes in the vehicle's longitudinal velocity, which affect driving comfort, both the driving torque and the braking torque are used to generate the corrective yaw moment. Simulation results show the effectiveness of the proposed control algorithm.

  12. Determination of ground and excited state dipole moments via electronic Stark spectroscopy: 5-methoxyindole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilke, Josefin; Wilke, Martin; Schmitt, Michael, E-mail: mschmitt@uni-duesseldorf.de

    2016-01-28

    The dipole moments of the ground and lowest electronically excited singlet state of 5-methoxyindole have been determined by means of optical Stark spectroscopy in a molecular beam. The resulting spectra arise from a superposition of different field configurations, one with the static electric field almost parallel to the polarization of the exciting laser radiation, the other nearly perpendicular. Each field configuration leads to different intensities in the rovibronic spectrum. With an automated evolutionary algorithm approach, the spectra can be fit and the ratio of both field configurations can be determined. A simultaneous fit of two spectra with both field configurations improved the precision of the dipole moment determination by a factor of two. We find a reduction of the absolute dipole moment from 1.59(3) D to 1.14(6) D upon electronic excitation to the lowest electronically excited singlet state. At the same time, the dipole moment orientation rotates by 54°, showing the importance of determining the dipole moment components. The dipole moment in the electronic ground state can approximately be obtained from a vector addition of the indole and methoxy group dipole moments. However, in the electronically excited state, vector addition completely fails to describe the observed dipole moment. Several reasons for this behavior are discussed.

  13. A novel vehicle dynamics stability control algorithm based on the hierarchical strategy with constrain of nonlinear tyre forces

    NASA Astrophysics Data System (ADS)

    Li, Liang; Jia, Gang; Chen, Jie; Zhu, Hongjun; Cao, Dongpu; Song, Jian

    2015-08-01

    Direct yaw moment control (DYC), which differentially brakes the wheels to produce a yaw moment for vehicle stability in a steering process, is an important part of the electronic stability control system. In this field, most control methods utilise the active brake pressure with a feedback controller to adjust the braked wheel. However, this method might lead to control delay or overshoot because there is no quantitative relationship linking the target values from the upper stability controller to the lower pressure controller. Meanwhile, the stability controller usually ignores the implementing ability of the tyre forces, which might be restrained by the combined-slip dynamics of the tyre. Therefore, this paper puts forward a novel DYC algorithm based on a hierarchical control strategy. For the upper controller, a correctional linear quadratic regulator, containing both feedback and feedforward control, is introduced to derive the target stability yaw moment in order to guarantee yaw rate and side-slip angle stability. For the medium and lower controllers, a quantitative relationship between the vehicle stability objective and the target tyre forces of the controlled wheels is proposed to achieve smooth control performance based on a combined-slip tyre model. Simulations with the hardware-in-the-loop platform validate that the proposed algorithm can improve the stability of the vehicle effectively.
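
    The hierarchy can be caricatured in two functions: an upper level that turns yaw-rate and sideslip errors into a corrective yaw moment (a plain feedback-plus-feedforward law standing in for the paper's correctional LQR), and a lower level that converts the moment into a brake force saturated at the friction limit, a crude proxy for the combined-slip tyre constraint. All gains and parameters are invented for illustration.

```python
import numpy as np

def yaw_moment_command(yaw_rate, yaw_rate_ref, sideslip,
                       k_fb=(800.0, 1500.0), k_ff=200.0):
    # Upper level: corrective yaw moment from feedback on yaw-rate and
    # sideslip errors plus a feedforward term (illustrative gains).
    return (k_fb[0] * (yaw_rate_ref - yaw_rate)
            - k_fb[1] * sideslip
            + k_ff * yaw_rate_ref)

def allocate_brake_force(moment_cmd, half_track, mu, vertical_load):
    # Lower level: brake force realising the moment, saturated at the
    # friction limit mu * Fz (a crude combined-slip constraint).
    f_req = moment_cmd / half_track
    f_max = mu * vertical_load
    return float(np.clip(f_req, -f_max, f_max))
```

    The saturation is the point of the paper's medium/lower layer: the commanded moment is only realisable to the extent the tyre's friction envelope allows.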

  14. Dynamic imaging in electrical impedance tomography of the human chest with online transition matrix identification.

    PubMed

    Moura, Fernando Silva; Aya, Julio Cesar Ceballos; Fleury, Agenor Toledo; Amato, Marcelo Britto Passos; Lima, Raul Gonzalez

    2010-02-01

    One of the objectives of electrical impedance tomography is to estimate the electrical resistivity distribution in a domain based only on electrical potential measurements at its boundary, generated by an electrical current distribution imposed on the boundary. One of the methods used in dynamic estimation is the Kalman filter. In biomedical applications, the random walk model is frequently used as the evolution model and, under these conditions, the extended Kalman filter (EKF) achieves poor tracking ability. An analytically developed evolution model is not feasible at this moment. This paper investigates identifying the evolution model in parallel with the EKF and updating the evolution model with a certain periodicity. The evolution model transition matrix is identified using the history of the estimated resistivity distribution obtained by a sensitivity-matrix-based algorithm and a Newton-Raphson algorithm. To numerically identify the linear evolution model, the Ibrahim time-domain method is used. The investigation is performed by numerical simulations of a domain with time-varying resistivity and by experimental data collected from the boundary of a human chest during normal breathing. The obtained dynamic resistivity values lie within the expected values for the tissues of a human chest. The EKF results suggest that the tracking ability is significantly improved with this approach.
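
    The core loop — a Kalman filter whose transition matrix is periodically re-identified from the estimate history — can be sketched in a linear toy form. The least-squares fit below is a simple stand-in for the Ibrahim time-domain method, and all dimensions are illustrative:

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter with evolution matrix A."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

def identify_transition(history):
    """Fit A so that x[t+1] ≈ A x[t] over the stored estimate history
    (a least-squares stand-in for the Ibrahim time-domain method)."""
    X0 = np.column_stack(history[:-1])
    X1 = np.column_stack(history[1:])
    return X1 @ np.linalg.pinv(X0)
```

    Replacing the random-walk model (A = I) with the identified A is what restores tracking ability when the resistivity actually evolves in time.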

  15. CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction

    PubMed Central

    Zhen, Xin; Gu, Xuejun; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.

    2012-01-01

    Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image while ensuring the intensity consistency between the two modalities. DISC is implemented on computer graphics processing units (GPUs) in the compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and data from six clinical head-and-neck cancer patients. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons. PMID:23032638
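
    The per-voxel intensity correction amounts to matching the first two moments of a local patch. A 2-D toy version (illustrative only; the actual DISC step runs on GPUs inside the demons iteration):

```python
import numpy as np

def correct_intensity(cbct, ct, radius=2, eps=1e-6):
    """Shift and scale each pixel so the mean and standard deviation (first and
    second moments) of its local patch match the corresponding CT patch."""
    out = np.empty_like(cbct, dtype=float)
    nx, ny = cbct.shape
    for i in range(nx):
        for j in range(ny):
            sl = (slice(max(i - radius, 0), min(i + radius + 1, nx)),
                  slice(max(j - radius, 0), min(j + radius + 1, ny)))
            m_c, s_c = cbct[sl].mean(), cbct[sl].std()
            m_t, s_t = ct[sl].mean(), ct[sl].std()
            # normalize by the CBCT patch moments, re-scale to the CT patch moments
            out[i, j] = (cbct[i, j] - m_c) / (s_c + eps) * s_t + m_t
    return out
```

    When the CBCT differs from the CT by a locally linear intensity transform (a crude model of shading artifacts), this patch-wise moment matching recovers the CT values almost exactly.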

  16. Pattern Recognition Control Design

    NASA Technical Reports Server (NTRS)

    Gambone, Elisabeth

    2016-01-01

    Spacecraft control algorithms must know the expected spacecraft response to any command to the available control effectors, such as reaction thrusters or torque devices. Spacecraft control system design approaches have traditionally relied on the estimated vehicle mass properties to determine the desired force and moment, as well as knowledge of the effector performance to efficiently control the spacecraft. A pattern recognition approach can be used to investigate the relationship between the control effector commands and the spacecraft responses. Instead of supplying the approximated vehicle properties and the effector performance characteristics, a database of information relating the effector commands and the desired vehicle response can be used for closed-loop control. A Monte Carlo simulation data set of the spacecraft dynamic response to effector commands can be analyzed to establish the influence a command has on the behavior of the spacecraft. A tool developed at NASA Johnson Space Center (Ref. 1) to analyze flight dynamics Monte Carlo data sets through pattern recognition methods can be used to perform this analysis. Once a comprehensive data set relating spacecraft responses with commands is established, it can be used in place of traditional control laws and gains set. This pattern recognition approach can be compared with traditional control algorithms to determine the potential benefits and uses.

  17. Cognitive programs: software for attention's executive

    PubMed Central

    Tsotsos, John K.; Kruijne, Wouter

    2014-01-01

    What are the computational tasks that an executive controller for visual attention must solve? This question is posed in the context of the Selective Tuning model of attention. The range of required computations go beyond top-down bias signals or region-of-interest determinations, and must deal with overt and covert fixations, process timing and synchronization, information routing, memory, matching control to task, spatial localization, priming, and coordination of bottom-up with top-down information. During task execution, results must be monitored to ensure the expected results. This description includes the kinds of elements that are common in the control of any kind of complex machine or system. We seek a mechanistic integration of the above, in other words, algorithms that accomplish control. Such algorithms operate on representations, transforming a representation of one kind into another, which then forms the input to yet another algorithm. Cognitive Programs (CPs) are hypothesized to capture exactly such representational transformations via stepwise sequences of operations. CPs, an updated and modernized offspring of Ullman's Visual Routines, impose an algorithmic structure to the set of attentional functions and play a role in the overall shaping of attentional modulation of the visual system so that it provides its best performance. This requires that we consider the visual system as a dynamic, yet general-purpose processor tuned to the task and input of the moment. This differs dramatically from the almost universal cognitive and computational views, which regard vision as a passively observing module to which simple questions about percepts can be posed, regardless of task. Differing from Visual Routines, CPs explicitly involve the critical elements of Visual Task Executive (vTE), Visual Attention Executive (vAE), and Visual Working Memory (vWM). 
Cognitive Programs provide the software that directs the actions of the Selective Tuning model of visual attention. PMID:25505430

  18. Flight Deck Surface Trajectory-based Operations (STBO): Results of Piloted Simulations and Implications for Concepts of Operation (ConOps)

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Hooey, Becky L.; Bakowski, Deborah L.

    2013-01-01

    The results of four piloted medium-fidelity simulations investigating flight deck surface trajectory-based operations (STBO) will be reviewed. In these flight deck STBO simulations, commercial transport pilots were given taxi clearances with time and/or speed components and required to taxi to the departing runway or an intermediate traffic intersection. Under a variety of concept of operations (ConOps) and flight deck information conditions, pilots' ability to taxi in compliance with the required time of arrival (RTA) at the designated airport location was measured. ConOps and flight deck information conditions explored included: availability of taxi clearance speed and elapsed time information; intermediate RTAs at intermediate time constraint points (e.g., intersection traffic flow points); STBO taxi clearances via ATC voice speed commands or datalink; and availability of flight deck display algorithms to reduce STBO RTA error. Flight Deck Implications. Pilot RTA conformance for STBO clearances, in the form of ATC taxi clearances with associated speed requirements, was found to be relatively poor unless the pilot followed a precise speed and acceleration/deceleration profile. However, following such a precise speed profile results in inordinate head-down tracking of current ground speed, leading to potentially unsafe operations. Mitigating these results, and providing good taxi RTA performance without the associated safety issues, is a flight deck avionics or electronic flight bag (EFB) solution. Such a solution enables pilots to meet the taxi route RTA without moment-by-moment tracking of ground speed. An avionics or EFB "error-nulling" algorithm allows the pilot to view the STBO information when the pilot determines it is necessary and when workload allows, thus enabling the pilot to spread his/her attention appropriately and strategically across aircraft separation, airport navigation, and the many other flight deck tasks concurrently required. 
    Surface Traffic Management (STM) System Implications. The data indicate a number of implications regarding specific parameters for ATC/STM algorithm development. Pilots have a tendency to arrive at RTA points early with slow required speeds, on time for moderate speeds, and late with faster required speeds. This implies that ATC/STM algorithms should operate with middle-range speeds, similar to those of non-STBO taxi performance. Route length has a related effect: long taxi routes increase the earliness with slow speeds and the lateness with faster speeds. This is likely due to the "open-loop" nature of the task, in which the speed error compounds over a longer time with longer routes. Results showed that this may be mitigated by imposing a small number of time constraint points, each with its own RTA, effectively turning a long route into a series of shorter routes and thus improving RTA performance. STBO ConOps Implications. Most important is the impact that these data have for NextGen STM system ConOps development. The results of these experiments imply that it is not reasonable to expect pilots to taxi under a "Full STBO" ConOps in which pilots are expected to be at a predictable (x,y) airport location for every time (t). An STBO ConOps with a small number of intermediate time constraint points and the departing runway, however, is feasible, but only with flight deck equipage enabling the use of a display similar to the "error-nulling algorithm/display" tested.

  19. Algorithms for the Reduction of Wind-Tunnel Data Derived from Strain Gauge Force Balances.

    DTIC Science & Technology

    1984-05-01

    ...summed. Where hinge moments are measured on a model, it is customary to express them by coefficients of the form C_H = H / (q S_H d_H), where H is the measured hinge moment and S_H and d_H are a characteristic area and length associated with the control surface. ...4.6 Transformation to Body Axes...

  20. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel II. Distribution functions and moments.

    PubMed

    Langenbucher, Frieder

    2003-01-01

    MS Excel is a useful tool to handle in vitro/in vivo correlation (IVIVC) distribution functions, with emphasis on the Weibull and the biexponential distribution, which are most useful for the presentation of cumulative profiles, e.g. release in vitro or urinary excretion in vivo, and differential profiles such as the plasma response in vivo. The discussion includes moments (AUC and mean) as summarizing statistics, and data-fitting algorithms for parameter estimation.
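
    As an example of the quantities involved, the Weibull cumulative profile and its first moment (the mean, analogous to the summarizing statistics mentioned above) can be computed directly. This is a plain-Python sketch, not the spreadsheet implementation:

```python
import math

def weibull_cum(t, td, beta):
    """Cumulative Weibull profile, e.g. fraction released by time t:
    F(t) = 1 - exp(-(t/td)^beta), with scale td and shape beta."""
    return 1.0 - math.exp(-((t / td) ** beta))

def weibull_mean(td, beta):
    """Mean (first moment) of the Weibull distribution: td * Gamma(1 + 1/beta)."""
    return td * math.gamma(1.0 + 1.0 / beta)
```

    For beta = 1 the profile reduces to a simple exponential and the mean equals the scale parameter td, a convenient sanity check when fitting release data.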

  1. Effective equations for the quantum pendulum from momentous quantum mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, Hector H.; Chacon-Acosta, Guillermo; Departamento de Matematicas Aplicadas y Sistemas, Universidad Autonoma Metropolitana-Cuajimalpa, Artificios 40, Mexico D. F. 01120

    In this work we study the quantum pendulum within the framework of momentous quantum mechanics. This description replaces the Schroedinger equation for the quantum evolution of the system with an infinite set of classical equations for the expectation values of the configuration variables and the quantum dispersions. We solve the effective equations numerically up to second order and describe the system's evolution.

  2. Geomagnetic dipole strength and reversal rate over the past two million years.

    PubMed

    Valet, Jean-Pierre; Meynadier, Laure; Guyodo, Yohan

    2005-06-09

    Independent records of relative magnetic palaeointensity from sediment cores in different areas of the world can be stacked together to extract the evolution of the geomagnetic dipole moment and thus provide information regarding the processes governing the geodynamo. So far, this procedure has been limited to the past 800,000 years (800 kyr; ref. 3), which does not include any geomagnetic reversals. Here we present a composite curve that shows the evolution of the dipole moment during the past two million years. This reconstruction is in good agreement with the absolute dipole moments derived from volcanic lavas, which were used for calibration. We show that, at least during this period, the time-averaged field was higher during periods without reversals but the amplitude of the short-term oscillations remained the same. As a consequence, few intervals of very low intensity, and thus fewer instabilities, are expected during periods with a strong average dipole moment, whereas more excursions and reversals are expected during periods of weak field intensity. We also observe that the axial dipole begins to decay 60-80 kyr before reversals, but rebuilds itself in the opposite direction in only a few thousand years.

  3. Exact Dissipative Moment Closures for Simulation of Magnetospheric Plasmas

    NASA Astrophysics Data System (ADS)

    Newman, D. L.; Sen, N.; Goldman, M. V.

    2004-11-01

    Dissipative fluid closures produce a kinetic-like plasma response in simulations based on the evolution of moments of the Vlasov equation. Such methods were previously shown to approximate the kinetic susceptibility of a Maxwellian plasma (G. W. Hammett and F. W. Perkins, Phys. Rev. Lett. 64, 3019 (1990)). We show here that dissipative closures can yield the exact linear response for kappa velocity distributions (i.e., f(v) ∝ (v^2+w^2)^-κ in 1-D, where w ∝ v_th), provided κ is an integer and κ+1 moments are retained in the closure. This finding is particularly relevant to the simulation of collisionless space plasmas, which frequently exhibit power-law tails characteristic of kappa distributions. Such dissipative algorithms can be made energy conserving by evolving the thermal parameter w. Dominant nonlinearities (e.g., ponderomotive effects) can also be incorporated into the algorithm. These methods have proven especially valuable in the context of reduced 2-D Vlasov simulations (N. Sen et al., "Reduced 2-D Vlasov Simulations…", this meeting), where they have been used to model perpendicular ion dynamics in the evolution of nonlinear structures (e.g., double layers) in the auroral ionosphere.

  4. Leonid predictions for the period 2001-2100

    NASA Astrophysics Data System (ADS)

    Maslov, Mikhail

    2007-02-01

    This article provides a set of summaries of what to expect from the Leonid meteor shower for each year of the period 2001-2100. Each summary contains the moments of maximum/maxima, their expected intensity, and comments on average meteor brightness during them. Special attention was paid to background (traditional) maxima, which are characterized by their expected times and intensities.

  5. Recurrent procedure for constructing nonisotropic matrix elements of the collision integral of the nonlinear Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Ender, I. A.; Bakaleinikov, L. A.; Flegontova, E. Yu.; Gerasimenko, A. B.

    2017-08-01

    We have proposed an algorithm for the sequential construction of nonisotropic matrix elements of the collision integral, which are required to solve the nonlinear Boltzmann equation using the moments method. The starting elements of the matrix are isotropic and assumed to be known. The algorithm can be used for an arbitrary law of interactions for any ratio of the masses of colliding particles.

  6. Hierarchical trie packet classification algorithm based on expectation-maximization clustering

    PubMed Central

    Bi, Xia-an; Zhao, Junxia

    2017-01-01

    With the growth of computer network bandwidth, packet classification algorithms that can deal with large-scale rule sets are urgently needed. Among the existing algorithms, research on packet classification algorithms based on the hierarchical trie has become an important branch of packet classification research because of its wide practical use. Although the hierarchical trie saves considerable storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses a formalization method to deal with the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, this paper proposes a hierarchical trie based on the results of the expectation-maximization clustering. Finally, this paper conducts both simulation experiments and real-environment experiments to compare the performance of our algorithm with other typical algorithms, and analyzes the results. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm. PMID:28704476
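
    The clustering step can be illustrated with a bare-bones expectation-maximization loop for a spherical Gaussian mixture over points in a two-dimensional space. This is an illustrative stand-in; HTEMC clusters rules by their aggregate characteristics, not raw coordinates:

```python
import numpy as np

def em_gmm(X, k, iters=50):
    """Bare-bones EM for a spherical Gaussian mixture; returns cluster means
    and soft responsibilities for the n points in X (shape (n, d))."""
    n, d = X.shape
    # crude deterministic initialization: spread the k means along the first axis
    order = np.argsort(X[:, 0])
    mu = X[order[np.linspace(0, n - 1, k).astype(int)]]
    var = np.full(k, X.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(cluster j | point i)
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # squared distances (n, k)
        logp = -0.5 * d2 / var - 0.5 * d * np.log(var) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)                # stabilize the exp
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from responsibilities
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        d2_new = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        var = (r * d2_new).sum(axis=0) / (d * nk) + 1e-9
    return mu, r
```

    Each resulting cluster would then seed one sub-trie, so that rules with similar aggregate characteristics share a trie and path compression can eliminate backtracking within it.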

  7. A solution algorithm for fluid–particle flows across all flow regimes

    DOE PAGES

    Kong, Bo; Fox, Rodney O.

    2017-05-12

    Many fluid–particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle–particle collisions are rare. Thus, in order to simulate such fluid–particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas–particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid–particle flows.

  8. A solution algorithm for fluid-particle flows across all flow regimes

    NASA Astrophysics Data System (ADS)

    Kong, Bo; Fox, Rodney O.

    2017-09-01

    Many fluid-particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle-particle collisions are rare. Thus, in order to simulate such fluid-particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas-particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid-particle flows.

  9. A solution algorithm for fluid–particle flows across all flow regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kong, Bo; Fox, Rodney O.

    Many fluid–particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle–particle collisions are rare. Thus, in order to simulate such fluid–particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas–particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid–particle flows.

  10. Filament capturing with the multimaterial moment-of-fluid method*

    DOE PAGES

    Jemison, Matthew; Sussman, Mark; Shashkov, Mikhail

    2015-01-15

    A novel method for capturing two-dimensional, thin, under-resolved material configurations, known as "filaments," is presented in the context of interface reconstruction. This technique uses a partitioning procedure to detect disconnected regions of material in the advective preimage of a cell (indicative of a filament) and makes use of the existing functionality of the Multimaterial Moment-of-Fluid interface reconstruction method to accurately capture the under-resolved feature, while exactly conserving volume. An algorithm for Adaptive Mesh Refinement in the presence of filaments is developed so that refinement is introduced only near the tips of filaments and where the Moment-of-Fluid reconstruction error is still large. Comparison to the standard Moment-of-Fluid method is made. As a result, it is demonstrated that using filament capturing at a given resolution yields gains in accuracy comparable to introducing an additional level of mesh refinement at significantly lower cost.

  11. Pattern Discovery and Change Detection of Online Music Query Streams

    NASA Astrophysics Data System (ADS)

    Li, Hua-Fu

    In this paper, an efficient stream mining algorithm, called FTP-stream (Frequent Temporal Pattern mining of streams), is proposed to find the frequent temporal patterns over melody sequence streams. In the framework of the proposed algorithm, an effective bit-sequence representation is used to reduce the time and memory needed to slide the windows. The FTP-stream algorithm can calculate the support threshold in only a single pass based on the concept of bit-sequence representation. It takes advantage of the bitwise "left" (shift) and "and" operations of the representation. Experiments show that the proposed algorithm scans the music query stream only once, and runs significantly faster and consumes less memory than existing algorithms, such as SWFI-stream and Moment.
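
    The bit-sequence idea can be sketched as follows: each item keeps one bit per window position, sliding the window is a left shift, and joint support is a bitwise AND followed by a popcount. This is a toy version; FTP-stream's actual structures and pattern semantics differ:

```python
W = 8                 # sliding-window size (number of most recent transactions)
MASK = (1 << W) - 1   # keeps only the W most recent bits
bits = {}             # item -> bit vector over the window

def slide(transaction):
    """Shift every item's window left by one bit (the "left" operation),
    then set the low bit for items in the newest transaction."""
    for item in bits:
        bits[item] = (bits[item] << 1) & MASK
    for item in transaction:
        bits[item] = bits.get(item, 0) | 1

def support(*items):
    """Number of window transactions containing all given items:
    AND the bit vectors (the "and" operation) and count the set bits."""
    v = MASK
    for it in items:
        v &= bits.get(it, 0)
    return bin(v).count("1")
```

    Because each slide is a shift and each support query a few word-level operations, the window is maintained in a single pass over the stream.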

  12. Lorentz-covariant coordinate-space representation of the leading hadronic contribution to the anomalous magnetic moment of the muon

    NASA Astrophysics Data System (ADS)

    Meyer, Harvey B.

    2017-09-01

    We present a Lorentz-covariant, Euclidean coordinate-space expression for the hadronic vacuum polarisation, the Adler function and the leading hadronic contribution to the anomalous magnetic moment of the muon. The representation offers a high degree of flexibility for an implementation in lattice QCD. We expect it to be particularly helpful for the quark-line disconnected contributions.

  13. Theoretical electric dipole moments of SiH, GeH and SnH

    NASA Technical Reports Server (NTRS)

    Pettersson, L. G. M.; Langhoff, S. R.

    1986-01-01

    Accurate theoretical dipole moments have been computed for the X2Pi ground states of Si(-)H(+) (0.118 D), Ge(+)H(-) (0.085 D), and Sn(+)H(-) (0.357 D). The trend down the periodic table is regular and follows that expected from the electronegativities of the group IV atoms. The dipole moment of 1.24 + or - 0.1 D for GeH recently derived by Brown, Evenson and Sears (1985) from the relative intensities of electric and magnetic dipole transitions in the 10-micron spectrum of the X2Pi state is seriously questioned.

  14. Theoretical Electric Dipole Moments of SiH, GeH and SnH

    NASA Technical Reports Server (NTRS)

    Pettersson, Lars G. M.; Langhoff, Stephen R.

    1986-01-01

    Accurate theoretical dipole moments (mu(sub c)) have been computed for the X(exp 2)Pi ground states of Si(-)H(+)(0.118 D), Ge(+)H(-)(0.085 D) and Sn(+)H(-)(0.357 D). The trend down the periodic table is regular and follows that expected from the electronegativities of the group IV atoms. The dipole moment of 1.24 +/- 0.1 D for GeH recently derived by Brown, Evenson and Sears from the relative intensities of electric and magnetic dipole transitions in the 10 microns spectrum of the X(exp 2)Pi state is seriously questioned.

  15. Formulation of the relativistic moment implicit particle-in-cell method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noguchi, Koichi; Tronci, Cesare; Zuccaro, Gianluca

    2007-04-15

    A new formulation is presented for the implicit moment method applied to the time-dependent relativistic Vlasov-Maxwell system. The new approach is based on a specific formulation of the implicit moment method that allows us to retain the same formalism that is valid in the classical case despite the formidable complication introduced by the nonlinear nature of the relativistic equations of motion. To demonstrate the validity of the new formulation, an implicit finite difference algorithm is developed to solve the Maxwell's equations and equations of motion. A number of benchmark problems are run: two stream instability, ion acoustic wave damping, Weibel instability, and Poynting flux acceleration. The numerical results are all in agreement with analytical solutions.

  16. Recent National Transonic Facility Test Process Improvements (Invited)

    NASA Technical Reports Server (NTRS)

    Kilgore, W. A.; Balakrishna, S.; Bobbitt, C. W., Jr.; Adcock, J. B.

    2001-01-01

    This paper describes the results of two recent process improvements at the National Transonic Facility: drag feed-forward Mach number control and simultaneous force/moment and pressure testing. These improvements have reduced the duration and cost of testing. The drag feed-forward Mach number control reduces the Mach number settling time by using measured model drag in the Mach number control algorithm. Simultaneous force/moment and pressure testing allows simultaneous collection of force/moment and pressure data without sacrificing data quality, thereby reducing the overall testing time. Both improvements can be implemented at any wind tunnel. Additionally, the NTF is working to develop and implement continuous pitch as a testing option, as an additional method to reduce costs and maintain data quality.

  17. Recent National Transonic Facility Test Process Improvements (Invited)

    NASA Technical Reports Server (NTRS)

    Kilgore, W. A.; Balakrishna, S.; Bobbitt, C. W., Jr.; Adcock, J. B.

    2001-01-01

    This paper describes the results of two recent process improvements at the National Transonic Facility: drag feedforward Mach number control and simultaneous force/moment and pressure testing. These improvements have reduced the duration and cost of testing. The drag feedforward Mach number control reduces the Mach number settling time by using measured model drag in the Mach number control algorithm. Simultaneous force/moment and pressure testing allows simultaneous collection of force/moment and pressure data without sacrificing data quality, thereby reducing the overall testing time. Both improvements can be implemented at any wind tunnel. Additionally, the NTF is working to develop and implement continuous pitch as a testing option, as an additional method to reduce costs and maintain data quality.

  18. Speed-up of the volumetric method of moments for the approximate RCS of large arbitrary-shaped dielectric targets

    NASA Astrophysics Data System (ADS)

    Moreno, Javier; Somolinos, Álvaro; Romero, Gustavo; González, Iván; Cátedra, Felipe

    2017-08-01

    A method for the rigorous computation of the electromagnetic scattering of large dielectric volumes is presented. One goal is to simplify the analysis of large dielectric targets with translational symmetries by taking advantage of their Toeplitz symmetry. The matrix-fill stage of the Method of Moments is then performed efficiently because the number of coupling terms to compute is reduced. The Multilevel Fast Multipole Method is applied to solve the problem. Structured meshes are obtained efficiently to approximate the dielectric volumes. The regular mesh grid is achieved by using parallelepipeds whose centres have been identified as internal to the target. The ray casting algorithm is used to classify the parallelepiped centres. It may become a bottleneck when too many points are evaluated in volumes defined by parametric surfaces, so a hierarchical algorithm is proposed to minimize the number of evaluations. Measurements and analytical results are included for validation purposes.
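
    The point-classification step relies on the classical even-odd ray-casting test. A 2-D sketch of that test (the paper works with 3-D volumes bounded by parametric surfaces and a hierarchical evaluation scheme):

```python
def inside(point, polygon):
    """Even-odd ray casting: cast a horizontal ray to the right of `point`
    and count crossings with polygon edges; an odd count means interior."""
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                crossings += 1
    return crossings % 2 == 1
```

    Only the parallelepiped centres that pass this interior test are kept in the structured mesh, which is why reducing the number of ray/surface evaluations matters for large targets.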

  19. Intra-pulse modulation recognition using short-time ramanujan Fourier transform spectrogram

    NASA Astrophysics Data System (ADS)

    Ma, Xiurong; Liu, Dan; Shan, Yunlong

    2017-12-01

    Intra-pulse modulation recognition in negative signal-to-noise ratio (SNR) environments is a research challenge. This article presents a robust algorithm for recognizing 5 types of radar signals with large variation ranges in the signal parameters at low SNR, using the combination of the short-time Ramanujan Fourier transform (ST-RFT) and pseudo-Zernike moment invariant features. The ST-RFT provides the time-frequency distribution features for the 5 modulations. The pseudo-Zernike moments provide invariance properties that make it possible to recognize different modulation schemes under different parameter variation conditions from the ST-RFT spectrograms. Simulation results demonstrate that the proposed algorithm achieves a probability of successful recognition (PSR) of over 90% when the SNR is above -5 dB with large variation ranges in the signal parameters: carrier frequency (CF) for all considered signals, hop size (HS) for frequency shift keying (FSK) signals, and the time-bandwidth product for linear frequency modulation (LFM) signals.

  20. FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision

    PubMed Central

    Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include features found in mammalian visual systems, which demand huge computational resources and are therefore not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system, together with an analysis of the computational resources and performance of the applied algorithms. PMID:22164069

  1. A Framework for Optimal Control Allocation with Structural Load Constraints

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Taylor, Brian R.; Jutte, Christine V.; Burken, John J.; Trinh, Khanh V.; Bodson, Marc

    2010-01-01

    Conventional aircraft generally employ mixing algorithms or lookup tables to determine the control surface deflections needed to achieve the moments commanded by the flight control system. Control allocation is the problem of converting desired moments into control effector commands. Next generation aircraft may have many multipurpose, redundant control surfaces, adding considerable complexity to the control allocation problem. These issues can be addressed with optimal control allocation. Most optimal control allocation algorithms enforce control surface position and rate constraints. However, these constraints are insufficient to ensure that the aircraft's structural load limits will not be exceeded by commanded surface deflections. In this paper, a framework is proposed that enables a flight control system with optimal control allocation to incorporate real-time structural load feedback and structural load constraints. A proof-of-concept demonstration of the framework in a simulation of a generic transport aircraft is presented.
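
    The allocation problem stated above, converting desired moments into effector commands, can be sketched as an unconstrained least-squares solve followed by clipping to position limits. The effectiveness matrix and limits below are made-up numbers, and a true optimal allocator (as in the paper) would handle the limits and structural-load constraints inside the optimization rather than by clipping:

```python
import numpy as np

def allocate(B, m_cmd, u_min, u_max):
    """Least-squares control allocation: find surface deflections u with
    B @ u ~= m_cmd (minimum-norm solution for redundant surfaces), then
    clip to position limits.  Clipping is a crude stand-in for proper
    constrained optimization."""
    u, *_ = np.linalg.lstsq(B, m_cmd, rcond=None)
    return np.clip(u, u_min, u_max)

# 3 commanded moments (roll, pitch, yaw) and 5 redundant surfaces
B = np.array([[ 1.0, -1.0, 0.2, -0.2, 0.0],
              [ 0.3,  0.3, 1.0,  1.0, 0.1],
              [ 0.1, -0.1, 0.0,  0.0, 1.0]])
m_cmd = np.array([0.4, 0.8, -0.1])
u = allocate(B, m_cmd, u_min=-0.5, u_max=0.5)
```

    Because the system is underdetermined (5 surfaces, 3 moments), `lstsq` returns the minimum-norm deflection set, which here already respects the limits so the commanded moments are reproduced exactly.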

  2. Bias correction of daily satellite precipitation data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.

    2018-05-01

    Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) is produced by blending satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process is aimed at reducing the bias of CHIRP. However, the biases of CHIRPS in statistical moments and quantile values remain high during the wet season over Java Island. This paper presents a bias correction scheme that adjusts the statistical moments of CHIRP using observed precipitation data. The scheme combines a Genetic Algorithm with a Nonlinear Power Transformation, and the results are evaluated for different seasons and different elevation levels. The experimental results reveal that the scheme robustly reduces the bias in variance (around 100% reduction) and leads to reductions in the first and second quantile biases. However, the bias in the third quantile is only reduced during dry months. Across elevation levels, the performance of the bias correction process differs significantly only in the skewness indicator.
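
    The combination described, a genetic algorithm searching the parameters of a nonlinear power transformation y = a·x^b so that the transformed satellite values match the observed moments, can be sketched as follows. The data, GA settings, and fitness function are illustrative assumptions, not the paper's configuration:

```python
import random

def moments(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var

def fitness(ab, sat, ref_mean, ref_var):
    """Squared mismatch between the moments of a*x**b and the reference."""
    a, b = ab
    mean, var = moments([a * x ** b for x in sat])
    return (mean - ref_mean) ** 2 + (var - ref_var) ** 2

def ga_fit(sat, ref_mean, ref_var, pop=40, gens=60, seed=0):
    """Tiny real-coded GA: truncation selection, blend crossover,
    Gaussian mutation, and elitism."""
    rng = random.Random(seed)
    P = [(rng.uniform(0.1, 3.0), rng.uniform(0.3, 2.0)) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda ab: fitness(ab, sat, ref_mean, ref_var))
        nxt = P[:4]                            # elitism: keep the best 4
        while len(nxt) < pop:
            p1, p2 = rng.sample(P[:20], 2)     # select among the fitter half
            w = rng.random()
            child = tuple(w * u + (1 - w) * v for u, v in zip(p1, p2))
            child = tuple(max(1e-3, g + rng.gauss(0, 0.05)) for g in child)
            nxt.append(child)
        P = nxt
    return min(P, key=lambda ab: fitness(ab, sat, ref_mean, ref_var))

# Synthetic example: "satellite" values are a distorted copy of the gauges
obs = [2.0, 3.5, 1.2, 4.8, 0.9, 6.1, 2.7, 3.3, 5.0, 1.8]  # made-up gauges
sat = [0.7 * x ** 0.8 for x in obs]                        # biased copy
ref_mean, ref_var = moments(obs)
a, b = ga_fit(sat, ref_mean, ref_var)
corrected = [a * x ** b for x in sat]
```

    With the seed fixed, the search is reproducible; the identity transform (a, b) = (1, 1) serves as the uncorrected baseline.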

  3. FPGA-based multimodal embedded sensor system integrating low- and mid-level vision.

    PubMed

    Botella, Guillermo; Martín H, José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include features found in mammalian visual systems, which demand huge computational resources and are therefore not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system, together with an analysis of the computational resources and performance of the applied algorithms.
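
    The bio-inspired optical-flow pipeline itself is beyond a short example, but the low-level motion-estimation primitive can be illustrated with a minimal exhaustive block-matching search, a standard technique rather than the authors' method; all names and sizes below are illustrative:

```python
import numpy as np

def block_match(prev, curr, top, left, size=8, radius=4):
    """Find the displacement (dy, dx) of the block at (top, left) in
    `prev` that best matches `curr`, by exhaustive search of the sum of
    absolute differences (SAD) within +/- radius pixels."""
    block = prev[top:top + size, left:left + size]
    best, best_dydx = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue  # candidate window falls outside the frame
            sad = np.abs(curr[y:y + size, x:x + size] - block).sum()
            if best is None or sad < best:
                best, best_dydx = sad, (dy, dx)
    return best_dydx

# A textured frame and a copy shifted down 2 px and right 1 px
rng = np.random.default_rng(0)
frame0 = rng.random((32, 32))
frame1 = np.roll(np.roll(frame0, 2, axis=0), 1, axis=1)
motion = block_match(frame0, frame1, top=12, left=12)
```

    The exhaustive search is what makes hardware (FPGA/VLSI) implementations attractive: the SAD evaluations are independent and parallelize naturally.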

  4. Anomalous thermal hysteresis in the high-field magnetic moments of magnetic nanoparticles embedded in multi-walled carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Zhao, Guo-Meng; Wang, Jun; Ren, Yang; Beeli, Pieder

    2012-02-01

    We report high-temperature (300-1120 K) magnetic properties of Fe and Fe3O4 nanoparticles embedded in multi-walled carbon nanotubes. We unambiguously show that the magnetic moments of the Fe and Fe3O4 nanoparticles are seemingly enhanced by a factor of about 3 compared with what would be expected for free (unembedded) magnetic nanoparticles. More intriguingly, the enhanced moments were completely lost when the sample was heated to 1120 K, and the moments lost at 1120 K were completely recovered through several thermal cycles below 1020 K. The anomalous thermal hysteresis of the high-field magnetic moments is unlikely to be explained by existing physical models, except for a high-field paramagnetic Meissner effect due to the existence of ultrahigh-temperature superconductivity in the multi-walled carbon nanotubes.

  5. Dike propagation energy balance from deformation modeling and seismic release

    NASA Astrophysics Data System (ADS)

    Bonaccorso, Alessandro; Aoki, Yosuke; Rivalta, Eleonora

    2017-06-01

    Magma is transported in the crust mainly by dike intrusions. In volcanic areas, dikes can ascend toward the free surface and also move by lateral propagation, eventually feeding flank eruptions. Understanding dike mechanics is a key to forecasting the expected propagation and associated hazard. Several studies have been conducted on dike mechanisms and propagation; however, a less deeply investigated aspect is the relation between measured dike-induced deformation and the seismicity released during propagation. We identified a simple expression that can be used as a proxy for the expected mechanical energy released by a propagating dike and is related to its average thickness. For several intrusions around the world (Afar, Japan, and Mount Etna), we correlate this mechanical energy with the seismic moment released by the induced earthquakes. We obtain an empirical law that quantifies the expected seismic energy released before arrest. The proposed approach may be helpful to predict the total seismic moment that will be released by an intrusion, and thus to monitor the energy status during its propagation and the time of dike arrest. Plain Language Summary: Dike propagation is a dominant mechanism for magma ascent, transport, and eruptions. Besides being an intriguing physical process, it has critical hazard implications. After a magma intrusion starts, it is difficult to predict when and where a specific horizontal dike is going to halt and what its final length will be. In our study, we singled out an equation that can be used as a proxy of the expected mechanical energy to be released by the opening dike. We related this expected energy to the seismic moment of several eruptive intrusions around the world (the Afar region, Japanese volcanoes, and Mount Etna). The proposed novel approach is helpful to estimate the total seismic moment to be released, therefore potentially allowing prediction of when the dike will end its propagation.
    The approach helps answer one of the fundamental questions raised by civil protection authorities, namely, "How long will the eruptive fissure propagate?"

  6. Empirical moments of inertia of axially asymmetric nuclei

    DOE PAGES

    Allmond, J. M.; Wood, J. L.

    2017-02-06

    We extracted empirical moments of inertia, J1, J2, J3, of atomic nuclei with E(4_1^+)/E(2_1^+) > 2.7 from experimental 2_{g,γ}^+ energies and electric quadrupole matrix elements determined from multi-step Coulomb excitation data, and the results are compared to expectations based on rigid and irrotational inertial flow. Only by having the signs of the E2 matrix elements, i.e., ⟨2_g^+||M(E2)||2_g^+⟩ and ⟨0_g^+||M(E2)||2_g^+⟩ ⟨2_g^+||M(E2)||2_γ^+⟩ ⟨2_γ^+||M(E2)||0_g^+⟩, can a unique solution for all three components of the inertia tensor of an asymmetric top be obtained. While the absolute moments of inertia fall between the rigid and irrotational values as expected, the relative moments of inertia appear to be qualitatively consistent with the β² sin²(γ) dependence of the Bohr Hamiltonian, which originates from an SO(5) invariance. A better understanding of inertial flow is central to improving collective models, particularly hydrodynamic-based collective models. The results suggest that a better description of collective dynamics and inertial flow for atomic nuclei is needed.
    The inclusion of vorticity degrees of freedom may provide a path forward. This is our first report of empirical moments of inertia for all three axes, and the results should challenge both collective and microscopic descriptions of inertial flow.

  7. Real-time implementation of logo detection on open source BeagleBoard

    NASA Astrophysics Data System (ADS)

    George, M.; Kehtarnavaz, N.; Estevez, L.

    2011-03-01

    This paper presents the real-time implementation of our previously developed logo detection and tracking algorithm on the open source BeagleBoard mobile platform. This platform has an OMAP processor that incorporates an ARM Cortex processor. The algorithm combines the Scale Invariant Feature Transform (SIFT) with k-means clustering, online color calibration, and moment invariants to robustly detect and track logos in video. Various optimization steps that are carried out to allow the real-time execution of the algorithm on the BeagleBoard are discussed. The results obtained are compared to the PC real-time implementation results.

  8. Multiattribute Fixed-State Utility Assessment

    DTIC Science & Technology

    1981-03-27

    of a companion distribution, are presented in Appendix A.
    Because of the theory of conditional expected utility and the modelling of the utilities by...obtain approximations to the moments, using the companion distribution discussed in Section 3. The moments of both distributions are discussed...1956b, 21, 207-216. Slovic, P. "From Shakespeare to Simon: Speculation -- and some evidence -- about man's ability to process information."

  9. Estimating magnitude and frequency of floods using the PeakFQ 7.0 program

    USGS Publications Warehouse

    Veilleux, Andrea G.; Cohn, Timothy A.; Flynn, Kathleen M.; Mason, Robert R., Jr.; Hummel, Paul R.

    2014-01-01

    Flood-frequency analysis provides information about the magnitude and frequency of flood discharges based on records of annual maximum instantaneous peak discharges collected at streamgages. The information is essential for defining flood-hazard areas, for managing floodplains, and for designing bridges, culverts, dams, levees, and other flood-control structures. Bulletin 17B (B17B) of the Interagency Advisory Committee on Water Data (IACWD; 1982) codifies the standard methodology for conducting flood-frequency studies in the United States. B17B specifies that annual peak-flow data are to be fit to a log-Pearson Type III distribution. Specific methods are also prescribed for improving skew estimates using regional skew information, tests for high and low outliers, adjustments for low outliers and zero flows, and procedures for incorporating historical flood information. The authors of B17B identified various needs for methodological improvement and recommended additional study.
    In response to these needs, the Advisory Committee on Water Information (ACWI, successor to IACWD; http://acwi.gov/), Subcommittee on Hydrology (SOH), Hydrologic Frequency Analysis Work Group (HFAWG) has recommended modest changes to B17B. These changes include adoption of a generalized method-of-moments estimator denoted the Expected Moments Algorithm (EMA) (Cohn and others, 1997) and a generalized version of the Grubbs-Beck test for low outliers (Cohn and others, 2013). The SOH requested that the USGS implement these changes in a user-friendly, publicly accessible program.

  10. The high performance parallel algorithm for Unified Gas-Kinetic Scheme

    NASA Astrophysics Data System (ADS)

    Li, Shiyi; Li, Qibing; Fu, Song; Xu, Jinxiu

    2016-11-01

    A high performance parallel algorithm for UGKS is developed to simulate three-dimensional internal and external flows on arbitrary grid systems. The physical domain and velocity domain are divided into different blocks and distributed according to the two-dimensional Cartesian topology, with intra-communicators in the physical domain for data exchange and other intra-communicators in the velocity domain for sum reduction to moment integrals. Numerical results for three-dimensional cavity flow and flow past a sphere agree well with results from existing studies and validate the applicability of the algorithm. The scalability of the algorithm is tested on both small (1-16) and large (729-5832) numbers of processors.
    The tested speed-up ratio is near linear and thus the efficiency is around 1, which reveals the good scalability of the present algorithm.

  11. Detection of unmanned aerial vehicles using a visible camera system

    PubMed

    Hu, Shuowen; Goldman, Geoffrey H.; Borel-Donohue, Christoph C.

    2017-01-20

    Unmanned aerial vehicles (UAVs) flown by adversaries are an emerging asymmetric threat to homeland security and the military. To help address this threat, we developed and tested a computationally efficient UAV detection algorithm consisting of horizon finding, motion feature extraction, blob analysis, and coherence analysis. We compare the performance of this algorithm against two variants, one using the difference image intensity as the motion features and another using higher-order moments. The proposed algorithm and its variants are tested using field test data of a group 3 UAV acquired with a panoramic video camera in the visible spectrum. The performance of the algorithms was evaluated using receiver operating characteristic curves. The results show that the proposed approach had the best performance compared to the two algorithmic variants.

  12. Recognition of fiducial marks applied to robotic systems. Thesis
    NASA Technical Reports Server (NTRS)

    Georges, Wayne D.

    1991-01-01

    The objective was to devise a method to determine the position and orientation of the links of a PUMA 560 using fiducial marks. As a result, it is necessary to design fiducial marks and a corresponding feature extraction algorithm. The marks used are composites of three basic shapes: a circle, an equilateral triangle, and a square. Once a mark is imaged, it is thresholded and the borders of each shape are extracted. These borders are subsequently used in a feature extraction algorithm. Two feature extraction algorithms are used to determine which one produces the most reliable results. The first algorithm is based on moment invariants and the second is based on the discrete version of the psi-s curve of the boundary. The latter algorithm is clearly superior for this application.

  13. Decentralized semi-active damping of free structural vibrations by means of structural nodes with an on/off ability to transmit moments

    NASA Astrophysics Data System (ADS)

    Poplawski, Blazej; Mikułowski, Grzegorz; Mróz, Arkadiusz; Jankowski, Łukasz

    2018-02-01

    This paper proposes, tests numerically, and verifies experimentally a decentralized control algorithm with local feedback for semi-active mitigation of free vibrations in frame structures.
    The algorithm aims at transferring the vibration energy of low-order, lightly damped structural modes into high-frequency modes of vibration, where it is quickly damped by natural mechanisms of material damping. Such an approach to mitigation of vibrations, known as the prestress-accumulation release (PAR) strategy, has previously been applied only in global control schemes to the fundamental vibration mode of a cantilever beam. In contrast, the decentralization and local feedback allow the approach proposed here to be applied to more complex frame structures and vibration patterns, where the global control ceases to be intuitively obvious. The actuators (truss-frame nodes with a controllable ability to transmit moments) are essentially unblockable hinges that become unblocked only for very short time periods in order to trigger local modal transfer of energy. The paper proposes a computationally simple model of the controllable nodes, specifies the control performance measure, yields basic characteristics of the optimum control, proposes the control algorithm, and then tests it in numerical and experimental examples.

  14. Development of regional skews for selected flood durations for the Central Valley Region, California, based on data through water year 2008

    USGS Publications Warehouse

    Lamontagne, Jonathan R.; Stedinger, Jery R.; Berenbrock, Charles; Veilleux, Andrea G.; Ferris, Justin C.; Knifong, Donna L.

    2012-01-01

    Flood-frequency information is important in the Central Valley region of California because of the high risk of catastrophic flooding.
    Most traditional flood-frequency studies focus on peak flows, but for assessing the adequacy of reservoirs, levees, and other flood-control structures, sustained flood-flow (flood-duration) frequency data are needed. This study focuses on rainfall or rain-on-snow floods, rather than the annual maximum, because rain events produce the largest floods in the region. A key to estimating flood-duration frequency is determining the regional skew for such data. Of the 50 sites used in this study to determine regional skew, 28 sites were considered to have little to no significant regulated flow, and for the 22 sites considered significantly regulated, unregulated daily flow data were synthesized by using reservoir storage changes and diversion records. The unregulated annual maximum rainfall flood flows for selected durations (1-day, 3-day, 7-day, 15-day, and 30-day) for all 50 sites were furnished by the U.S. Army Corps of Engineers. Station skew was determined by using the expected moments algorithm program for fitting the Pearson Type 3 flood-frequency distribution to the logarithms of annual flood-duration data. Bayesian generalized least squares regression procedures used in earlier studies were modified to address problems caused by large cross correlations among concurrent rainfall floods in California, and to address the extensive censoring of low outliers at some sites, by using the new expected moments algorithm for fitting the LP3 distribution to rainfall flood-duration data. To properly account for these problems and to develop suitable regional-skew regression models and regression diagnostics, a combination of ordinary least squares, weighted least squares, and Bayesian generalized least squares regressions was adopted. This new methodology determined that a nonlinear model relating regional skew to mean basin elevation was the best model for each flood duration.
    The regional-skew values ranged from -0.74 for a flood duration of 1 day and a mean basin elevation less than 2,500 feet to values near 0 for a flood duration of 7 days and a mean basin elevation greater than 4,500 feet. This relation between skew and elevation reflects the interaction of snow and rain, which increases with increased elevation. The regional skews are more accurate, and the mean squared errors are less, than those of the Interagency Advisory Committee on Water Data's national skew map of Bulletin 17B.

  15. An Expectation-Maximization Algorithm for Amplitude Estimation of Saturated Optical Transient Signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kagie, Matthew J.; Lanterman, Aaron D.

    2017-12-01

    This paper addresses parameter estimation for an optical transient signal when the received data have been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects.
    We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.

  16. Quantitative Structure Retention Relationships of Polychlorinated Dibenzodioxins and Dibenzofurans

    DTIC Science & Technology

    1991-08-01

    be a projection onto the X-Y plane. The algorithm for this calculation can be found in Stouch and Jurs (22), but was further refined by Rohrbaugh and...through-space distances. WPSA2 (c) Weighted positive charged surface area. MOMH2 (c) Second major moment of inertia with hydrogens attached. CSTR 3 (d) Sum...of the models. The robust regression analysis method calculates a regression model using a least median squares algorithm, which is not as susceptible

  17. Moments and Signal Processing: Proceedings of the Conference Held in Monterey, CA, on March 30-31, 1992

    DTIC Science & Technology

    1992-08-26

    the following three categories, depending where the nonlinear transformation is being applied on the data: (i) the Bussgang algorithms, where the...algorithms belong to one of the following three categories, depending where the nonlinear transformation is being applied on the data: * The Bussgang...communication systems usually require an initial training period, during which a known data sequence (i.e., training sequence) is transmitted [43], [45].
  18. Progress in Guidance and Control Research for Space Access and Hypersonic Vehicles (Preprint)

    DTIC Science & Technology

    2006-09-01

    affect range capabilities. In 2003 an integrated adaptive guidance control and trajectory re-shaping algorithm was flight demonstrated using in-flight...[21] which tied for the best scores, as well as a Linear Quadratic Regulator [22], Predictor-Corrector [23], and Shuttle-like entry [24] guidance method...Accurate knowledge of mass, center-of-gravity, and moments of inertia improves the performance of not only IAG&C algorithms but also model-based

  19. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Semi-Disk Structure

    DTIC Science & Technology

    2018-01-01

    statistical moments of order 2, 3, and 4. The probability density function (PDF) of the vibrational time series of a good bearing has a Gaussian...ARL-TR-8271 ● JAN 2018 US Army Research Laboratory An Automated Energy Detection Algorithm Based on Morphological Filter...when it is no longer needed. Do not return it to the originator.
  20. Spectroradiometric calibration of the Thematic Mapper and Multispectral Scanner system [White Sands, New Mexico]

    NASA Technical Reports Server (NTRS)

    Palmer, J. M. (Principal Investigator); Slater, P. N.

    1984-01-01

    The newly built Caste spectropolarimeters gave satisfactory performance during tests in the solar radiometer and helicopter modes. A bandwidth normalization technique based on analysis of the moments of the spectral responsivity curves was used to analyze the spectral bands of the MSS and TM subsystems of the LANDSAT 4 and 5 satellites. Results include the effective wavelength, the bandpass, the wavelength limits, and the normalized responsivity for each spectral channel. Temperature coefficients for TM PF channel 6 were also derived. The moments normalization method used yields sensor parameters whose derivation is independent of source characteristics (i.e., incident solar spectral irradiance, atmospheric transmittance, or ground reflectance).
    The errors expected using these parameters are lower than those expected using other normalization methods.

  21. Top-quark loops and the muon anomalous magnetic moment

    DOE PAGES

    Czarnecki, Andrzej; Marciano, William J.

    2017-12-07

    The current status of electroweak radiative corrections to the muon anomalous magnetic moment is discussed.
    Asymptotic expansions for some important electroweak two-loop top-quark triangle diagrams are illustrated and extended to higher order. Results are compared with the more general integral representation solution for generic fermion triangle loops coupled to pseudoscalar and scalar bosons of arbitrary mass. Furthermore, excellent agreement is found for a broader than expected range of mass parameters.

  22. Electromagnetic dipole moments of charged baryons with bent crystals at the LHC

    NASA Astrophysics Data System (ADS)

    Bagli, E.; Bandiera, L.; Cavoto, G.; Guidi, V.; Henry, L.; Marangotto, D.; Martinez Vidal, F.; Mazzolari, A.; Merli, A.; Neri, N.; Ruiz Vidal, J.

    2017-12-01

    We propose a unique program of measurements of electric and magnetic dipole moments of charm, beauty, and strange charged baryons at the LHC, based on the phenomenon of spin precession of channeled particles in bent crystals.
Studies of crystal channeling and spin precession of positively- and negatively-charged particles are presented, along with feasibility studies and expected sensitivities for the proposed experiment using a layout based on the LHCb detector.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MPLB...3250115K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MPLB...3250115K"><span>Fourier-Mellin moment-based intertwining map for image encryption</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kaur, Manjit; Kumar, Vijay</p> <p>2018-03-01</p> <p>In this paper, a robust image encryption technique that utilizes Fourier-Mellin moments and intertwining logistic map is proposed. Fourier-Mellin moment-based intertwining logistic map has been designed to overcome the issue of low sensitivity of an input image. Multi-objective Non-Dominated Sorting Genetic Algorithm (NSGA-II) based on Reinforcement Learning (MNSGA-RL) has been used to optimize the required parameters of intertwining logistic map. Fourier-Mellin moments are used to make the secret keys more secure. Thereafter, permutation and diffusion operations are carried out on input image using secret keys. The performance of proposed image encryption technique has been evaluated on five well-known benchmark images and also compared with seven well-known existing encryption techniques. The experimental results reveal that the proposed technique outperforms others in terms of entropy, correlation analysis, a unified average changing intensity and the number of changing pixel rate. 
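The encryption scheme described above hinges on a chaotic keystream drawn from a logistic-type map. The following is a minimal sketch using the plain logistic map as a stand-in for the paper's Fourier-Mellin-seeded intertwining logistic map; the seed value and map parameter here are illustrative assumptions, not the authors' optimized keys:

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Byte keystream from the logistic map x -> r*x*(1-x).

    Plain logistic map used as a stand-in for the paper's intertwining
    logistic map; x0 and r are illustrative key parameters.
    """
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256  # quantize chaotic state to a byte
    return out

def xor_diffuse(image_bytes, x0, r=3.99):
    """XOR the flattened image with the keystream; applying it twice decrypts."""
    stream = logistic_keystream(x0, r, image_bytes.size)
    return image_bytes ^ stream

img = np.arange(16, dtype=np.uint8)   # toy stand-in for flattened image bytes
enc = xor_diffuse(img, 0.7)
dec = xor_diffuse(enc, 0.7)           # same key recovers the plaintext
```

Because the map is chaotic, a tiny change in the seed `x0` yields a completely different keystream, which is the source of the key sensitivity such schemes rely on.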
The simulation results reveal that the proposed technique provides high level of security and robustness against various types of attacks.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16036284','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16036284"><span>A pilot study of smoking and associated behaviors of low-income expectant fathers.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Everett, Kevin D; Gage, Jeffrey; Bullock, Linda; Longo, Daniel R; Geden, Elizabeth; Madsen, Richard W</p> <p>2005-04-01</p> <p>Pregnancy is considered a teachable moment for helping women who smoke to quit, yet few studies have examined smoking behavior of expectant fathers. The present study considers the possibility that pregnancy is a teachable moment for expectant fathers as well and describes smoking and associated behaviors of men during their partner's pregnancy. Participants were 138 low-income men living with their pregnant partners. Using telephone interviews, we found 63% of the men had smoked at least 100 cigarettes in their lifetime. Current smoking was reported by 49.3% of expectant fathers (39.1% daily smoking; 10.2% some days). Expectant fathers' current smoking was associated with having a lower level of education (p<.0001), pregnant partner being a current smoker (p=.0002), higher quantity of alcohol consumption per day of drinking (p=.0003), and absence of smoking prohibitions inside the home (p<.0001). In the past year, 70.1% of the current smokers tried to quit. We found high rates of smoking in low-income expectant fathers, and an expectant father's smoking during his partner's pregnancy was associated with his pregnant partner continuing to smoke. 
A majority of expectant fathers identified as current smokers tried to quit in the past year or indicated an intention to quit in the near future. Intervention during pregnancy that targets pregnant women and expectant fathers who smoke could lead to more households without tobacco use and thus have positive implications for paternal, maternal, and family health. Further clinical and research attention is needed to address the smoking behaviors of both expectant fathers and their pregnant partners.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014PhDT.......219M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014PhDT.......219M"><span>Seismogeodesy and Rapid Earthquake and Tsunami Source Assessment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Melgar Moctezuma, Diego</p> <p></p> <p>This dissertation presents an optimal combination algorithm for strong motion seismograms and regional high rate GPS recordings. This seismogeodetic solution produces estimates of ground motion that recover the whole seismic spectrum, from the permanent deformation to the Nyquist frequency of the accelerometer. This algorithm will be demonstrated and evaluated through outdoor shake table tests and recordings of large earthquakes, notably the 2010 Mw 7.2 El Mayor-Cucapah earthquake and the 2011 Mw 9.0 Tohoku-oki events. This dissertation will also show that strong motion velocity and displacement data obtained from the seismogeodetic solution can be instrumental in quickly determining basic parameters of the earthquake source. We will show how GPS and seismogeodetic data can produce rapid estimates of centroid moment tensors, static slip inversions, and most importantly, kinematic slip inversions.
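The optimal combination idea can be illustrated with a simple complementary filter: low-frequency GPS displacement corrects the drift of twice-integrated acceleration, while the accelerometer supplies the high-frequency detail. This is only an illustrative stand-in for the dissertation's Kalman-filter-based seismogeodetic combination; the function name and blending constant are assumptions:

```python
import numpy as np

def complementary_combine(disp_gps, accel, dt, alpha=0.98):
    """Blend GPS displacement (drift-free, low rate) with twice-integrated
    acceleration (broadband but drifting) via a first-order complementary
    filter. Illustrative sketch, not the dissertation's actual algorithm."""
    vel = np.cumsum(accel) * dt        # integrate acceleration -> velocity
    disp_acc = np.cumsum(vel) * dt     # integrate again -> displacement (drifts)
    out = np.empty_like(disp_gps)
    est = disp_gps[0]
    for i, d in enumerate(disp_gps):
        # follow the accelerometer-derived increment, pull toward GPS for drift
        inc = disp_acc[i] - (disp_acc[i - 1] if i else 0.0)
        est = alpha * (est + inc) + (1.0 - alpha) * d
        out[i] = est
    return out

# sanity check: zero acceleration and constant GPS displacement stay constant
out = complementary_combine(np.full(100, 2.0), np.zeros(100), dt=0.01)
```

A production combination would instead run a Kalman filter with the acceleration as a control input, but the drift-correction role of the GPS observation is the same.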
Throughout the dissertation special emphasis will be placed on how to compute these source models with minimal interaction from a network operator. Finally, we will show that the incorporation of off-shore data such as ocean-bottom pressure and RTK-GPS buoys can better constrain the shallow slip of large subduction events. We will demonstrate through numerical simulations of tsunami propagation that the earthquake sources derived from the seismogeodetic and ocean-based sensors are detailed enough to provide a timely and accurate assessment of expected tsunami intensity immediately following a large earthquake.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/35977-solving-multistage-stochastic-programming-models-portfolio-selection-outstanding-liabilities','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/35977-solving-multistage-stochastic-programming-models-portfolio-selection-outstanding-liabilities"><span>Solving multistage stochastic programming models of portfolio selection with outstanding liabilities</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Edirisinghe, C.</p> <p>1994-12-31</p> <p>Models for portfolio selection in the presence of an outstanding liability have received significant attention, for example, models for pricing options. The problem may be described briefly as follows: given a set of risky securities (and a riskless security such as a bond), and given a set of cash flows, i.e., outstanding liability, to be met at some future date, determine an initial portfolio and a dynamic trading strategy for the underlying securities such that the initial cost of the portfolio is within a prescribed wealth level and the expected cash surpluses arising from trading are maximized.
While the trading strategy should be self-financing, there may also be other restrictions such as leverage and short-sale constraints. Usually the treatment is limited to binomial evolution of uncertainty (of stock price), with possible extensions for developing computational bounds for multinomial generalizations. Posing the problem as a stochastic programming model of decision making, we investigate alternative efficient solution procedures under continuous evolution of uncertainty, for discrete time economies. We point out an important moment problem arising in the portfolio selection problem, the solution (or bounds) on which provides the basis for developing efficient computational algorithms. While the underlying stochastic program may be computationally tedious even for a modest number of trading opportunities (i.e., time periods), the derived algorithms may be used to solve problems whose sizes are beyond those considered within stochastic optimization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22489599-generalized-efficient-algorithm-computing-multipole-energies-gradients-based-cartesian-tensors','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22489599-generalized-efficient-algorithm-computing-multipole-energies-gradients-based-cartesian-tensors"><span>Generalized and efficient algorithm for computing multipole energies and gradients based on Cartesian tensors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Lin, Dejun, E-mail: dejun.lin@gmail.com</p> <p>2015-09-21</p> <p>Accurate representation of intermolecular forces has been the central task of classical atomic simulations, known as molecular mechanics.
Recent advancements in molecular mechanics models have put forward the explicit representation of permanent and/or induced electric multipole (EMP) moments. The formulas developed so far to calculate EMP interactions tend to have complicated expressions, especially in Cartesian coordinates, which can only be applied to a specific kernel potential function. For example, one needs to develop a new formula each time a new kernel function is encountered. The complication of these formalisms arises from an intriguing and yet obscured mathematical relation between the kernel functions and the gradient operators. Here, I uncover this relation via rigorous derivation and find that the formula to calculate EMP interactions is basically invariant to the potential kernel functions as long as they are of the form f(r), i.e., any Green’s function that depends on inter-particle distance. I provide an algorithm for efficient evaluation of EMP interaction energies, forces, and torques for any kernel f(r) up to any arbitrary rank of EMP moments in Cartesian coordinates. The working equations of this algorithm are essentially the same for any kernel f(r). Recently, a few recursive algorithms were proposed to calculate EMP interactions. Depending on the kernel functions, the algorithm here is about 4–16 times faster than these algorithms in terms of the required number of floating point operations and is much more memory efficient. I show that it is even faster than a theoretically ideal recursion scheme, i.e., one that requires 1 floating point multiplication and 1 addition per recursion step. This algorithm has a compact vector-based expression that is optimal for computer programming. The Cartesian nature of this algorithm makes it fit easily into modern molecular simulation packages as compared with spherical coordinate-based algorithms.
A software library based on this algorithm has been implemented in C++11 and has been released.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25281408','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25281408"><span>Extrinsic and intrinsic index finger muscle attachments in an OpenSim upper-extremity model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lee, Jong Hwa; Asakawa, Deanna S; Dennerlein, Jack T; Jindrich, Devin L</p> <p>2015-04-01</p> <p>Musculoskeletal models allow estimation of muscle function during complex tasks. We used objective methods to determine possible attachment locations for index finger muscles in an OpenSim upper-extremity model. Data-driven optimization algorithms, Simulated Annealing and Hooke-Jeeves, estimated tendon locations crossing the metacarpophalangeal (MCP), proximal interphalangeal (PIP) and distal interphalangeal (DIP) joints by minimizing the difference between model-estimated and experimentally-measured moment arms. Sensitivity analysis revealed that multiple sets of muscle attachments with similar optimized moment arms are possible, requiring additional assumptions or data to select a single set of values. The smoothest muscle paths were assumed to be biologically reasonable. Estimated tendon attachments resulted in variance accounted for (VAF) between calculated moment arms and measured values of 78% for flex/extension and 81% for ab/adduction at the MCP joint. VAF averaged 67% at the PIP joint and 54% at the DIP joint. VAF values at PIP and DIP joints partially reflected the constant moment arms reported for muscles about these joints. However, all moment arm values found through optimization were non-linear and non-constant.
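The Hooke-Jeeves method named in the attachment-optimization study above is a derivative-free pattern search: probe each coordinate with a trial step, keep improvements, and shrink the step when none succeed. A compact exploratory-move sketch follows; the quadratic objective and step sizes are illustrative, not the study's moment-arm cost function:

```python
def hooke_jeeves(f, x0, step=0.5, tol=1e-6, shrink=0.5):
    """Minimal exploratory-move variant of Hooke-Jeeves pattern search.

    Probes +/-step along each coordinate; shrinks the step when no
    probe improves the objective. Illustrative sketch only.
    """
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                ft = f(trial)
                if ft < fx:          # keep the improving probe
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink           # refine the pattern
    return x, fx

# toy objective with minimum at (1, -2)
best, val = hooke_jeeves(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [0.0, 0.0])
```

The full Hooke-Jeeves algorithm adds a "pattern move" that extrapolates along the direction of recent success; the exploratory core shown here is what makes it gradient-free, which suits noisy moment-arm objectives.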
Relationships between moment arms and joint angles were best described with quadratic equations for tendons at the PIP and DIP joints.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016GGG....17.3754L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016GGG....17.3754L"><span>Ultra-high sensitivity moment magnetometry of geological samples using magnetic microscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lima, Eduardo A.; Weiss, Benjamin P.</p> <p>2016-09-01</p> <p>Useful paleomagnetic information is expected to be recorded by samples with moments up to three orders of magnitude below the detection limit of standard superconducting rock magnetometers. Such samples are now detectable using recently developed magnetic microscopes, which map the magnetic fields above room-temperature samples with unprecedented spatial resolutions and field sensitivities. However, realizing this potential requires the development of techniques for retrieving sample moments from magnetic microscopy data. With this goal, we developed a technique for uniquely obtaining the net magnetic moment of geological samples from magnetic microscopy maps of unresolved or nearly unresolved magnetization. This technique is particularly powerful for analyzing small, weakly magnetized samples such as meteoritic chondrules and terrestrial silicate crystals like zircons. We validated this technique by applying it to field maps generated from synthetic sources and also to field maps measured using a superconducting quantum interference device (SQUID) microscope above geological samples with moments down to 10<sup>-15</sup> Am<sup>2</sup>.
For the most magnetic rock samples, the net moments estimated from the SQUID microscope data are within error of independent moment measurements acquired using lower sensitivity standard rock magnetometers. In addition to its superior moment sensitivity, SQUID microscope net moment magnetometry also enables the identification and isolation of magnetic contamination and background sources, which is critical for improving accuracy in paleomagnetic studies of weakly magnetic samples.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5876608','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5876608"><span>Multi-Target Angle Tracking Algorithm for Bistatic MIMO Radar Based on the Elements of the Covariance Matrix</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo</p> <p>2018-01-01</p> <p>In this paper, we consider the problem of tracking the direction of arrivals (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar. A high-precision tracking algorithm for target angle is proposed. First, the linear relationship between the covariance matrix difference and the angle difference of the adjacent moment was obtained through three approximate relations. Then, the proposed algorithm obtained the relationship between the elements in the covariance matrix difference. On this basis, the performance of the algorithm was improved by averaging the covariance matrix element. Finally, the least square method was used to estimate the DOD and DOA. 
The algorithm realized the automatic correlation of the angle and provided better performance when compared with the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrated the effectiveness of the proposed algorithm. The algorithm provides the technical support for the practical application of MIMO radar. PMID:29518957</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29518957','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29518957"><span>Multi-Target Angle Tracking Algorithm for Bistatic Multiple-Input Multiple-Output (MIMO) Radar Based on the Elements of the Covariance Matrix.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo</p> <p>2018-03-07</p> <p>In this paper, we consider the problem of tracking the direction of arrivals (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar. A high-precision tracking algorithm for target angle is proposed. First, the linear relationship between the covariance matrix difference and the angle difference of the adjacent moment was obtained through three approximate relations. Then, the proposed algorithm obtained the relationship between the elements in the covariance matrix difference. On this basis, the performance of the algorithm was improved by averaging the covariance matrix element. Finally, the least square method was used to estimate the DOD and DOA. The algorithm realized the automatic correlation of the angle and provided better performance when compared with the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrated the effectiveness of the proposed algorithm. 
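The final estimation step described above reduces to an ordinary least-squares problem: stack the covariance-matrix element differences into a vector and solve a linearized model for the angle increments. In the sketch below, `J` is a hypothetical sensitivity matrix standing in for the paper's covariance-difference relations; the shapes and names are illustrative:

```python
import numpy as np

# Hypothetical linearized model: changes in covariance-matrix elements
# delta_r respond linearly to the DOD/DOA increments delta_theta,
# delta_r ~= J @ delta_theta. J here is random, purely for illustration.
rng = np.random.default_rng(0)
J = rng.standard_normal((8, 2))            # 8 matrix elements, 2 angles
delta_theta_true = np.array([0.01, -0.02]) # small angle increments (rad)
delta_r = J @ delta_theta_true             # observed element differences

# least-squares recovery of the angle increments
delta_theta_hat, *_ = np.linalg.lstsq(J, delta_r, rcond=None)
```

Averaging over many covariance-matrix elements, as the paper does, corresponds to adding rows to `J`, which improves the conditioning of this inversion.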
The algorithm provides the technical support for the practical application of MIMO radar.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=Using+AND+Multivariate+AND+Statistics&pg=4&id=EJ809795','ERIC'); return false;" href="https://eric.ed.gov/?q=Using+AND+Multivariate+AND+Statistics&pg=4&id=EJ809795"><span>Simulating Multivariate Nonnormal Data Using an Iterative Algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Ruscio, John; Kaczetow, Walter</p> <p>2008-01-01</p> <p>Simulating multivariate nonnormal data with specified correlation matrices is difficult. One especially popular method is Vale and Maurelli's (1983) extension of Fleishman's (1978) polynomial transformation technique to multivariate applications. This requires the specification of distributional moments and the calculation of an intermediate…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=mcdonald&pg=7&id=EJ651460','ERIC'); return false;" href="https://eric.ed.gov/?q=mcdonald&pg=7&id=EJ651460"><span>An Algorithm for the Hierarchical Organization of Path Diagrams and Calculation of Components of Expected Covariance.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Boker, Steven M.; McArdle, J. J.; Neale, Michael</p> <p>2002-01-01</p> <p>Presents an algorithm for the production of a graphical diagram from a matrix formula in such a way that its components are logically and hierarchically arranged. The algorithm, which relies on the matrix equations of J. McArdle and R. 
McDonald (1984), calculates the individual path components of expected covariance between variables and…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23701203','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23701203"><span>Giant enhancement and anomalous thermal hysteresis of saturation moment in magnetic nanoparticles embedded in multiwalled carbon nanotubes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhao, Guo-meng; Wang, Jun; Ren, Yang; Beeli, Pieder</p> <p>2013-06-12</p> <p>We report a high-energy synchrotron X-ray diffraction spectrum and high-temperature magnetic data for multiwalled carbon nanotubes (MWCNTs) embedded with Fe and Fe3O4 nanoparticles. We unambiguously show that the saturation moments of the embedded Fe and Fe3O4 nanoparticles are enhanced by a factor of about 3.0 compared with what would be expected if they were unembedded. More intriguingly, the enhanced moments were completely lost when the sample was heated up to 1120 K, and the lost moments were completely recovered through two more thermal cycles below 1020 K.
These novel results cannot be explained by the magnetism of the Fe and Fe3O4 impurity phases, the magnetic proximity effect between magnetic nanoparticles and carbon, and the ballistic transport of MWCNTs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12697951','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12697951"><span>Quantification of shoulder and elbow passive moments in the sagittal plane as a function of adjacent angle fixations.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kodek, Timotej; Munih, Marko</p> <p>2003-01-01</p> <p>The goal of this study was an assessment of the shoulder and elbow joint passive moments in the sagittal plane for six healthy individuals. Either the shoulder or elbow joints were moved at a constant speed, very slowly throughout a large portion of their range by means of an industrial robot. During the whole process the arm was held fully passively, while the end point force data and the shoulder, elbow and wrist angle data were collected. The presented method unequivocally reveals a large passive moment adjacent angle dependency in the central angular range, where most everyday actions are performed. It is expected to prove useful in the future work when examining subjects with neuromuscular disorders. 
Their passive moments may show a quite different pattern from the ones obtained in this study.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1246339-applying-nonlinear-diffusion-acceleration-neutron-transport-eigenvalue-problem-anisotropic-scattering','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1246339-applying-nonlinear-diffusion-acceleration-neutron-transport-eigenvalue-problem-anisotropic-scattering"><span>Applying nonlinear diffusion acceleration to the neutron transport k-Eigenvalue problem with anisotropic scattering</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Willert, Jeffrey; Park, H.; Taitano, William</p> <p>2015-11-01</p> <p>High-order/low-order (or moment-based acceleration) algorithms have been used to significantly accelerate the solution to the neutron transport k-eigenvalue problem over the past several years. Recently, the nonlinear diffusion acceleration algorithm has been extended to solve fixed-source problems with anisotropic scattering sources. In this paper, we demonstrate that we can extend this algorithm to k-eigenvalue problems in which the scattering source is anisotropic and a significant acceleration can be achieved.
Lastly, we demonstrate that the low-order, diffusion-like eigenvalue problem can be solved efficiently using a technique known as nonlinear elimination.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006APS..DPPGI2004N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006APS..DPPGI2004N"><span>Multidimensional kinetic simulations using dissipative closures and other reduced Vlasov methods for differing particle magnetizations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Newman, David L.</p> <p>2006-10-01</p> <p>Kinetic plasma simulations in which the phase-space distribution functions are advanced directly via the coupled Vlasov and Poisson (or Maxwell) equations---better known simply as Vlasov simulations---provide a valuable low-noise complement to the more commonly employed Particle-in-Cell (PIC) simulations. However, in more than one spatial dimension Vlasov simulations become numerically demanding due to the high dimensionality of x--v phase-space. Methods that can reduce this computational demand are therefore highly desirable. Several such methods will be presented, which treat the phase-space dynamics along a dominant dimension (e.g., parallel to a beam or current) with the full Vlasov propagator, while employing a reduced description, such as moment equations, for the evolution perpendicular to the dominant dimension. A key difference between the moment-based (and other reduced) methods considered here and standard fluid methods is that the moments are now functions of a phase-space coordinate (e.g. moments of vy in z--vz--y phase space, where z is the dominant dimension), rather than functions of spatial coordinates alone. Of course, moment-based methods require closure. 
For effectively unmagnetized species, new dissipative closure methods inspired by those of Hammett and Perkins [PRL, 64, 3019 (1990)] have been developed, which exactly reproduce the linear electrostatic response for a broad class of distributions with power-law tails, as are commonly measured in space plasmas. The nonlinear response, which requires more care, will also be discussed. For weakly magnetized species, an alternative algorithm has been developed in which the distributions are assumed to gyrate about the magnetic field with a fixed nominal perpendicular "thermal" velocity, thereby reducing the required phase-space dimension by one. These reduced algorithms have been incorporated into 2-D codes used to study the evolution of nonlinear structures such as double layers and electron holes in Earth's auroral zone.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25515671','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25515671"><span>Inattentional blindness reflects limitations on perception, not memory: Evidence from repeated failures of awareness.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ward, Emily J; Scholl, Brian J</p> <p>2015-06-01</p> <p>Perhaps the most striking phenomenon of visual awareness is inattentional blindness (IB), in which a surprisingly salient event right in front of you may go completely unseen when unattended. Does IB reflect a failure of perception, or only of subsequent memory? Previous work has been unable to answer this question, due to a seemingly intractable dilemma: ruling out memory requires immediate perceptual reports, but soliciting such reports fuels an expectation that eliminates IB.
Here we introduce a way of evoking repeated IB in the same subjects and the same session: we show that observers fail to report seeing salient events not only when they have no expectation, but also when they have the wrong expectations about the event's nature. This occurs when observers must immediately report seeing anything unexpected, even mid-event. Repeated IB thus demonstrates that IB is aptly named: it reflects a genuine deficit in moment-by-moment conscious perception, rather than a form of inattentional amnesia.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015APS..MARZ29002E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015APS..MARZ29002E"><span>Calculation of the Curie temperature of Ni using first principles based Wang-Landau Monte-Carlo</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Eisenbach, Markus; Yin, Junqi; Li, Ying Wai; Nicholson, Don</p> <p>2015-03-01</p> <p>We combine constrained first-principles density functional calculations with a Wang-Landau Monte Carlo algorithm to calculate the Curie temperature of Ni. Mapping the magnetic interactions in Ni onto a Heisenberg-like model underestimates the Curie temperature. Using a model, we show that the addition of the magnitude of the local magnetic moments can account for the difference in the calculated Curie temperature. For ab initio calculations, we have extended our Locally Selfconsistent Multiple Scattering (LSMS) code to constrain the magnitude of the local moments in addition to their direction and apply the Replica Exchange Wang-Landau method to sample the larger phase space efficiently to investigate Ni where the fluctuation in the magnitude of the local magnetic moments is of importance equal to their directional fluctuations.
We will present our results for Ni, comparing calculations that consider only the moment directions with those that also include fluctuations of the magnetic moment magnitude, and their effect on the Curie temperature. This research was sponsored by the Department of Energy, Offices of Basic Energy Sciences and Advanced Computing. We used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory, supported by US DOE under contract DE-AC05-00OR22725.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/10117953','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/10117953"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Matsuta, K.; Fukuda, M.; Tanigaki, M.</p> <p></p> <p>The magnetic moment of the proton drip-line nucleus <sup>9</sup>C (I<sup>π</sup> = 3/2<sup>-</sup>, T<sub>1/2</sub> = 126 ms) has been measured for the first time, using the β-NMR detection technique with polarized radioactive beams. The measured value for the magnetic moment is |μ(<sup>9</sup>C)| = 1.3914 ± 0.0005 μ<sub>N</sub>.
The deduced spin expectation value ⟨σ⟩ of 1.44 is unusually large compared with those of other even-odd nuclei.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_9 --> <div id="page_10" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="181"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012GeoJI.191..257O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012GeoJI.191..257O"><span>Centroid-moment tensor inversions using high-rate GPS waveforms</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.</p> <p>2012-10-01</p> <p>Displacement time-series recorded by Global Positioning System (GPS) receivers are a new type of near-field waveform
observation of the seismic source. We have developed an inversion method which enables the recovery of an earthquake's mechanism and centroid coordinates from such data. Our approach is identical to that of the 'classical' Centroid-Moment Tensor (CMT) algorithm, except that we forward model the seismic wavefield using a method that is amenable to the efficient computation of synthetic GPS seismograms and their partial derivatives. We demonstrate the validity of our approach by calculating CMT solutions using 1 Hz GPS data for two recent earthquakes in Japan. These results are in good agreement with independently determined source models of these events. With wider availability of data, we envisage the CMT algorithm providing a tool for the systematic inversion of GPS waveforms, as is already the case for teleseismic data. Furthermore, this general inversion method could equally be applied to other near-field earthquake observations such as those made using accelerometers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4005072','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4005072"><span>Improved GSO Optimized ESN Soft-Sensor Model of Flotation Process Based on Multisource Heterogeneous Information Fusion</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Wang, Jie-sheng; Han, Shuang; Shen, Na-na</p> <p>2014-01-01</p> <p>For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN)-based fusion soft-sensor model optimized by an improved glowworm swarm optimization (GSO) algorithm is proposed. 
Firstly, the color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, extracting the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..327e2022K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..327e2022K"><span>Combined Optimal Control System for excavator electric drive</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kurochkin, N. S.; Kochetkov, V. P.; Platonova, E. V.; Glushkin, E. Y.; Dulesov, A. S.</p> <p>2018-03-01</p> <p>The article presents a synthesis of combined optimal control algorithms for the AC drive of the excavator rotation mechanism. The synthesis consists in regulating the external coordinates, based on the theory of optimal systems, and correcting the internal coordinates of the electric drive using the "technical optimum" method. The research shows the advantage of optimal combined control systems for the electric rotary drive over classical systems of subordinate regulation. 
The paper presents a method for selecting the optimality criterion coefficients so as to find the intersection of the ranges of permissible values of the coordinates of the control object. The system can be tuned by choosing the optimality criterion coefficients, which allows one to select the required characteristics of the drive: the dynamic moment (M) and the time of the transient process (tpp). The use of combined optimal control systems made it possible to significantly reduce the maximum value of the dynamic moment (M) and, at the same time, to reduce the transient time (tpp).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.usgs.gov/sir/2017/5038/sir20175038.pdf','USGSPUBS'); return false;" href="https://pubs.usgs.gov/sir/2017/5038/sir20175038.pdf"><span>Application of at-site peak-streamflow frequency analyses for very low annual exceedance probabilities</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.</p> <p>2017-07-17</p> <p>The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (1 × 10⁻³ in scientific notation, or 10⁻³ for brevity). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. 
Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10⁻³ to 10⁻⁶. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEP are depicted and quantified as two primary forms: quantile (aleatoric [random sampling] uncertainty) and distribution-choice (epistemic [model] uncertainty). Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions for which divergence among the choices increases as AEP decreases. 
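The extrapolation workflow described above can be sketched in a few lines. This is a minimal method-of-moments fit of a Pearson type III distribution to log-transformed peaks, evaluated at very low AEPs; it is not the EMA or the USGS-PeakFQ implementation, and the flow values below are invented illustrative data:

```python
# Hedged sketch: method-of-moments log-Pearson III fit and extrapolation
# to very low annual exceedance probabilities (AEPs). Illustrative only.
import numpy as np
from scipy import stats

# Invented annual peak flows (not USGS data).
peaks = np.array([1200., 950., 2100., 1800., 760., 1430., 3100.,
                  990., 1650., 2400., 880., 1250., 2900., 1100.])
logq = np.log10(peaks)

# Sample moments of the base-10 logs define the Pearson III parameters.
mean, std = logq.mean(), logq.std(ddof=1)
skew = stats.skew(logq, bias=False)

# Quantiles for progressively rarer AEPs; extrapolation well beyond the data.
aeps = (1e-2, 1e-3, 1e-4, 1e-5, 1e-6)
quantiles = [10 ** stats.pearson3.ppf(1.0 - a, skew, loc=mean, scale=std)
             for a in aeps]
for a, q in zip(aeps, quantiles):
    print(f"AEP {a:.0e}: estimated peak flow {q:.0f}")
```

As the abstract notes, the divergence between candidate distributions grows as AEP shrinks, so quantiles this far in the tail carry large distribution-choice uncertainty.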
Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution-choice uncertainty is larger than sampling uncertainty for very low AEP values.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25435867','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25435867"><span>Robust optimization model and algorithm for railway freight center location problem in uncertain environment.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong</p> <p>2014-01-01</p> <p>The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Because the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed. The robust optimization model takes the expected cost and the deviation value of the scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented, which combines an adaptive clonal selection algorithm with the Cloud Model to improve the convergence rate. The coding design and the flow of the algorithm are described. Results of an example demonstrate that the model and algorithm are effective. 
Compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2674917','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2674917"><span>Parameter expansion for estimation of reduced rank covariance matrices (Open Access publication)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Meyer, Karin</p> <p>2008-01-01</p> <p>Parameter expanded and standard expectation maximisation algorithms are described for reduced rank estimation of covariance matrices by restricted maximum likelihood, fitting the leading principal components only. Convergence behaviour of these algorithms is examined for several examples and contrasted to that of the average information algorithm, and implications for practical analyses are discussed. It is shown that expectation maximisation type algorithms are readily adapted to reduced rank estimation and converge reliably. However, as is well known for the full rank case, the convergence is linear and thus slow. Hence, these algorithms are most useful in combination with the quadratically convergent average information algorithm, in particular in the initial stages of an iterative solution scheme. 
PMID:18096112</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015SPIE.9662E..2KW','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015SPIE.9662E..2KW"><span>FPGA based charge acquisition algorithm for soft x-ray diagnostics system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wojenski, A.; Kasprowicz, G.; Pozniak, K. T.; Zabolotny, W.; Byszuk, A.; Juszczyk, B.; Kolasinski, P.; Krawczyk, R. D.; Zienkiewicz, P.; Chernyshova, M.; Czarski, T.</p> <p>2015-09-01</p> <p>Soft X-ray (SXR) measurement systems working in tokamaks or with laser-generated plasma can expect high photon fluxes. It is therefore necessary to focus on data processing algorithms to achieve the best possible efficiency in terms of processed photon events per second. This paper describes a recently designed algorithm and data flow for the implementation of charge data acquisition in an FPGA. The algorithms are currently at the implementation stage for the soft X-ray diagnostics system. Besides the charge processing algorithm, the paper also gives a general firmware overview and describes data storage methods and other key components of the measurement system. The simulation section presents the algorithm's performance and the expected maximum photon rate.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JNEng..14b6011B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JNEng..14b6011B"><span>Neuromusculoskeletal model self-calibration for on-line sequential bayesian moment estimation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bueno, Diana R.; Montano, L.</p> <p>2017-04-01</p> <p>Objective. 
Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization, which is time-consuming and unfeasible for rehabilitation therapy. No self-calibration algorithms have been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments) and also to perform full self-calibration (estimating the subject-specific internal parameters of the muscle from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model, as well as its calibration problem, obliged us to adopt the sum-of-Gaussians filter suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. 
Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PhDT.......220R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PhDT.......220R"><span>Near real-time estimation of the seismic source parameters in a compressed domain</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rodriguez, Ismael A. Vera</p> <p></p> <p>Seismic events can be characterized by their origin time, location and moment tensor. Fast estimation of these source parameters is important in areas of geophysics such as earthquake seismology, and in the monitoring of seismic activity produced by volcanoes, mining operations and hydraulic injections in geothermal and oil and gas reservoirs. Most available monitoring systems estimate the source parameters in a sequential procedure: first determining origin time and location (e.g., epicentre, hypocentre or centroid of the stress glut density), and then using this information to initialize the evaluation of the moment tensor. A more efficient estimation of the source parameters requires a concurrent evaluation of the three variables. The main objective of the present thesis is to address the simultaneous estimation of origin time, location and moment tensor of seismic events. The proposed method has the benefits of being (1) automatic, (2) continuous and, depending on the scale of application, (3) able to provide results in real time or near real time. The inversion algorithm is based on theoretical results from sparse representation theory and compressive sensing. The feasibility of implementation is determined through the analysis of synthetic and real data examples. 
The numerical experiments focus on the microseismic monitoring of hydraulic fractures in oil and gas wells; however, an example using real earthquake data is also presented for validation. The thesis is complemented with a resolvability analysis of the moment tensor. The analysis targets common monitoring geometries employed in hydraulic fracturing in oil wells. Additionally, an application of sparse representation theory to the denoising of one-component and three-component microseismicity records is presented, together with an algorithm for improved automatic time-picking using non-linear inversion constraints.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28285744','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28285744"><span>Dynamic Time Warping compared to established methods for validation of musculoskeletal models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael</p> <p>2017-04-11</p> <p>By means of multi-body musculoskeletal simulation, important variables that cannot be measured directly, such as internal joint forces and moments, can be estimated. Validation can proceed by qualitative or by quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well, and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, which can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE and normalized RMSE. 
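The core of DTW is a small dynamic program over all monotone alignments of the two signals. A minimal sketch (illustrative only, not the authors' implementation; the phase-shifted sine signals are invented stand-ins for measured and simulated joint moments):

```python
# Hedged sketch: classic O(n*m) dynamic time warping distance between
# two 1-D signals, with absolute difference as the local cost.
import numpy as np

def dtw_distance(x, y):
    """Return the minimal DTW alignment cost between sequences x and y."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the match / insertion / deletion paths.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

t = np.linspace(0.0, 2.0 * np.pi, 50)
measured = np.sin(t)
simulated = np.sin(t - 0.3)               # phase-shifted copy of the signal
print(dtw_distance(measured, measured))   # identical signals -> 0.0
print(dtw_distance(measured, simulated))
```

Because warping absorbs the phase shift, the DTW cost of the shifted pair is never larger than the point-by-point (diagonal-only) difference, which is exactly the phase-tolerance property the study exploits.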
This study is based on two data sets, where one data set represents direct validation and the other represents indirect validation. Direct validation was performed in the context of clinical gait analysis on trans-femoral amputees fitted with a six-component force-moment sensor. Measured forces and moments from the amputees' socket prosthesis are compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects, with measurements taken of seven muscles of the leg, which were compared to simulated muscle activations. Regarding direct validation, a positive linear relation can be seen between the results of RMSE and nRMSE and those of DTW. For indirect validation, a negative linear relation exists between Pearson correlation and cross-correlation. We propose the DTW algorithm for use in both direct and indirect quantitative validation, as it correlates well with the methods that are most suitable for each of the tasks. However, in direct validation it should be used together with methods that yield a dimensional error value, so that results can be interpreted more comprehensibly. Copyright © 2017 Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPSJ...87c3710A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPSJ...87c3710A"><span>Rare-Earth Fourth-Order Multipole Moment in Cubic ErCo2 Probed by Linear Dichroism in Core-Level Photoemission</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Abozeed, Amina A.; Kadono, Toshiharu; Sekiyama, Akira; Fujiwara, Hidenori; Higashiya, Atsushi; Yamasaki, Atsushi; Kanai, Yuina; Yamagami, Kohei; Tamasaku, Kenji; Yabashi, Makina; Ishikawa, Tetsuya; Andreev, Alexander V.; Wada, Hirofumi; Imada, Shin</p> <p>2018-03-01</p> <p>We developed a method to experimentally quantify the fourth-order multipole moment of the rare-earth 4f orbital. Linear dichroism (LD) in the Er 3d5/2 core-level photoemission spectra of cubic ErCo2 was measured using bulk-sensitive hard X-ray photoemission spectroscopy. Theoretical calculation reproduced the observed LD, showing that the observation is consistent with the suggested Γ8(3) ground state. Theoretical calculation further showed a linear relationship between the size of the LD and the size of the fourth-order multipole moment of the Er3+ ion, which is proportional to the expectation value ⟨O₄⁰ + 5O₄⁴⟩, where the Oₙᵐ are Stevens operators. 
These analyses indicate that the LD in 3d photoemission spectra can be used to quantify the average fourth-order multipole moment of rare-earth atoms in a cubic crystal electric field.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=JAZZ&id=EJ1072958','ERIC'); return false;" href="https://eric.ed.gov/?q=JAZZ&id=EJ1072958"><span>Professional Notes: Creativity in the Jazz Ensemble--Let's Get away from the Written Jazz Solo</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Larson, Robert</p> <p>2015-01-01</p> <p>Performing jazz offers students the opportunity to participate in a unique group activity where notated passages are blended with exciting moments of improvisation. Expectations have risen over the years as middle and high school jazz ensembles have proved that they can perform at a very high level. This expectation, however, has led to the…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JHyd..545..197C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JHyd..545..197C"><span>Comparison of methods for non-stationary hydrologic frequency analysis: Case study using annual maximum daily precipitation in Taiwan</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Po-Chun; Wang, Yuan-Heng; You, Gene Jiing-Yun; Wei, Chih-Chiang</p> <p>2017-02-01</p> <p>Future climatic conditions will likely not satisfy the stationarity assumption. To address this concern, this study applied three methods to analyze non-stationarity in hydrologic conditions. 
Based on the principle of identifying distribution and trends (IDT) with time-varying moments, we employed parametric weighted least squares (WLS) estimation in conjunction with the non-parametric discrete wavelet transform (DWT) and ensemble empirical mode decomposition (EEMD). Our aim was to evaluate the applicability of non-parametric approaches compared with traditional parameter-based methods. In contrast to most previous studies, which analyzed the non-stationarity of first moments, we incorporated second-moment analysis. Through the estimation of long-term risk, we were able to examine the behavior of return periods under two different definitions: the reciprocal of the exceedance probability of occurrence and the expected recurrence time. The proposed framework represents an improvement over stationary frequency analysis for the design of hydraulic systems. A case study was performed using precipitation data from major climate stations in Taiwan to evaluate the non-stationarity of annual maximum daily precipitation. The results demonstrate the applicability of these three methods in the identification of non-stationarity. For most cases, no significant differences were observed with regard to the trends identified using WLS, DWT, and EEMD. According to the results, a linear model should be able to capture time-variance in either the first or second moment, while parabolic trends should be used with caution due to their characteristic rapid increases. It is also observed that local variations in precipitation tend to be overemphasized by DWT and EEMD. The two definitions provided for the concept of return period allow for ambiguous interpretation. With the consideration of non-stationarity, the return period is relatively small under the definition of expected recurrence time compared to the estimation using the reciprocal of the exceedance probability of occurrence. 
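The contrast between the two return-period definitions can be sketched numerically. The hazard model below (a linearly increasing exceedance probability) is an invented illustration, not fitted to the Taiwan precipitation data:

```python
# Hedged sketch: two return-period definitions under a non-stationary,
# year-varying exceedance probability p(t). Illustrative numbers only.

def expected_recurrence_time(p, horizon=100000):
    """Expected waiting time (in years) to the first exceedance,
    E[T] = sum_t t * P(no exceedance before t) * p(t)."""
    total, survival = 0.0, 1.0
    for t in range(1, horizon + 1):
        pt = p(t)
        total += t * survival * pt
        survival *= (1.0 - pt)
    return total

p0, trend = 0.01, 1e-4                      # assumed baseline AEP and trend
p = lambda t: min(p0 + trend * t, 1.0)      # risk increases with time

reciprocal = 1.0 / p(1)                     # definition 1: 1 / exceedance prob.
recurrence = expected_recurrence_time(p)    # definition 2: expected waiting time
print(reciprocal, recurrence)
```

With increasing risk the expected recurrence time falls below the reciprocal of the current exceedance probability, matching the abstract's observation; with a constant p the second definition collapses to the familiar geometric mean 1/p.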
However, the calculation of expected recurrence time is based on the assumption of perfect knowledge of long-term risk, which involves high uncertainty. When the risk is decreasing with time, the expected recurrence time can diverge, making this definition of the return period inapplicable for engineering purposes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JCoPh.340..138S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JCoPh.340..138S"><span>Efficient algorithms and implementations of entropy-based moment closures for rarefied gases</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel</p> <p>2017-07-01</p> <p>We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach to Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution, matching its inherent fine-grained parallelism to the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. 
To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFMGP53B1139M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFMGP53B1139M"><span>Unlocking the spatial inversion of large scanning magnetic microscopy datasets</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.</p> <p>2013-12-01</p> <p>Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10⁻⁴ m and magnetic moments as weak as 10⁻¹⁶ Am². These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. 
The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity, and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but their computation time has typically been prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. In the past, reducing computation time typically meant reducing the sample size or scan resolution. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to compute only interactions above a threshold, which enables the use of sparse methods through artificial sparsity. To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, so named because it is a "dynamite" non-negative least squares algorithm that enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours, and that inverse solution accuracy is improved because the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measurements of multiple synthetic and natural samples, we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. 
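The shape of such a spatial-domain inversion can be illustrated with a toy problem solved by SciPy's basic Lawson-Hanson NNLS routine (this is not the TNT algorithm; the 1-D geometry, 1/r³-like kernel, sensor height, and source values are all invented for illustration):

```python
# Hedged sketch: non-negative inversion of a field map for dipole moment
# magnitudes, m >= 0 solving min ||G m - b||. Toy data, not microscopy data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n_sources, n_obs = 20, 60
src = np.linspace(0.0, 1.0, n_sources)     # hypothetical source positions
obs = np.linspace(0.0, 1.0, n_obs)         # hypothetical sensor positions
h = 0.05                                   # assumed sensor-to-sample distance

# Forward operator: dipole-like 1/r^3 kernel from each source to each sensor.
G = 1.0 / ((obs[:, None] - src[None, :]) ** 2 + h ** 2) ** 1.5

m_true = np.zeros(n_sources)
m_true[[4, 11, 15]] = [2.0, 1.0, 3.0]      # a few magnetized grains
b = G @ m_true + 1e-3 * rng.standard_normal(n_obs)   # noisy field map

m_est, resid = nnls(G, b)                  # non-negativity by construction
print(resid)
```

For realistic grids the dense system becomes enormous, which is exactly the bottleneck the abstract describes; the TNT algorithm addresses that scale, while this sketch only shows why non-negativity comes for free in the spatial domain.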
Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24109456','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24109456"><span>Mathematical biomarkers for the autonomic regulation of cardiovascular system.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Campos, Luciana A; Pereira, Valter L; Muralikrishna, Amita; Albarwani, Sulayma; Brás, Susana; Gouveia, Sónia</p> <p>2013-10-07</p> <p>Heart rate and blood pressure are the most important vital signs in diagnosing disease. Both heart rate and blood pressure are characterized by a high degree of short term variability from moment to moment, medium term over the normal day and night as well as in the very long term over months to years. The study of new mathematical algorithms to evaluate the variability of these cardiovascular parameters has a high potential in the development of new methods for early detection of cardiovascular disease, to establish differential diagnosis with possible therapeutic consequences. The autonomic nervous system is a major player in the general adaptive reaction to stress and disease. The quantitative prediction of the autonomic interactions in multiple control loops pathways of cardiovascular system is directly applicable to clinical situations. Exploration of new multimodal analytical techniques for the variability of cardiovascular system may detect new approaches for deterministic parameter identification. 
Cardiovascular signals can be analyzed multimodally by evaluating their amplitudes, phases, time domain patterns, and sensitivity to imposed stimuli, e.g., drugs blocking the autonomic system. The causal effects, gains, and dynamic relationships may be studied through dynamical fuzzy logic models, such as the discrete-time model and the discrete-event model. We expect an increase in modeling accuracy and a better estimation of the heart rate and blood pressure time series, which could benefit intelligent patient monitoring. We foresee that identifying quantitative mathematical biomarkers for the autonomic nervous system will allow individual therapy adjustments to aim at the most favorable sympathetic-parasympathetic balance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.G51B1093W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.G51B1093W"><span>Coseismic and Afterslip Model Related to 25 April 2015, Mw7.8 Gorkha, Nepal Earthquake and its Potential Future Risk Regions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, S.; Xu, C.; Jiang, G.</p> <p>2016-12-01</p> <p>Evidence from geologic, geophysical and geomorphic observations shows that the 2015 Mw 7.8 Gorkha (Nepal) earthquake occurred on the two-ramp-flat fault structure of the Main Himalayan Thrust (MHT). 
We approximated this more realistic fault model by a smooth curved fault surface, derived by a hybrid iterative inversion algorithm (HIIA) with additional constraints from coseismic geodetic data. The coseismic slip distribution of the 2015 Gorkha earthquake was then imaged on this curved fault model with variably sized triangular elements. The inverted maximum thrust and right-lateral slip components are 6 and 1.5 m, respectively, with the maximum slip magnitude of 6.2 m located at a depth of 15 km. The released seismic moment derived from our best slip model is 8.58×10²⁰ Nm, equivalent to a moment magnitude of Mw 7.89. We find two interesting tongue-shaped slip areas, with a maximum slip of about 1.5 m, along the up-dip of the fault plane, which taper off at a depth of 7 km; the up-dip propagation of ruptures may have been impeded by the complicated geometric structures on the MHT interface. The Coulomb failure stress (CFS) change triggered by our optimal slip model indicates a potential shallower rupture in the future. Considering the distribution of historical earthquakes and the strain and strain gradient calculated before this earthquake, future earthquakes are expected to occur in the areas northwest of the epicenter. The spatio-temporal afterslip model over the first 180 days following the Mw 7.8 main shock was inferred from the post-seismic GPS time series. One significant afterslip region can be observed downdip of the regions ruptured by coseismic slip. Another afterslip region, which drew our attention, is located at about 40 km depth, with a slip amplitude of about 180 mm, but tapers off at a depth of 50 km. Moreover, afterslip mainly occurred within 100 days after the 2015 Gorkha earthquake. Under the assumption of a rigidity modulus μ = 30 GPa, the seismic moment released by afterslip corresponds to 8.0×10¹⁹ Nm, an equivalent moment magnitude of Mw 7.23. 
Our coseismic and afterslip models are in line with previous studies, but with a more accurate geometric fault model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1744021','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1744021"><span>Crisis management during anaesthesia: the development of an anaesthetic crisis management manual</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Runciman, W; Kluger, M; Morris, R; Paix, A; Watterson, L; Webb, R</p> <p>2005-01-01</p> <p>Background: All anaesthetists have to handle life threatening crises with little or no warning. However, some cognitive strategies and work practices that are appropriate for speed and efficiency under normal circumstances may become maladaptive in a crisis. It was judged in a previous study that the use of a structured "core" algorithm (based on the mnemonic COVER ABCD–A SWIFT CHECK) would diagnose and correct the problem in 60% of cases and provide a functional diagnosis in virtually all of the remaining 40%. It was recommended that specific sub-algorithms be developed for managing the problems underlying the remaining 40% of crises and assembled in an easy-to-use manual. Sub-algorithms were therefore developed for these problems so that they could be checked for applicability and validity against the first 4000 anaesthesia incidents reported to the Australian Incident Monitoring Study (AIMS). Methods: The need for 24 specific sub-algorithms was identified. Teams of practising anaesthetists were assembled and sets of incidents relevant to each sub-algorithm were identified from the first 4000 reported to AIMS. 
Based largely on successful strategies identified in these reports, a set of 24 specific sub-algorithms was developed for trial against the 4000 AIMS reports and assembled into an easy-to-use manual. A process was developed for applying each component of the core algorithm COVER at one of four levels (scan-check-alert/ready-emergency) according to the degree of perceived urgency, and incorporated into the manual. The manual was disseminated at a World Congress and feedback was obtained. Results: Each of the 24 specific crisis management sub-algorithms was tested against the relevant incidents among the first 4000 reported to AIMS and compared with the actual management by the anaesthetist at the time. It was judged that, had the core algorithm been correctly applied and the appropriate sub-algorithm used, one in eight of all incidents would have been resolved better and/or faster, and harm to the patient would have been unlikely. The descriptions of the validation of each of the 24 sub-algorithms constitute the remaining 24 papers in this set. Feedback from five meetings, each attended by 60–100 anaesthetists, was then collated and is included. Conclusion: The 24 sub-algorithms developed form the basis for a rational evidence-based approach to crisis management during anaesthesia. The COVER component has been found to be satisfactory in real-life resuscitation situations and the sub-algorithms have been used successfully for several years. It would now be desirable for carefully designed simulator-based studies, using naive trainees at the start of their training, to systematically examine the merits and demerits of various aspects of the sub-algorithms. It would seem prudent that these sub-algorithms be regarded, for the moment, as decision aids to support and back up clinicians' natural responses to a crisis when all is not progressing as expected. 
PMID:15933282</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_10 --> <div id="page_11" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="201"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10696E..14B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10696E..14B"><span>Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Borodinov, A. A.; Myasnikov, V. 
V.</p> <p>2018-04-01</p> <p>The present work is devoted to comparing the accuracy of known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image by the method of image moments and by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used to reduce the dimensionality. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100018541','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100018541"><span>Model Checking with Edge-Valued Decision Diagrams</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Roux, Pierre; Siminiceanu, Radu I.</p> <p>2010-01-01</p> <p>We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library. We provide efficient algorithms for manipulating EVMDDs and review the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention of representing in one formalism the best techniques currently available across a spectrum of existing tools. 
Compared to the CUDD package, our tool is several orders of magnitude faster.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4764090','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4764090"><span>Agarra el momento/seize the moment: Developing communication activities for a drug prevention intervention with and for Latino families in the US Southwest</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ayón, Cecilia; Baldwin, Adrienne; Umaña-Taylor, Adriana J; Marsiglia, Flavio F; Harthun, Mary</p> <p>2015-01-01</p> <p>This article presents the development of parent–child communication activities by applying Community-Based Participatory Research and focus group methodology. Three parent–child communication activities were developed to enhance an already efficacious parenting intervention: (1) agarra el momento or seize the moment uses everyday situations to initiate conversations about substance use, (2) hay que adelantarnos or better sooner than later stresses being proactive about addressing critical issues with youth, and (3) setting rules and expectations engages parents in establishing rules and expectations for healthy and effective conversations with youth. Focus group data are presented to illustrate how thematic content from the focus groups was used to inform the development of the activities and, furthermore, how such methods supported the development of a culturally grounded intervention. 
PMID:26924943</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009JInst...4.9007K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009JInst...4.9007K"><span>Monitoring of bone regeneration process by means of texture analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kokkinou, E.; Boniatis, I.; Costaridou, L.; Saridis, A.; Panagiotopoulos, E.; Panayiotakis, G.</p> <p>2009-09-01</p> <p>An image analysis method is proposed for monitoring the regeneration of the tibial bone. For this purpose, 130 digitized radiographs of 13 patients, who had undergone tibial lengthening by the Ilizarov method, were studied. For each patient, 10 radiographs, taken at an equal number of successive postoperative time points, were available. Employing available software, 3 Regions Of Interest (ROIs), corresponding to (a) the upper, (b) the central, and (c) the lower aspect of the gap, where bone regeneration was expected to occur, were determined on each radiograph. Employing custom developed algorithms: (i) a number of textural features were generated from each of the ROIs, and (ii) a texture-feature based regression model was designed for the quantitative monitoring of the bone regeneration process. Statistically significant differences (p < 0.05) were derived for the initial and the final textural feature values, generated from the first and the last postoperatively obtained radiographs, respectively. A quadratic polynomial regression equation fitted the data adequately (r² = 0.9, p < 0.001). 
The suggested method may contribute to the monitoring of the tibial bone regeneration process.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20090032101','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20090032101"><span>Estimating Thruster Impulses From IMU and Doppler Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lisano, Michael E.; Kruizinga, Gerhard L.</p> <p>2009-01-01</p> <p>A computer program implements a thrust impulse measurement (TIM) filter, which processes data on changes in velocity and attitude of a spacecraft to estimate the small impulsive forces and torques exerted by the thrusters of the spacecraft reaction control system (RCS). The velocity-change data are obtained from line-of-sight velocity data from Doppler measurements made from the Earth. The attitude-change data are telemetered from an inertial measurement unit (IMU) aboard the spacecraft. The TIM filter estimates the three-axis thrust vector for each RCS thruster, thereby enabling reduction of cumulative navigation error attributable to inaccurate prediction of thrust vectors. The filter has been augmented with a simple mathematical model to compensate for large temperature fluctuations in the spacecraft thruster catalyst bed in order to estimate thrust more accurately at deadbanding cold-firing levels. Also, rigorous consider-covariance estimation is applied in the TIM to account for the expected uncertainty in the moment of inertia and the location of the center of gravity of the spacecraft. 
The TIM filter was built with, and depends upon, a sigma-point consider-filter algorithm implemented in a Python-language computer program.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014PhRvE..90e0801R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014PhRvE..90e0801R"><span>Underestimating extreme events in power-law behavior due to machine-dependent cutoffs</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Radicchi, Filippo</p> <p>2014-11-01</p> <p>Power-law distributions are typical macroscopic features occurring in almost all complex systems observable in nature. As a result, researchers in quantitative analyses must often generate random synthetic variates obeying power-law distributions. The task is usually performed through standard methods that map uniform random variates into the desired probability space. Whereas all these algorithms are theoretically solid, in this paper we show that they are subject to severe machine-dependent limitations. As a result, two dramatic consequences arise: (i) the sampling in the tail of the distribution is not random but deterministic; (ii) the moments of the sample distribution, which are theoretically expected to diverge as functions of the sample sizes, converge instead to finite values. We provide quantitative indications for the range of distribution parameters that can be safely handled by standard libraries used in computational analyses. 
Whereas our findings indicate possible reinterpretations of numerical results obtained through flawed sampling methodologies, they also pave the way for the search for a concrete solution to this central issue shared by all quantitative sciences dealing with complexity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004AGUSM.H33A..10T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004AGUSM.H33A..10T"><span>Identification of PARMA Models and Their Application to the Modeling of River flows</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tesfaye, Y. G.; Meerschaert, M. M.; Anderson, P. L.</p> <p>2004-05-01</p> <p>The generation of synthetic river flow samples that can reproduce the essential statistical features of historical river flows is essential to the planning, design and operation of water resource systems. Most river flow series are periodically stationary; that is, their mean and covariance functions are periodic with respect to time. We employ a periodic ARMA (PARMA) model. The innovation algorithm can be used to obtain parameter estimates for PARMA models with finite fourth moment as well as infinite fourth moment but finite variance. Anderson and Meerschaert (2003) provide a method for model identification when the time series has finite fourth moment. This article, an extension of the previous work by Anderson and Meerschaert, demonstrates the effectiveness of the technique using simulated data. 
An application to monthly flow data for the Fraser River in British Columbia is also included to illustrate the use of these methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20040058117','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20040058117"><span>Band-Moment Compression of AVIRIS Hyperspectral Data and its Use in the Detection of Vegetation Stress</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Estep, L.; Davis, B.</p> <p>2001-01-01</p> <p>A remote sensing campaign was conducted over a U.S. Department of Agriculture test farm at Shelton, Nebraska. An experimental field was set off in plots that were differentially treated with anhydrous ammonia. Four replicates of 0-kg/ha to 200-kg/ha plots, in 50-kg/ha increments, were set out in a random block design. Low-altitude (GSD of 3 m) Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral data were collected over the site in 224 bands. Simultaneously, ground data were collected to support the airborne imagery. In an effort to reduce data load while maintaining or enhancing algorithm performance for vegetation stress detection, band-moment compression and analysis was applied to the AVIRIS image cube. 
The results indicated that band-moment techniques compress the AVIRIS dataset significantly while retaining the capability of detecting environmentally induced vegetation stress.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27857283','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27857283"><span>Tchebichef moment based restoration of Gaussian blurred images.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kumar, Ahlad; Paramesran, Raveendran; Lim, Chern-Loon; Dass, Sarat C</p> <p>2016-11-10</p> <p>With the knowledge of how edges vary in the presence of a Gaussian blur, a method that uses low-order Tchebichef moments is proposed to estimate the blur parameters: sigma (σ) and size (w). The differences between the Tchebichef moments of the original and the reblurred images are used as feature vectors to train an extreme learning machine for estimating the blur parameters (σ,w). The effectiveness of the proposed method to estimate the blur parameters is examined using cross-database validation. The estimated blur parameters from the proposed method are used in the split Bregman-based image restoration algorithm. A comparative analysis of the proposed method with three existing methods using all the images from the LIVE database is carried out. 
The results show that the proposed method in most of the cases performs better than the three existing methods in terms of the visual quality evaluated using the structural similarity index.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JARS...12a5019Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JARS...12a5019Y"><span>Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Hongxin; Su, Fulin</p> <p>2018-01-01</p> <p>We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the error matching pairs. After that, the target centroid is detected by regular moment. Consequently, a cost function based on correlation coefficient is adopted to analyze the motion information. 
Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19229079','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19229079"><span>Improving Zernike moments comparison for optimal similarity and rotation angle retrieval.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Revaud, Jérôme; Lavoué, Guillaume; Baskurt, Atilla</p> <p>2009-04-01</p> <p>Zernike moments constitute a powerful shape descriptor in terms of robustness and description capability. However, the classical way of comparing two Zernike descriptors only takes into account the magnitude of the moments and loses the phase information. The novelty of our approach is to take advantage of the phase information in the comparison process while still preserving the invariance to rotation. This new Zernike comparator provides a more accurate similarity measure together with the optimal rotation angle between the patterns, while keeping the same complexity as the classical approach. This angle information is of particular interest for many applications, including 3D scene understanding through images. Experiments demonstrate that our comparator outperforms the classical one in terms of similarity measure. In particular, the robustness of the retrieval against noise and geometric deformation is greatly improved. 
Moreover, the rotation angle estimation is also more accurate than state-of-the-art algorithms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16214657','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16214657"><span>Assessment of two-dimensional induced accelerations from measured kinematic and kinetic data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hof, A L; Otten, E</p> <p>2005-11-01</p> <p>A simple algorithm is presented to calculate the induced accelerations of body segments in human walking for the sagittal plane. The method essentially consists of setting up 2×4 force equations, 4 moment equations, 2×3 joint constraint equations, and two constraints related to the foot-ground interaction. Data needed for the equations are, besides masses and moments of inertia, the positions of the ankle, knee and hip. This set of equations is put in the form of an 18×18 or 20×20 matrix, the solution of which can be found by inversion. By applying input vectors related to gravity, to centripetal accelerations or to muscle moments, the 'induced' accelerations and reaction forces related to these inputs can be found separately. The method was tested for walking in one subject. 
Good agreement was found with published results obtained by much more complicated three-dimensional forward dynamic models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19790041753&hterms=TAD&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3DTAD','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19790041753&hterms=TAD&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3DTAD"><span>An Improved Theoretical Aerodynamic Derivatives Computer Program for Sounding Rockets</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Barrowman, J. S.; Fan, D. N.; Obosu, C. B.; Vira, N. R.; Yang, R. J.</p> <p>1979-01-01</p> <p>The paper outlines a Theoretical Aerodynamic Derivatives (TAD) computer program for computing the aerodynamics of sounding rockets. TAD outputs include normal force, pitching moment and rolling moment coefficient derivatives as well as center-of-pressure locations as a function of the flight Mach number. TAD is applicable to slender finned axisymmetric vehicles at small angles of attack in subsonic and supersonic flows. TAD improvement efforts include extending Mach number regions of applicability, improving accuracy, and replacement of some numerical integration algorithms with closed-form integrations. 
Key equations used in TAD are summarized and typical TAD outputs are illustrated for a second-stage Tomahawk configuration.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/20795768-hairy-strings','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/20795768-hairy-strings"><span>Hairy strings</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Sahakian, Vatche</p> <p></p> <p>Zero modes of the world-sheet spinors of a closed string can source higher order moments of the bulk supergravity fields. In this work, we analyze various configurations of closed strings focusing on the imprints of the quantized spinor vacuum expectation values onto the tails of bulk fields. We identify supersymmetric arrangements for which all multipole charges vanish; while for others, we find that one is left with Neveu-Schwarz-Neveu-Schwarz, and Ramond-Ramond dipole and quadrupole moments. Our analysis is exhaustive with respect to all the bosonic fields of the bulk and to all higher order moments. 
We comment on the relevance of these results to entropy computations of hairy black holes of a single charge or more, and to open/closed string duality.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20070034987&hterms=microscopy&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dmicroscopy','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20070034987&hterms=microscopy&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dmicroscopy"><span>Paleomagnetic Analysis Using SQUID Microscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Weiss, Benjamin P.; Lima, Eduardo A.; Fong, Luis E.; Baudenbacher, Franz J.</p> <p>2007-01-01</p> <p>Superconducting quantum interference device (SQUID) microscopes are a new generation of instruments that map magnetic fields with unprecedented spatial resolution and moment sensitivity. Unlike standard rock magnetometers, SQUID microscopes map magnetic fields rather than measuring magnetic moments, so the sample magnetization pattern must be retrieved from source model fits to the measured field data. In this paper, we present the first direct comparison between paleomagnetic analyses on natural samples using joint measurements from SQUID microscopy and moment magnetometry. We demonstrate that, in combination with a priori geologic and petrographic data, SQUID microscopy can accurately characterize the magnetization of lunar glass spherules and Hawaiian basalt. The bulk moment magnitude and direction of these samples inferred from inversions of SQUID microscopy data match direct measurements on the same samples using moment magnetometry. In addition, these inversions provide unique constraints on the magnetization distribution within the sample. 
These measurements are among the most sensitive and highest-resolution quantitative paleomagnetic studies of natural remanent magnetization to date. We expect that this technique will be able to extend many other standard paleomagnetic techniques to previously inaccessible microscale samples.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1343168-empirical-moments-inertia-axially-asymmetric-nuclei','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1343168-empirical-moments-inertia-axially-asymmetric-nuclei"><span>Empirical moments of inertia of axially asymmetric nuclei</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Allmond, J. M.; Wood, J. L.</p> <p></p> <p>We extracted empirical moments of inertia, J1, J2, J3, of atomic nuclei with E(4⁺₁)/E(2⁺₁) > 2.7 from experimental 2⁺_{g,γ} energies and electric quadrupole matrix elements, determined from multi-step Coulomb excitation data, and the results are compared to expectations based on rigid and irrotational inertial flow. Only by having the signs of the E2 matrix elements, i.e., ⟨2⁺_g||M(E2)||2⁺_g⟩ and ⟨0⁺_g||M(E2)||2⁺_g⟩⟨2⁺_g||M(E2)||2⁺_γ⟩⟨2⁺_γ||M(E2)||0⁺_g⟩, can a unique solution to all three components of the inertia tensor of an asymmetric top be obtained. And while the absolute moments of inertia fall between the rigid and irrotational values as expected, the relative moments of inertia appear to be qualitatively consistent with the β² sin²(γ) dependence of the Bohr Hamiltonian, which originates from an SO(5) invariance. A better understanding of inertial flow is central to improving collective models, particularly hydrodynamic-based collective models. 
The results suggest that a better description of collective dynamics and inertial flow for atomic nuclei is needed. The inclusion of vorticity degrees of freedom may provide a path forward. This is our first report of empirical moments of inertia for all three axes, and the results should challenge both collective and microscopic descriptions of inertial flow.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1992PhDT........11O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1992PhDT........11O"><span>Maneuvering a reentry body via magneto-gasdynamic forces</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ohare, Leo Patrick</p> <p>1992-04-01</p> <p>Some of the characteristics of the interaction of an electrically conducting fluid with a non-uniform applied magnetic field and a potential magnetogasdynamic control system which may be used on future aerospace vehicles are presented. The flow through a two-dimensional channel is predicted by numerically solving the magnetogasdynamic equations using a time-marching technique. The fluid was modeled as a compressible, inviscid, supersonic gas with finite electrical conductivity. Development of the algorithm provided a means to predict and analyze phenomena associated with magnetogasdynamic flows which had not been previously explored using numerical methods. One such phenomenon was the prediction of oblique waves resulting from the interaction of an electrically conducting fluid with a non-uniform applied magnetic field. Development of this tool provided a means to explore an application which might have potential use for future aerospace vehicle missions. 
In order to appreciate the significance of this technology, predictions were made of the pitching moment about a slender blunted cone, generated by a system relying on the fluid-magnetic interaction. These moments were compared to predictions of a pitching moment generated by a deflecting control surface on the same vehicle. It was shown that the proposed magnetogasdynamic system could produce moments which were on the same order as the moments produced by the flap systems at low deflection angles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4379116','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4379116"><span>Prediction of Kinematic and Kinetic Performance in a Drop Vertical Jump with Individual Anthropometric Factors in Adolescent Female Athletes: Implications for Cadaveric Investigations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bates, Nathaniel A.; Myer, Gregory D.; Hewett, Timothy E.</p> <p>2014-01-01</p> <p>Anterior cruciate ligament injuries are common, expensive to repair, and often debilitate athletic careers. Robotic manipulators have evaluated knee ligament biomechanics in cadaveric specimens, but face limitations such as accounting for variation in bony geometry between specimens that may influence dynamic motion pathways. This study examined individual anthropometric measures for significant linear relationships with in vivo kinematic and kinetic performance and determined their implications for robotic studies. Anthropometrics and 3D motion during a 31 cm drop vertical jump task were collected in high school female basketball players. 
Anthropometric measures demonstrated differential statistical significance in linear regression models relative to kinematic variables (P range: <0.01 to 0.95). However, none of the anthropometric relationships accounted for clinical variance or provided substantive univariate accuracy needed for clinical prediction algorithms (r2 < 0.20). Mass and BMI demonstrated models that were significant (P < 0.05) and predictive (r2 > 0.20) relative to peak flexion moment, peak adduction moment, flexion moment range, abduction moment range, and internal rotation moment range. The current findings indicate that anthropometric measures are less associated with kinematics than with kinetics. Relative to the robotic manipulation of cadaveric limbs, the results do not support the need to normalize kinematic rotations relative to specimen dimensions. PMID:25266933</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27208938','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27208938"><span>A moment-convergence method for stochastic analysis of biochemical reaction networks.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Jiajun; Nie, Qing; Zhou, Tianshou</p> <p>2016-05-21</p> <p>Traditional moment-closure methods need to assume that high-order cumulants of a probability distribution are approximately zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined in mathematics as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of the moment-closure methods. As such, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation for the master probability equation (MPE). 
In particular, the connection between low-order convergent moments and rate constants can be more easily derived in terms of explicit and analytical forms, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute steady-state or transient probability distributions, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in sizes of rate constants. Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in modeling, analysis, and computation of complex biochemical networks with intrinsic noise.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70025418','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70025418"><span>On the expected relationships among apparent stress, static stress drop, effective shear fracture energy, and efficiency</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Beeler, N.M.; Wong, T.-F.; Hickman, S.H.</p> <p>2003-01-01</p> <p>We consider expected relationships between apparent stress τa and static stress drop Δσs using a standard energy balance and find τa = Δσs (0.5 - η), where η is stress overshoot. A simple implementation of this balance is to assume overshoot is constant; then apparent stress should vary linearly with stress drop, consistent with spectral theories (Brune, 1970) and dynamic crack models (Madariaga, 1976). 
Normalizing this expression by the static stress drop defines an efficiency ηsw = τa/Δσs, as follows from Savage and Wood (1971). We use this measure of efficiency to analyze data from one of a number of observational studies that find apparent stress to increase with seismic moment, namely earthquakes recorded in the Cajon Pass borehole by Abercrombie (1995). Increases in apparent stress with event size could reflect an increase in seismic efficiency; however, ηsw for the Cajon earthquakes shows no such increase and is approximately constant over the entire moment range. Thus, apparent stress and stress drop co-vary, as expected from the energy balance at constant overshoot. The median value of ηsw for the Cajon earthquakes is four times lower than ηsw for laboratory events. Thus, these Cajon-recorded earthquakes have relatively low and approximately constant efficiency. As the energy balance requires ηsw = 0.5 - η, overshoot can be estimated directly from the Savage-Wood efficiency; overshoot is positive for Cajon Pass earthquakes. Variations in apparent stress with seismic moment for these earthquakes result primarily from systematic variations in static stress drop with seismic moment and do not require a relative decrease in sliding resistance with increasing event size (dynamic weakening). 
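The abstract's energy balance, apparent stress = static stress drop × (0.5 − overshoot), reduces the efficiency and overshoot estimates to one-line formulas. A sketch with hypothetical stress values (variable names are ours, chosen to mirror the quantities named in the abstract):

```python
def savage_wood_efficiency(apparent_stress, static_stress_drop):
    """Efficiency in the sense of Savage and Wood (1971): the ratio of
    apparent stress to static stress drop."""
    return apparent_stress / static_stress_drop

def overshoot_from_efficiency(eta_sw):
    """Rearranging the energy balance
    apparent_stress = static_stress_drop * (0.5 - overshoot)
    gives overshoot = 0.5 - efficiency."""
    return 0.5 - eta_sw
```

For example, a hypothetical event with apparent stress 0.5 MPa and static stress drop 5 MPa has efficiency 0.1 and overshoot 0.4; a positive overshoot of this kind is what the abstract reports for the Cajon Pass earthquakes.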
Based on the comparison of field and lab determinations of the Savage-Wood efficiency, we suggest the criterion ηsw > 0.3 as a test for dynamic weakening in excess of that seen in the lab.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_11 --> <div id="page_12" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="221"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28787612','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28787612"><span>Glass half-full: On-road glance metrics differentiate crashes from near-crashes in the 100-Car data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Seppelt, Bobbie D; Seaman, Sean; Lee, Joonbum; Angell, Linda S; Mehler, Bruce; Reimer, Bryan</p> <p>2017-10-01</p> 
<p>Much of the driver distraction and inattention work to date has focused on concerns over drivers removing their eyes from the forward roadway to perform non-driving-related tasks, and its demonstrable link to safety consequences when these glances are timed at inopportune moments. This extensive literature has established, through analyses of glance data from naturalistic datasets, a clear relationship between eyes-off-road, lead vehicle closing kinematics, and near-crash/crash involvement. This paper looks at the role of driver expectation in influencing drivers' decisions about when and for how long to remove their eyes from the forward roadway, in an analysis that considers the combined role of on- and off-road glances. Using glance data collected in the 100-Car Naturalistic Driving Study (NDS), near-crashes were examined separately from crashes to examine how momentary differences in glance allocation over the 25-s prior to a precipitating event can differentiate between these two distinct outcomes. Individual glance metrics of mean single glance duration (MSGD), total glance time (TGT), and glance count for off-road and on-road glance locations were analyzed. Output from the AttenD algorithm (Kircher and Ahlström, 2009) was also analyzed as a hybrid measure; in threading together on- and off-road glances over time, its output produces a pattern of glance behavior meaningful for examining attentional effects. Individual glance metrics calculated at the epoch level and binned by 10-s units of time across the available epoch lengths revealed that drivers in near-crashes have significantly longer on-road glances, and look less frequently between on- and off-road locations in the moments preceding a precipitating event as compared to crashes. During on-road glances, drivers in near-crashes were found to more frequently sample peripheral regions of the roadway than drivers in crashes. 
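The "threading together" of on- and off-road glances that makes AttenD a hybrid measure can be illustrated with a bare-bones buffer model. This is only a sketch of the core idea, assuming a 2 s attention buffer that depletes while the eyes are off road and refills while they are on road; the published algorithm adds latencies and glance-location rules omitted here, and the `(location, duration)` input format is our own:

```python
def attention_buffer(glances, dt=0.1, cap=2.0):
    """Simplified AttenD-style buffer: starts full at `cap` seconds,
    drains at 1 s/s during off-road glances, refills at 1 s/s during
    on-road glances, clipped to [0, cap]. `glances` is a sequence of
    ('on' | 'off', duration_in_seconds) pairs. Returns the final buffer
    level and its time trace; a level near zero flags degraded attention."""
    buf, trace = cap, []
    for location, duration in glances:
        t = 0.0
        while t < duration - 1e-9:
            buf += dt if location == 'on' else -dt
            buf = min(cap, max(0.0, buf))  # clip to the buffer's range
            trace.append(buf)
            t += dt
    return buf, trace
```

For instance, a 1 s off-road glance drains the buffer from 2.0 to 1.0, and a subsequent long on-road glance restores it to the cap, which is the "cumulative net benefit of longer on-road glances" the analysis describes.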
Output from the AttenD algorithm affirmed the cumulative net benefit of longer on-road glances and of improved attention management between on- and off-road locations. The finding of longer on-road glances differentiating between safety-critical outcomes in the 100-Car NDS data underscores the importance of attention management in how drivers look both on and off the road. It is in the pattern of glances to and from the forward roadway that drivers obtain the critical information necessary to inform their expectation of hazard potential to avoid a crash. This work may have important implications for attention management in the context of the increasing prevalence of in-vehicle demands as well as of vehicle automation. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.usgs.gov/sir/2018/5046/sir20185046.pdf','USGSPUBS'); return false;" href="https://pubs.usgs.gov/sir/2018/5046/sir20185046.pdf"><span>Methods for peak-flow frequency analysis and reporting for streamgages in or near Montana based on data through water year 2015</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Sando, Steven K.; McCarthy, Peter M.</p> <p>2018-05-10</p> <p>This report documents the methods for peak-flow frequency (hereinafter “frequency”) analysis and reporting for streamgages in and near Montana following implementation of the Bulletin 17C guidelines. The methods are used to provide estimates of peak-flow quantiles for 50-, 42.9-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for selected streamgages operated by the U.S. Geological Survey Wyoming-Montana Water Science Center (WY–MT WSC). 
These annual exceedance probabilities correspond to 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. Standard procedures specific to the WY–MT WSC for implementing the Bulletin 17C guidelines include (1) the use of the Expected Moments Algorithm analysis for fitting the log-Pearson Type III distribution, incorporating historical information where applicable; (2) the use of weighted skew coefficients (based on weighting at-site station skew coefficients with generalized skew coefficients from the Bulletin 17B national skew map); and (3) the use of the Multiple Grubbs-Beck Test for identifying potentially influential low flows. For some streamgages, the peak-flow records are not well represented by the standard procedures and require user-specified adjustments informed by hydrologic judgement. The specific characteristics of peak-flow records addressed by the informed-user adjustments include (1) regulated peak-flow records, (2) atypical upper-tail peak-flow records, and (3) atypical lower-tail peak-flow records. In all cases, the informed-user adjustments use the Expected Moments Algorithm fit of the log-Pearson Type III distribution using the at-site station skew coefficient, a manual potentially influential low flow threshold, or both. Appropriate methods can be applied to at-site frequency estimates to provide improved representation of long-term hydroclimatic conditions. The methods for improving at-site frequency estimates by weighting with regional regression equations and by Maintenance of Variance Extension Type III record extension are described. Frequency analyses were conducted for 99 example streamgages to indicate various aspects of the frequency-analysis methods described in this report. 
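For orientation, the core of the fit can be sketched as a simple method-of-moments log-Pearson Type III quantile estimate. This is a simplified stand-in for the Expected Moments Algorithm, which generalizes the moment fit to incorporate historical and interval data and censored low outliers (the role of the Multiple Grubbs-Beck Test); the Wilson-Hilferty approximation is used here for the Pearson III frequency factor:

```python
import math
from statistics import NormalDist

def lp3_quantile(peaks, aep):
    """Fit log-Pearson Type III to annual peak flows by sample moments of
    log10(flow) and return the flow with annual exceedance probability
    `aep` (e.g. 0.01 for the 100-year flood)."""
    logs = [math.log10(q) for q in peaks]
    n = len(logs)
    mean = sum(logs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    # sample skew coefficient of the log-transformed peaks
    g = (n * sum((x - mean) ** 3 for x in logs)) / ((n - 1) * (n - 2) * std ** 3)
    z = NormalDist().inv_cdf(1.0 - aep)  # standard normal quantile
    if abs(g) < 1e-9:
        k = z  # zero skew: log-Pearson III reduces to lognormal
    else:
        # Wilson-Hilferty approximation to the Pearson III frequency factor
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g ** 2 / 36.0) ** 3 - 1.0)
    return 10 ** (mean + k * std)
```

With a zero-skew record the estimate collapses to the lognormal quantile 10^(mean + z·std), a useful sanity check; a production analysis would instead run EMA with weighted skew and low-outlier screening, e.g. via the PeakFQ program named in the report.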
The frequency analyses and results for the example streamgages are presented in a separate data release associated with this report consisting of tables and graphical plots that are structured to include information concerning the interpretive decisions involved in the frequency analyses. Further, the separate data release includes the input files to the PeakFQ program, version 7.1, including the peak-flow data file and the analysis specification file that were used in the peak-flow frequency analyses. Peak-flow frequencies are also reported in separate data releases for selected streamgages in the Beaverhead River and Clark Fork Basins and also for selected streamgages in the Ruby, Jefferson, and Madison River Basins.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JSMTE..11.3404M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JSMTE..11.3404M"><span>Deterministic quantum annealing expectation-maximization algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki</p> <p>2017-11-01</p> <p>Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. 
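The baseline that DQAEM modifies is ordinary EM, whose sensitivity to initial configurations is the problem the paper targets. A toy sketch of plain EM for a 1-D mixture of two unit-variance, equal-weight Gaussians, estimating only the means (our own minimal setup, not the paper's code):

```python
import math

def em_two_means(data, init_means, n_iter=100):
    """Plain expectation-maximization for a two-component 1-D Gaussian
    mixture with unit variances and equal weights; only the means are
    estimated. The answer depends on `init_means`, which is the
    local-optimum issue DQAEM is designed to moderate. (exp() here is
    numerically naive but fine at this toy scale.)"""
    m0, m1 = init_means
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each point; for unit
        # variances this is a logistic function of the point's position
        # relative to the midpoint of the two current means
        r = [1.0 / (1.0 + math.exp(-(m1 - m0) * (x - (m0 + m1) / 2.0)))
             for x in data]
        # M-step: responsibility-weighted means
        s1 = sum(r)
        s0 = len(data) - s1
        m0 = sum((1.0 - ri) * x for ri, x in zip(r, data)) / s0
        m1 = sum(ri * x for ri, x in zip(r, data)) / s1
    return m0, m1
```

From a reasonable initialization bracketing the two clusters, the means converge to the cluster centers; poorly chosen initial means can instead settle on an inferior stationary point, which is the failure mode the annealed E-step of DQAEM is meant to soften.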
Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006SPIE.6057..227K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006SPIE.6057..227K"><span>Human vision-based algorithm to hide defective pixels in LCDs</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kimpe, Tom; Coulier, Stefaan; Van Hoey, Gert</p> <p>2006-02-01</p> <p>Producing displays without pixel defects or repairing defective pixels is technically not possible at this moment. This paper presents a new approach to solve this problem: defects are made invisible for the user by using image processing algorithms based on characteristics of the human eye. The performance of this new algorithm has been evaluated using two different methods. First, the theoretical response of the human eye was analyzed on a series of images, both before and after applying the defective-pixel compensation algorithm. These results show that indeed it is possible to mask a defective pixel. A second method was to perform a psycho-visual test where users were asked whether or not a defective pixel could be perceived. The results of these user tests also confirm the value of the new algorithm. Our "defective pixel correction" algorithm can be implemented very efficiently and cost-effectively as pixel-data-processing algorithms inside the display, for instance in an FPGA, a DSP or a microprocessor. 
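The general principle of such masking can be shown with a toy example, assuming a grayscale image and a single stuck-off (black) pixel: redistribute the lost luminance to the four nearest neighbors so that the local average, which dominates perception at normal viewing distance, is approximately preserved. This only illustrates the idea; the paper's actual algorithm is built on a human-visual-system model, not this naive rule:

```python
import numpy as np

def mask_dead_pixel(img, r, c):
    """Compensate a stuck-off pixel at interior position (r, c) by
    brightening its 4-neighbors so the 3x3 neighborhood keeps the same
    total luminance (up to clipping at the display maximum of 255)."""
    out = img.astype(float).copy()
    lost = out[r, c]          # luminance the dead pixel can no longer emit
    out[r, c] = 0.0           # the defect: pixel is stuck black
    neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    share = lost / len(neighbors)
    for i, j in neighbors:
        out[i, j] = min(255.0, out[i, j] + share)
    return out
```

On a uniform mid-gray patch, the dead pixel goes to zero, each neighbor gains a quarter of the lost luminance, and the 3x3 neighborhood's total is unchanged; a fixed rule like this is exactly the kind of per-pixel data processing that fits in an FPGA or DSP inside the display.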
The described techniques are also valid for both monochrome and color displays ranging from high-quality medical displays to consumer LCDTV applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3477675','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3477675"><span>Sensory prediction on a whiskered robot: a tactile analogy to “optical flow”</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Schroeder, Christopher L.; Hartmann, Mitra J. Z.</p> <p>2012-01-01</p> <p>When an animal moves an array of sensors (e.g., the hand, the eye) through the environment, spatial and temporal gradients of sensory data are related by the velocity of the moving sensory array. In vision, the relationship between spatial and temporal brightness gradients is quantified in the “optical flow” equation. In the present work, we suggest an analog to optical flow for the rodent vibrissal (whisker) array, in which the perceptual intensity that “flows” over the array is bending moment. Changes in bending moment are directly related to radial object distance, defined as the distance between the base of a whisker and the point of contact with the object. Using both simulations and a 1×5 array (row) of artificial whiskers, we demonstrate that local object curvature can be estimated based on differences in radial distance across the array. We then develop two algorithms, both based on tactile flow, to predict the future contact points that will be obtained as the whisker array translates along the object. The translation of the robotic whisker array represents the rat's head velocity. The first algorithm uses a calculation of the local object slope, while the second uses a calculation of the local object curvature. 
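The slope-based and curvature-based predictions can be sketched under simplifying assumptions: radial contact distances sampled at equally spaced whisker positions, with the "slope" strategy extrapolating linearly from the last two samples and the "curvature" strategy extrapolating quadratically from the last three via a second difference (hypothetical stand-ins for the paper's algorithms):

```python
def predict_next_slope(r):
    """Linear extrapolation from the last two radial distances:
    next = last + local slope."""
    return r[-1] + (r[-1] - r[-2])

def predict_next_curvature(r):
    """Quadratic extrapolation using the second difference (a discrete
    curvature estimate). Because the third difference of a quadratic is
    zero, r_next = 3*r[-1] - 3*r[-2] + r[-3] is exact for any parabolic
    distance profile."""
    return 3.0 * r[-1] - 3.0 * r[-2] + r[-3]
```

On a curved (parabolic) profile, the curvature-based prediction lands on the true next contact point while the linear one undershoots or overshoots, mirroring the finding that the curvature algorithm fares better as surfaces become more irregular.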
Both algorithms successfully predict future contact points for simple surfaces. The algorithm based on curvature was found to more accurately predict future contact points as surfaces became more irregular. We quantify the inter-related effects of whisker spacing and the object's spatial frequencies, and examine the issues that arise in the presence of real-world noise, friction, and slip. PMID:23097641</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23097641','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23097641"><span>Sensory prediction on a whiskered robot: a tactile analogy to "optical flow".</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Schroeder, Christopher L; Hartmann, Mitra J Z</p> <p>2012-01-01</p> <p>When an animal moves an array of sensors (e.g., the hand, the eye) through the environment, spatial and temporal gradients of sensory data are related by the velocity of the moving sensory array. In vision, the relationship between spatial and temporal brightness gradients is quantified in the "optical flow" equation. In the present work, we suggest an analog to optical flow for the rodent vibrissal (whisker) array, in which the perceptual intensity that "flows" over the array is bending moment. Changes in bending moment are directly related to radial object distance, defined as the distance between the base of a whisker and the point of contact with the object. Using both simulations and a 1×5 array (row) of artificial whiskers, we demonstrate that local object curvature can be estimated based on differences in radial distance across the array. We then develop two algorithms, both based on tactile flow, to predict the future contact points that will be obtained as the whisker array translates along the object. 
The translation of the robotic whisker array represents the rat's head velocity. The first algorithm uses a calculation of the local object slope, while the second uses a calculation of the local object curvature. Both algorithms successfully predict future contact points for simple surfaces. The algorithm based on curvature was found to more accurately predict future contact points as surfaces became more irregular. We quantify the inter-related effects of whisker spacing and the object's spatial frequencies, and examine the issues that arise in the presence of real-world noise, friction, and slip.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1196172','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1196172"><span>Fully implicit Particle-in-cell algorithms for multiscale plasma simulation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Chacon, Luis</p> <p></p> <p>The outline of the paper is as follows: Particle-in-cell (PIC) methods for fully ionized collisionless plasmas, explicit vs. implicit PIC, 1D ES implicit PIC (charge and energy conservation, moment-based acceleration), and generalization to Multi-D EM PIC: Vlasov-Darwin model (review and motivation for Darwin model, conservation properties (energy, charge, and canonical momenta), and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), an adaptive particle orbit integrator to control errors in momentum conservation, and canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities: ω_pe Δt ≫ 1 and Δx ≫ λ_D. 
It requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems. Significant CPU gains (vs. explicit PIC) have been demonstrated. The method has much potential for efficiency gains over explicit PIC in long-time-scale applications. Moment-based acceleration is effective in minimizing the number of nonlinear function evaluations (N_FE), leading to an optimal algorithm.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..370a2032S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..370a2032S"><span>Magnetic attitude control torque generation of a gravity gradient stabilized satellite</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Suhadis, N. M.; Salleh, M. B.; Rajendran, P.</p> <p>2018-05-01</p> <p>A magnetic torquer is used to generate a magnetic dipole moment onboard a satellite; a control torque for attitude control purposes is generated when this moment couples with the geomagnetic field. This technique has been considered very attractive for satellites operated in Low Earth Orbit (LEO), as the strength of the geomagnetic field is relatively high below an altitude of 1000 km. This paper presents the algorithm used to generate the required magnetic dipole moment with three magnetic torquers mounted onboard a gravity gradient stabilized satellite operated at an altitude of 540 km with a nadir-pointing mission. As the geomagnetic field cannot be altered, and its magnitude and direction vary with orbit altitude and inclination, a comparison study of attitude control torque generation performance across orbit inclinations is performed: the structured control algorithm is simulated for 13°, 33° and 53° orbit inclinations to see how the variation of the satellite orbit affects the satellite's attitude control torque generation.
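The coupling described above follows the standard relation τ = m × B: only the component of the commanded dipole perpendicular to the local geomagnetic field produces torque, which is why achievable control torque depends on orbit inclination. A minimal sketch with invented (hypothetical) numbers of roughly LEO magnitude:

```python
import numpy as np

def magnetic_control_torque(m_dipole, B_field):
    """Torque (N*m) on a magnetorquer dipole m (A*m^2) in a geomagnetic
    field B (T): tau = m x B. The component of m parallel to B is wasted."""
    return np.cross(m_dipole, B_field)

# Hypothetical values: 1 A*m^2 dipole along body z, ~40 uT-scale field.
m = np.array([0.0, 0.0, 1.0])
B = np.array([20e-6, 0.0, 35e-6])
tau = magnetic_control_torque(m, B)
print(tau)   # torque only about y; the 35 uT component parallel to m contributes nothing
```

Because τ is always perpendicular to B, full three-axis authority at any instant is impossible with magnetorquers alone; the field direction must rotate along the orbit, which is the geometric reason inclination matters in the study above.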
Results from the simulation show that a higher orbit inclination generates optimal magnetic attitude control torque for an accurate nadir-pointing mission.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1223667-investigation-semimagic-nature-tin-isotopes-through-electromagnetic-moments','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1223667-investigation-semimagic-nature-tin-isotopes-through-electromagnetic-moments"><span>Investigation into the semimagic nature of the tin isotopes through electromagnetic moments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Allmond, J. M.; Stuchbery, A. E.; Galindo-Uribarri, A.; ...</p> <p>2015-10-19</p> <p>A complete set of electromagnetic moments, B(E2; 0⁺₁ → 2⁺₁), Q(2⁺₁), and g(2⁺₁), has been measured from Coulomb excitation of semi-magic ¹¹²,¹¹⁴,¹¹⁶,¹¹⁸,¹²⁰,¹²²,¹²⁴Sn (Z = 50) on natural carbon and titanium targets. The magnitudes of the B(E2) values, measured to a precision of ~4%, disagree with a recent lifetime study [Phys. Lett. B 695, 110 (2011)] that employed the Doppler-shift attenuation method. The B(E2) values show an overall enhancement compared with recent theoretical calculations and a clear asymmetry about midshell, contrary to naive expectations. A new static electric quadrupole moment, Q(2⁺₁), has been measured for ¹¹⁴Sn. The static quadrupole moments are generally consistent with zero but reveal an enhancement near midshell; this had not been previously observed. The magnetic dipole moments are consistent with previous measurements and show a near monotonic decrease in value with neutron number. The current theory calculations fail to reproduce the electromagnetic moments of the tin isotopes.
The role of 2p-2h and 4p-4h intruders, which are lowest in energy at midshell and outside of current model spaces, needs to be investigated in the future.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA096729','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA096729"><span>A Study to Determine the Optimal Frequency for Conducting Periodic Dental Examinations (OFDEX)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1980-06-01</p> <p>as the major vehicle by which patients enter into the incremental care system. Unpublished data from studies performed by the Health Care Studies...product-moment correlation was used to measure the strength of the relationship between two variables. Significance tests for correlation coefficients...expected if no relationship existed between variables at given row and column totals. These expected frequencies are then compared to the actual values</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22622300-efficient-algorithms-implementations-entropy-based-moment-closures-rarefied-gases','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22622300-efficient-algorithms-implementations-entropy-based-moment-closures-rarefied-gases"><span>Efficient algorithms and implementations of entropy-based moment closures for rarefied gases</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel</p> <p></p> <p>We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases.
While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following an approach similar to that of Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution, mapping its inherent fine-grained parallelism onto the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first- and second-order accuracy are presented.
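The Newton iteration in the Lagrange multipliers of the dual problem, mentioned above, can be sketched on a toy 1-D, three-moment analogue. The grid, basis, and iteration count are assumptions for illustration; the paper's 35-moment system is far larger and uses specialized quadrature.

```python
import numpy as np

v = np.linspace(-8.0, 8.0, 401)               # velocity grid (quadrature nodes)
dv = v[1] - v[0]
M = np.vstack([np.ones_like(v), v, v**2])     # moment basis m(v) = (1, v, v^2)

def dual_newton(rho, alpha, iters=50):
    """Find multipliers alpha so the maximum-entropy ansatz
    f(v) = exp(alpha . m(v)) reproduces the target moments rho.
    Newton's method on the strictly convex dual objective."""
    for _ in range(iters):
        f = np.exp(M.T @ alpha)
        grad = M @ f * dv - rho               # current moments minus targets
        H = (M * f) @ M.T * dv                # dual Hessian (moment matrix)
        alpha = alpha - np.linalg.solve(H, grad)
    return alpha

# Target: moments of a unit Maxwellian (density 1, mean 0, <v^2> = 1).
rho = np.array([1.0, 0.0, 1.0])
alpha = dual_newton(rho, np.array([-1.0, 0.0, -0.5]))  # start near a Gaussian
f = np.exp(M.T @ alpha)
print(M @ f * dv)                             # ~ [1, 0, 1]: moments are matched
```

Solving in the multipliers keeps the unknown count at the number of moments rather than the number of quadrature nodes, which is the cost reduction the abstract refers to; the per-node evaluations of exp(alpha . m(v)) are the part the paper offloads to the GPU.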
We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1953n0010K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1953n0010K"><span>Nondestructive evaluation of degradation in papaya fruit using intensity based algorithms</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kumari, Shubhashri; Nirala, Anil Kumar</p> <p>2018-05-01</p> <p>In the proposed work, degradation in papaya fruit has been evaluated nondestructively using the laser biospeckle technique. The biospeckle activity inside the fruit has been evaluated qualitatively and quantitatively, from maturity through degradation, using intensity-based algorithms. The co-occurrence matrix (COM) has been used for qualitative analysis, whereas the Inertia Moment (IM), Absolute Value Difference (AVD) and autocovariance methods have been used for quantitative analysis. The biospeckle activity was found to first increase and then decrease during the five-day study period. In addition, granulometric size distribution (GSD) has also been used for the first time for the evaluation of degradation of the papaya.
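The Inertia Moment and AVD measures named above are standard biospeckle statistics derived from a co-occurrence matrix of the time history speckle pattern (THSP). A sketch follows; normalization conventions vary across the literature, so treat the row normalization here as one common choice, not the authors' exact pipeline.

```python
import numpy as np

def biospeckle_activity(thsp, levels=256):
    """Inertia Moment (IM) and Absolute Value Difference (AVD) from a
    time history speckle pattern: rows = pixels, columns = time samples.
    Successive intensity pairs in time populate the co-occurrence matrix;
    off-diagonal mass (intensity changing frame to frame) means activity."""
    com = np.zeros((levels, levels))
    a, b = thsp[:, :-1].ravel(), thsp[:, 1:].ravel()
    np.add.at(com, (a, b), 1)                      # count transitions i -> j
    row_sums = com.sum(axis=1, keepdims=True)
    com_n = np.divide(com, row_sums, out=np.zeros_like(com), where=row_sums > 0)
    i, j = np.indices(com.shape)
    im = (com_n * (i - j) ** 2).sum()              # Inertia Moment
    avd = (com_n * np.abs(i - j)).sum()            # Absolute Value Difference
    return im, avd

rng = np.random.default_rng(0)
static = np.tile(rng.integers(0, 256, (64, 1)), (1, 50))   # frozen speckle: IM = 0
active = rng.integers(0, 256, (64, 50))                    # decorrelated speckle
print(biospeckle_activity(static)[0], biospeckle_activity(active)[0])
```

A fully static pattern puts all co-occurrence counts on the diagonal (IM exactly zero), while decorrelated speckle spreads mass far off-diagonal, matching the rise-then-fall of activity the abstract reports over the fruit's five-day evolution.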
It is concluded that the degradation process of papaya fruit can be evaluated nondestructively using all the mentioned algorithms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4703372','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4703372"><span>Path statistics, memory, and coarse-graining of continuous-time random walks on networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kion-Crosby, Willow; Morozov, Alexandre V.</p> <p>2015-01-01</p> <p>Continuous-time random walks (CTRWs) on discrete state spaces, ranging from regular lattices to complex networks, are ubiquitous across physics, chemistry, and biology. Models with coarse-grained states (for example, those employed in studies of molecular kinetics) or spatial disorder can give rise to memory and non-exponential distributions of waiting times and first-passage statistics. However, existing methods for analyzing CTRWs on complex energy landscapes do not address these effects. Here we use statistical mechanics of the nonequilibrium path ensemble to characterize first-passage CTRWs on networks with arbitrary connectivity, energy landscape, and waiting time distributions. Our approach can be applied to calculating higher moments (beyond the mean) of path length, time, and action, as well as statistics of any conservative or non-conservative force along a path. For homogeneous networks, we derive exact relations between length and time moments, quantifying the validity of approximating a continuous-time process with its discrete-time projection. For more general models, we obtain recursion relations, reminiscent of transfer matrix and exact enumeration techniques, to efficiently calculate path statistics numerically. 
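For the first moment, the first-passage recursions discussed above reduce to a linear system: the mean first-passage time satisfies t_i = w_i + Σ_j P_ij t_j with t = 0 at the target. A small direct-solve illustration (not the PathMAN implementation, which handles general waiting-time distributions and higher moments):

```python
import numpy as np

def ctrw_mfpt(P, mean_wait, target):
    """Mean first-passage time to `target` for a CTRW with row-stochastic
    jump matrix P and mean waiting time per state. Solves
    (I - P_restricted) t = w on the non-target states."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    A = np.eye(n - 1) - P[np.ix_(keep, keep)]
    t = np.linalg.solve(A, mean_wait[keep])
    out = np.zeros(n)
    out[keep] = t
    return out

# Three-state chain 0 - 1 - 2, target state 2, unit mean waiting times:
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
w = np.ones(3)
print(ctrw_mfpt(P, w, target=2))   # symmetric walk gives [4, 3, 0]
```

For this chain the hand calculation t_1 = 1 + 0.5 t_0, t_0 = 1 + t_1 gives t_0 = 4, t_1 = 3, confirming the solve; higher moments require the recursion relations the paper derives rather than a single linear system.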
We have implemented our algorithm in PathMAN (Path Matrix Algorithm for Networks), a Python script that users can apply to their model of choice. We demonstrate the algorithm on a few representative examples which underscore the importance of non-exponential distributions, memory, and coarse-graining in CTRWs. PMID:26646868</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29533221','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29533221"><span>Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir</p> <p>2018-04-01</p> <p>In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model with empirical data, the parameters of therapy function are estimated. The reported research works have not presented any algorithm to determine the optimal parameters of therapy function. In this study, a logarithmic therapy function is entered in the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer will be used to determine the PDF of the process. 
A cost function based on the difference between a predefined desired PDF and the PDF of the tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. Numerical results are finally given to demonstrate the effectiveness of the proposed method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20010069500','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20010069500"><span>Actuator Placement Via Genetic Algorithm for Aircraft Morphing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Crossley, William A.; Cook, Andrea M.</p> <p>2001-01-01</p> <p>This research continued work that began under the support of NASA Grant NAG1-2119. The focus of this effort was to continue investigations of Genetic Algorithm (GA) approaches that could be used to solve an actuator placement problem by treating it as a discrete optimization problem. In these efforts, the actuators are assumed to be "smart" devices that change the aerodynamic shape of an aircraft wing to alter the flow past the wing and, as a result, provide aerodynamic moments that could provide flight control. The earlier work investigated issues in the problem statement, developed the appropriate actuator modeling, recognized the importance of symmetry for this problem, modified the aerodynamic analysis routine for more efficient use with the genetic algorithm, and began a problem-size study to measure the impact of increasing problem complexity. The research discussed in this final summary further refined the problem statement to provide a "combined moment" problem statement that simultaneously addresses roll, pitch, and yaw.
Investigations of problem size using this new problem statement provided insight into the performance of the GA as the number of possible actuator locations increased. Where previous investigations utilized a simple wing model to develop the GA approach for actuator placement, this research culminated with application of the GA approach to a high-altitude unmanned aerial vehicle concept to demonstrate that the approach is valid for an aircraft configuration.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19870017501','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19870017501"><span>An Adaptive Numeric Predictor-corrector Guidance Algorithm for Atmospheric Entry Vehicles. M.S. Thesis - MIT, Cambridge</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Spratlin, Kenneth Milton</p> <p>1987-01-01</p> <p>An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles that utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The guidance algorithm motivation and design are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented. The achievable operational footprint for expected worst-case dispersions is presented.
The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017IJMPA..3230011B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017IJMPA..3230011B"><span>Chiral perturbation theory and nucleon-pion-state contaminations in lattice QCD</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bär, Oliver</p> <p>2017-05-01</p> <p>Multiparticle states with additional pions are expected to be a non-negligible source of excited-state contamination in lattice simulations at the physical point. It is shown that baryon chiral perturbation theory can be employed to calculate the contamination due to two-particle nucleon-pion-states in various nucleon observables. Leading order results are presented for the nucleon axial, tensor and scalar charge and three Mellin moments of parton distribution functions (quark momentum fraction, helicity and transversity moment). Taking into account phenomenological results for the charges and moments the impact of the nucleon-pion-states on lattice estimates for these observables can be estimated. The nucleon-pion-state contribution results in an overestimation of all charges and moments obtained with the plateau method. The overestimation is at the 5-10% level for source-sink separations of about 2 fm. 
The source-sink separations accessible in contemporary lattice simulations are found to be too small for chiral perturbation theory to be directly applicable.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28780735','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28780735"><span>RNA folding kinetics using Monte Carlo and Gillespie algorithms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Clote, Peter; Bayegan, Amir H</p> <p>2018-04-01</p> <p>RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny ([Formula: see text]20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to [Formula: see text] times that of the Gillespie algorithm, where [Formula: see text] denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to MFPT computed by the Gillespie algorithm multiplied by [Formula: see text]; however, this is not true for non-regular networks. 
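For orientation, one Gillespie (kinetic Monte Carlo) move on a weighted transition network looks like the sketch below. This is the textbook algorithm, not the cited RNA-folding software, and the rate table is invented; the abstract's point is that its exponentially distributed waiting times differ from the fixed-increment clock of a Metropolis Monte Carlo walk by a state-dependent factor tied to network degree.

```python
import random

def gillespie_step(rates, state, rng):
    """One Gillespie move from `state`, where rates[state] maps each
    neighbor to its transition rate k. Draw an exponential waiting time
    with the total exit rate, then pick a neighbor with probability
    proportional to its rate."""
    out = rates[state]
    total = sum(out.values())
    dt = rng.expovariate(total)        # mean waiting time = 1 / total
    r = rng.random() * total
    acc = 0.0
    for nxt, k in out.items():
        acc += k
        if r <= acc:
            return nxt, dt
    return nxt, dt                     # guard against floating-point edge

rng = random.Random(42)
rates = {0: {1: 2.0, 2: 1.0}, 1: {0: 1.0}, 2: {0: 3.0}}
state, t = 0, 0.0
for _ in range(5):
    state, dt = gillespie_step(rates, state, rng)
    t += dt
print(state, t)
```

Because the waiting time at a state depends on that state's total exit rate, two trajectories visiting the same states accumulate different elapsed times under Gillespie versus a fixed-step Monte Carlo clock, which is the source of the degree-dependent correction the paper analyzes.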
In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence-see http://bioinformatics.bc.edu/clote/RNAexpNumNbors .</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMSA21C..01K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMSA21C..01K"><span>Data Products From Particle Detectors On-Board NOAA's Newest Space Weather Monitor</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kress, B. T.; Rodriguez, J. V.; Onsager, T. G.</p> <p>2017-12-01</p> <p>NOAA's newest Geostationary Operational Environmental Satellite, GOES-16, was launched on 19 November 2016. Instrumentation on-board GOES-16 includes the new Space Environment In-Situ Suite (SEISS), which has been collecting data since 8 January 2017. SEISS is composed of five magnetospheric particle sensor units: an electrostatic analyzer for measuring 30 eV - 30 keV ions and electrons (MPS-LO), a high energy particle sensor (MPS-HI) that measures keV to MeV electrons and protons, east and west facing Solar and Galactic Proton Sensor (SGPS) units with 13 differential channels between 1-500 MeV, and an Energetic Heavy Ion Sensor (EHIS) that measures 30 species of heavy ions (He-Ni) in five energy bands in the 10-200 MeV/nuc range. Measurement of low energy magnetospheric particles by MPS-LO and heavy ions by EHIS are new capabilities not previously flown on the GOES system. 
Real-time data from GOES-16 will support space weather monitoring and first-principles space weather modeling by NOAA's Space Weather Prediction Center (SWPC). Space weather level 2+ data products under development at NOAA's National Centers for Environmental Information (NCEI) include the Solar Energetic Particle (SEP) Event Detection algorithm. Legacy components of the SEP event detection algorithm (currently produced by SWPC) include the Solar Radiation Storm Scales. New components will include, e.g., event fluences. New level 2+ data products also include the SEP event Linear Energy Transfer (LET) Algorithm, for transforming energy spectra from EHIS into LET spectra, and the Density and Temperature Moments and Spacecraft Charging algorithm. The moments and charging algorithm identifies electron and ion signatures of spacecraft surface (frame) charging in the MPS-LO fluxes. Densities and temperatures from MPS-LO will also be used to support a magnetopause crossing detection algorithm. The new data products will provide real-time indicators of potential radiation hazards for the satellite community and data for future studies of space weather effects. 
This presentation will include an overview of these algorithms and examples of their performance during recent co-rotation interaction region (CIR) associated radiation belt enhancements and a solar particle event on 14-15 July 2017.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120016750','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120016750"><span>Evaluation of Algorithms for a Miles-in-Trail Decision Support Tool</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bloem, Michael; Hattaway, David; Bambos, Nicholas</p> <p>2012-01-01</p> <p>Four machine learning algorithms were prototyped and evaluated for use in a proposed decision support tool that would assist air traffic managers as they set Miles-in-Trail restrictions. The tool would display probabilities that each possible Miles-in-Trail value should be used in a given situation. The algorithms were evaluated with an expected Miles-in-Trail cost that assumes traffic managers set restrictions based on the tool-suggested probabilities. Basic Support Vector Machine, random forest, and decision tree algorithms were evaluated, as was a softmax regression algorithm that was modified to explicitly reduce the expected Miles-in-Trail cost. The algorithms were evaluated with data from the summer of 2011 for air traffic flows bound to the Newark Liberty International Airport (EWR) over the ARD, PENNS, and SHAFF fixes. The algorithms were provided with 18 input features that describe the weather at EWR, the runway configuration at EWR, the scheduled traffic demand at EWR and the fixes, and other traffic management initiatives in place at EWR. 
Features describing other traffic management initiatives at EWR and the weather at EWR achieved relatively high information gain scores, indicating that they are the most useful for estimating Miles-in-Trail. In spite of a high variance or over-fitting problem, the decision tree algorithm achieved the lowest expected Miles-in-Trail costs when the algorithms were evaluated using 10-fold cross validation with the summer 2011 data for these air traffic flows.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_12 --> <div id="page_13" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="241"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2831400','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2831400"><span>Liver vessels
segmentation using a hybrid geometrical moments/graph cuts method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Esneault, Simon; Lafon, Cyril; Dillenseger, Jean-Louis</p> <p>2010-01-01</p> <p>This paper describes a fast and fully automatic method for liver vessel segmentation on pre-operative CT scan images. The basis of this method is the introduction of a 3-D geometrical moment-based detector of cylindrical shapes within the min-cut/max-flow energy minimization framework. This method represents an original way to introduce a data term as a constraint into the widely used Boykov’s graph cuts algorithm and, hence, to automate the segmentation. The method is evaluated and compared with others on a synthetic dataset. Finally, the relevance of our method to the planning of a (necessarily accurate) percutaneous high-intensity focused ultrasound surgical operation is demonstrated with some examples. PMID:19783500</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.S43H2965A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.S43H2965A"><span>Full moment tensors with uncertainties for the 2017 North Korea declared nuclear test and for a collocated, subsequent event</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Alvizuri, C. R.; Tape, C.</p> <p>2017-12-01</p> <p>A seismic moment tensor is a 3×3 symmetric matrix that characterizes the far-field seismic radiation from a source, whether it be an earthquake, a volcanic event, or an explosion. We estimate full moment tensors and their uncertainties for the North Korea declared nuclear test and for a collocated event that occurred eight minutes later.
The nuclear test and the subsequent event occurred on September 3, 2017 at around 03:30 and 03:38 UTC time. We perform a grid search over the six-dimensional space of moment tensors, generating synthetic waveforms at each moment tensor grid point and then evaluating a misfit function between the observed and synthetic waveforms. The synthetic waveforms are computed using a 1-D structure model for the region; this approximation requires careful assessment of time shifts between data and synthetics, as well as careful choice of the bandpass for filtering. For each moment tensor we characterize its uncertainty in terms of waveform misfit, a probability function, and a confidence curve for the probability that the true moment tensor lies within the neighborhood of the optimal moment tensor. For each event we estimate its moment tensor using observed waveforms from all available seismic stations within a 2000-km radius. We use as much of the waveform as possible, including surface waves for all stations, and body waves above 1 Hz for some of the closest stations. Our preliminary magnitude estimates are Mw 5.1-5.3 for the first event and Mw 4.7 for the second event. Our results show a dominantly positive isotropic moment tensor for the first event, and a dominantly negative isotropic moment tensor for the subsequent event. 
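The grid-search-with-misfit procedure described above can be skeletonized as follows. This is a toy with 1-D waveforms: a real search sweeps the six independent moment-tensor components and sums misfit over many stations, and the wrap-around time-shift handling here is a crude stand-in for the careful shift assessment the abstract mentions.

```python
import numpy as np

def waveform_misfit(obs, syn, max_shift=5):
    """L2 misfit between observed and synthetic traces, minimized over a
    small time shift (1-D structure models make such shifts necessary)."""
    best = np.inf
    for shift in range(-max_shift, max_shift + 1):
        d = obs - np.roll(syn, shift)    # np.roll wraps; a crude simplification
        best = min(best, float(np.sum(d * d)))
    return best

def grid_search(obs, candidates):
    """Index of the candidate synthetic with minimal misfit: the skeleton
    of a grid search over candidate source mechanisms."""
    misfits = [waveform_misfit(obs, s) for s in candidates]
    return int(np.argmin(misfits)), misfits

t = np.linspace(0.0, 1.0, 200)
true = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)   # a toy damped wavelet
candidates = [0.5 * true, true, -true, np.zeros_like(true)]
best, misfits = grid_search(true, candidates)
print(best)   # the matching candidate wins with zero misfit
```

In the real problem each grid point is a moment tensor whose synthetics come from a forward solver, and the misfit surface over the grid is what yields the probability and confidence curves described above.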
As expected, the details of the probability density, waveform fit, and confidence curves are influenced by the structural model, the choice of filter frequencies, and the selection of stations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22163775','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22163775"><span>ECS: efficient communication scheduling for underwater sensor networks.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hong, Lu; Hong, Feng; Guo, Zhongwen; Li, Zhengbao</p> <p>2011-01-01</p> <p>TDMA protocols have attracted a lot of attention for underwater acoustic sensor networks (UWSNs), because of the unique characteristics of acoustic signal propagation such as great energy consumption in transmission, long propagation delay and long communication range. Previous TDMA protocols all allocated transmission time to nodes based on discrete time slots. This paper proposes an efficient continuous time scheduling TDMA protocol (ECS) for UWSNs, including the continuous time based and sender oriented conflict analysis model, the transmission moment allocation algorithm and the distributed topology maintenance algorithm. Simulation results confirm that ECS improves network throughput by 20% on average, compared to existing MAC protocols.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19790025753','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19790025753"><span>The CLASSY clustering algorithm: Description, evaluation, and comparison with the iterative self-organizing clustering system (ISOCLS). [used for LACIE data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lennington, R. 
K.; Malek, H.</p> <p>1978-01-01</p> <p>A clustering method, CLASSY, was developed, which alternates maximum likelihood iteration with a procedure for splitting, combining, and eliminating the resulting statistics. The method maximizes the fit of a mixture of normal distributions to the observed first through fourth central moments of the data and produces an estimate of the proportions, means, and covariances in this mixture. The mathematical model that is the basis for CLASSY and the actual operation of the algorithm are described. Data comparing the performances of CLASSY and ISOCLS on simulated and actual LACIE data are presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25853869','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25853869"><span>Finger muscle attachments for an OpenSim upper-extremity model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lee, Jong Hwa; Asakawa, Deanna S; Dennerlein, Jack T; Jindrich, Devin L</p> <p>2015-01-01</p> <p>We determined muscle attachment points for the index, middle, ring and little fingers in an OpenSim upper-extremity model. Attachment points were selected to match both experimentally measured locations and mechanical function (moment arms). Although experimental measurements of finger muscle attachments have been made, models differ from specimens in many respects, such as bone segment ratio, joint kinematics and coordinate system. Likewise, moment arms are not available for all intrinsic finger muscles. Therefore, it was necessary to scale and translate muscle attachments from one experimental or model environment to another while preserving mechanical function. We used a two-step process.
First, we estimated muscle function by calculating moment arms for all intrinsic and extrinsic muscles using the partial velocity method. Second, optimization using Simulated Annealing and Hooke-Jeeves algorithms found muscle-tendon paths that minimized root mean square (RMS) differences between experimental and modeled moment arms. The partial velocity method resulted in variance accounted for (VAF) between measured and calculated moment arms of 75.5% on average (range from 48.5% to 99.5%) for intrinsic and extrinsic index finger muscles where measured data were available. RMS error between experimental and optimized values was within one standard deviation (S.D.) of the measured moment arm (mean RMS error = 1.5 mm < measured S.D. = 2.5 mm). Validation of both steps of the technique allowed for estimation of muscle attachment points for muscles whose moment arms have not been measured. Differences between modeled and experimentally measured muscle attachments, averaged over all finger joints, were less than 4.9 mm (within 7.1% of the average length of the muscle-tendon paths).
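The Hooke-Jeeves step of the optimization above can be sketched as a simple pattern search. The two-parameter objective below is a hypothetical stand-in for the RMS moment-arm error, not the authors' actual musculoskeletal model:

```python
def pattern_search(f, p0, step=1.0, tol=1e-6):
    """Hooke-Jeeves-style pattern search: probe each parameter up and
    down by `step`, keep any improving move, and halve the step when
    no probe helps."""
    p, fp = list(p0), f(p0)
    while step > tol:
        improved = False
        for i in range(len(p)):
            for d in (step, -step):
                q = list(p)
                q[i] += d
                fq = f(q)
                if fq < fp:
                    p, fp, improved = q, fq, True
                    break
        if not improved:
            step *= 0.5
    return p, fp

# Hypothetical RMS-style objective with its minimum at (3, -1).
rms = lambda p: ((p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2) ** 0.5

best, err = pattern_search(rms, [0.0, 0.0])
```

In the paper's setting the parameter vector would describe a muscle-tendon path and `f` the RMS difference between experimental and modeled moment arms; the search logic is the same.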
The resulting non-proprietary musculoskeletal model of the human fingers could be useful for many applications, including better understanding of complex multi-touch and gestural movements.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26626592','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26626592"><span>Beyond the 'teachable moment' - A conceptual analysis of women's perinatal behaviour change.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Olander, Ellinor K; Darwin, Zoe J; Atkinson, Lou; Smith, Debbie M; Gardner, Benjamin</p> <p>2016-06-01</p> <p>Midwives are increasingly expected to promote healthy behaviour to women and pregnancy is often regarded as a 'teachable moment' for health behaviour change. This view focuses on motivational aspects, when a richer analysis of behaviour change may be achieved by viewing the perinatal period through the lens of the Capability-Opportunity-Motivation Behaviour framework.
This framework proposes that behaviour has three necessary determinants: capability, opportunity, and motivation. To outline a broader analysis of perinatal behaviour change than is afforded by the existing conceptualisation of the 'teachable moment' by using the Capability-Opportunity-Motivation Behaviour framework. Research suggests that the perinatal period can be viewed as a time in which capability, opportunity or motivation naturally change such that unhealthy behaviours are disrupted, and healthy behaviours may be adopted. Moving away from a sole focus on motivation, an analysis utilising the Capability-Opportunity-Motivation Behaviour framework suggests that changes in capability and opportunity may also offer opportune points for intervention, and that lack of capability or opportunity may act as barriers to behaviour change that might be expected based solely on changes in motivation. Moreover, the period spanning pregnancy and the postpartum could be seen as a series of opportune intervention moments, that is, personally meaningful episodes initiated by changes in capability, opportunity or motivation. This analysis offers new avenues for research and practice, including identifying discrete events that may trigger shifts in capability, opportunity or motivation, and whether and how interventions might promote initiation and maintenance of perinatal health behaviours. Copyright © 2015 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS1015c2161A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS1015c2161A"><span>Mobile robot motion estimation using Hough transform</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. 
Yu</p> <p>2018-05-01</p> <p>This paper proposes an algorithm for estimation of mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot’s range sensors. A similar sample of the space geometry from any preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking down the problem of estimating mobile robot localization into three smaller independent problems. The specific feature of the algorithm presented is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JChPh.148p4108D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JChPh.148p4108D"><span>Selected-node stochastic simulation algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Duso, Lorenzo; Zechner, Christoph</p> <p>2018-04-01</p> <p>Stochastic simulations of biochemical networks are of vital importance for understanding complex dynamics in cells and tissues. However, existing methods to perform such simulations are associated with computational difficulties, and addressing those remains a daunting challenge.
Here we introduce the selected-node stochastic simulation algorithm (snSSA), which allows us to exclusively simulate an arbitrary, selected subset of molecular species of a possibly large and complex reaction network. The algorithm is based on an analytical elimination of chemical species, thereby avoiding explicit simulation of the associated chemical events. These species are instead described continuously in terms of statistical moments derived from a stochastic filtering equation, resulting in a substantial speedup when compared to Gillespie's stochastic simulation algorithm (SSA). Moreover, we show that statistics obtained via snSSA profit from a variance reduction, which can significantly lower the number of Monte Carlo samples needed to achieve a certain performance. We demonstrate the algorithm using several biological case studies for which the simulation time could be reduced by orders of magnitude.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012JCoPh.231.5805D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012JCoPh.231.5805D"><span>A multivariate quadrature based moment method for LES based modeling of supersonic combustion</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Donde, Pratik; Koo, Heeseok; Raman, Venkat</p> <p>2012-07-01</p> <p>The transported probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of scramjet combustors. In this approach, a high-dimensional transport equation for the joint composition-enthalpy PDF needs to be solved. Quadrature based approaches provide deterministic Eulerian methods for solving the joint-PDF transport equation. 
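The snSSA above is benchmarked against Gillespie's stochastic simulation algorithm (SSA). The baseline direct method can be sketched on a toy birth-death network; the reaction network and rate constants below are illustrative assumptions, not taken from the paper:

```python
import random

def ssa_birth_death(k_birth, k_death, t_end, rng):
    """Gillespie direct-method SSA for the network
    0 -> X (rate k_birth) and X -> 0 (rate k_death * X);
    returns the copy number of X at time t_end."""
    t, x = 0.0, 0
    while True:
        a_birth, a_death = k_birth, k_death * x
        a_total = a_birth + a_death           # total propensity (> 0 since k_birth > 0)
        t += rng.expovariate(a_total)         # exponential waiting time to next event
        if t > t_end:
            return x
        if rng.random() * a_total < a_birth:  # choose reaction proportionally to propensity
            x += 1
        else:
            x -= 1

rng = random.Random(42)
samples = [ssa_birth_death(10.0, 1.0, 20.0, rng) for _ in range(200)]
mean_x = sum(samples) / len(samples)  # stationary mean is k_birth / k_death = 10
```

The snSSA's speedup comes from replacing explicit event simulation for unselected species with moment equations; the direct method above is the reference point such schemes are measured against.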
In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22391545-electronic-magnetic-properties-small-rhodium-clusters','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22391545-electronic-magnetic-properties-small-rhodium-clusters"><span>Electronic and magnetic properties of small rhodium clusters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Soon, Yee Yeen; Yoon, Tiem Leong; Lim, Thong Leng</p> <p>2015-04-24</p> <p>We report a theoretical study of the electronic and magnetic properties of rhodium-atomic clusters. The lowest energy structures at the semi-empirical level of rhodium clusters are first obtained from a novel global-minimum search algorithm, known as PTMBHGA, where Gupta potential is used to describe the atomic interaction among the rhodium atoms. 
The structures are then re-optimized at the density functional theory (DFT) level with exchange-correlation energy approximated by Perdew-Burke-Ernzerhof generalized gradient approximation. For the purpose of calculating the magnetic moment of a given cluster, we calculate the optimized structure as a function of the spin multiplicity within the DFT framework. The resultant magnetic moments with the lowest energies so obtained allow us to work out the magnetic moment as a function of cluster size. Rhodium atomic clusters are found to display a unique variation in the magnetic moment as the cluster size varies. However, Rh₄ and Rh₆ are found to be nonmagnetic. Electronic structures of the magnetic ground-state structures are also investigated within the DFT framework. The results are compared against those based on different theoretical approaches available in the literature.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017RaPC..141..339R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017RaPC..141..339R"><span>An analytical method based on multipole moment expansion to calculate the flux distribution in Gammacell-220</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rezaeian, P.; Ataenia, V.; Shafiei, S.</p> <p>2017-12-01</p> <p>In this paper, the flux of photons inside the irradiation cell of the Gammacell-220 is calculated using an analytical method based on multipole moment expansion. The flux of the photons inside the irradiation cell is introduced as the function of monopole, dipoles and quadruples in the Cartesian coordinate system. For the source distribution of the Gammacell-220, the values of the multipole moments are specified by direct integrating.
To validate the presented method, the flux distribution inside the irradiation cell was determined utilizing MCNP simulations as well as experimental measurements. To measure the flux inside the irradiation cell, Amber dosimeters were employed. The calculated values of the flux were in agreement with the values obtained by simulations and measurements, especially in the central zones of the irradiation cell. In order to show that the present method is a good approximation to determine the flux in the irradiation cell, the values of the multipole moments were obtained by fitting the simulation and experimental data using the Levenberg-Marquardt algorithm. The present method leads to reasonable results for all source distributions, even those without any symmetry, which makes it a powerful tool for source load planning.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018EPJWC.17501007B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018EPJWC.17501007B"><span>Multi-hadron-state contamination in nucleon observables from chiral perturbation theory</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bär, Oliver</p> <p>2018-03-01</p> <p>Multi-particle states with additional pions are expected to be a non-negligible source of the excited-state contamination in lattice simulations at the physical point. It is shown that baryon chiral perturbation theory (ChPT) can be employed to calculate the contamination due to two-particle nucleon-pion states in various nucleon observables. Results to leading order are presented for the nucleon axial, tensor and scalar charge and three Mellin moments of parton distribution functions: the average quark momentum fraction, the helicity and the transversity moment.
Taking into account experimental and phenomenological results for the charges and moments, the impact of the nucleon-pion states on lattice estimates for these observables can be estimated. The nucleon-pion-state contribution leads to an overestimation of all charges and moments obtained with the plateau method. The overestimation is at the 5-10% level for source-sink separations of about 2 fm. Existing lattice data are not in conflict with the ChPT predictions, but the comparison suggests that significantly larger source-sink separations are needed to compute the charges and moments with few-percent precision. Talk given at the 35th International Symposium on Lattice Field Theory, 18 - 24 June 2017, Granada, Spain.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003WRR....39.1243E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003WRR....39.1243E"><span>Comparisons of two moments-based estimators that utilize historical and paleoflood data for the log Pearson type III distribution</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>England, John F.; Salas, José D.; Jarrett, Robert D.</p> <p>2003-09-01</p> <p>The expected moments algorithm (EMA) [, 1997] and the Bulletin 17B [, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed-threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution.
EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed-threshold exceedance cases. EMA performed comparatively much better in other fixed-threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV-simulated data confirmed that EMA performed better than B17H. 
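The expected-moments idea behind EMA can be illustrated with a deliberately simplified sketch: a normal distribution of (log) flows fitted by iterating between sample-moment estimates and the conditional expected moments of threshold-censored historical years. EMA proper targets the log Pearson type III distribution and handles all four data types above; the normal model, data, and threshold here are illustrative assumptions only:

```python
import math
from statistics import NormalDist

def ema_normal(systematic, n_below, threshold, iters=200):
    """Toy expected-moments iteration: `systematic` flows are fully known;
    `n_below` historical years are known only to lie below `threshold`.
    Each pass replaces the censored years by their conditional expected
    moments under the current fitted normal, then refits."""
    n = len(systematic) + n_below
    s1 = sum(systematic)
    s2 = sum(x * x for x in systematic)
    mu = s1 / len(systematic)
    var = s2 / len(systematic) - mu * mu
    std = NormalDist()
    for _ in range(iters):
        sd = math.sqrt(var)
        a = (threshold - mu) / sd
        lam = std.pdf(a) / std.cdf(a)            # inverse Mills ratio, lower tail
        m1 = mu - sd * lam                       # E[X | X < threshold]
        v1 = var * (1.0 - a * lam - lam * lam)   # Var[X | X < threshold]
        mu = (s1 + n_below * m1) / n
        var = (s2 + n_below * (v1 + m1 * m1)) / n - mu * mu
    return mu, math.sqrt(var)

# Four gaged log-flows plus four historical years known only to be below 2.0.
mu, sd = ema_normal([2.0, 2.5, 3.0, 3.5], n_below=4, threshold=2.0)
```

As expected, the fitted mean drops below the gaged-data-only mean of 2.75, since the censored historical years carry information that flows were small.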
Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22657788-moment-convergence-method-stochastic-analysis-biochemical-reaction-networks','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22657788-moment-convergence-method-stochastic-analysis-biochemical-reaction-networks"><span>A moment-convergence method for stochastic analysis of biochemical reaction networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Zhang, Jiajun; Nie, Qing; Zhou, Tianshou, E-mail: mcszhtsh@mail.sysu.edu.cn</p> <p></p> <p>Traditional moment-closure methods need to assume that high-order cumulants of a probability distribution approximate to zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined in mathematics as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of the moment-closure methods. As such, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation for the master probability equation (MPE). In particular, the connection between low-order convergent moments and rate constants can be more easily derived in terms of explicit and analytical forms, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute steady-state or transient probability distribution, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in sizes of rate constants.
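The convergent moments defined above (Taylor coefficients of the probability-generating function at a chosen point) can be computed directly for a known distribution. The Poisson example below is an illustration of the definition, not one of the paper's reaction networks:

```python
import math

def convergent_moment(pmf, k, z0, n_max):
    """k-th Taylor coefficient of the PGF G(z) = sum_n pmf(n) z^n
    expanded at z0, i.e. G^(k)(z0) / k! = sum_n pmf(n) * C(n, k) * z0^(n-k),
    truncating the sum at n_max."""
    return sum(pmf(n) * math.comb(n, k) * z0 ** (n - k) for n in range(k, n_max))

lam = 2.0
poisson = lambda n: math.exp(-lam) * lam ** n / math.factorial(n)

# For a Poisson PGF G(z) = exp(lam * (z - 1)) the coefficient is known in
# closed form: lam^k * exp(lam * (z0 - 1)) / k!
numeric = convergent_moment(poisson, 3, 0.5, 60)
exact = lam ** 3 * math.exp(lam * (0.5 - 1.0)) / math.factorial(3)
```

Agreement between the truncated sum and the closed form confirms the identity; for z0 = 1 the coefficients reduce to the usual factorial moments divided by k!.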
Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in modeling, analysis, and computation of complex biochemical networks with intrinsic noise.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1392885','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1392885"><span>Rotation invariants of vector fields from orthogonal moments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Yang, Bo; Kostková, Jitka; Flusser, Jan</p> <p></p> <p>Vector field images are a type of new multidimensional data that appear in many engineering areas. Although the vector fields can be visualized as images, they differ from graylevel and color images in several aspects. In order to analyze them, special methods and algorithms must be originally developed or substantially adapted from the traditional image processing area. Here, we propose a method for the description and matching of vector field patterns under an unknown rotation of the field. Rotation of a vector field is so-called total rotation, where the action is applied not only on the spatial coordinates but also on the field values. Invariants of vector fields with respect to total rotation constructed from orthogonal Gaussian–Hermite moments and Zernike moments are introduced. Their numerical stability is shown to be better than that of the invariants published so far.
We demonstrate their usefulness in a real-world template matching application of rotated vector fields.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20130013584','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20130013584"><span>Investigation of Optimal Control Allocation for Gust Load Alleviation in Flight Control</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Frost, Susan A.; Taylor, Brian R.; Bodson, Marc</p> <p>2012-01-01</p> <p>Advances in sensors and avionics computation power suggest real-time structural load measurements could be used in flight control systems for improved safety and performance. A conventional transport flight control system determines the moments necessary to meet the pilot's command, while rejecting disturbances and maintaining stability of the aircraft. Control allocation is the problem of converting these desired moments into control effector commands. In this paper, a framework is proposed to incorporate real-time structural load feedback and structural load constraints in the control allocator. Constrained optimal control allocation can be used to achieve desired moments without exceeding specified limits on monitored load points. Minimization of structural loads by the control allocator is used to alleviate gust loads.
The framework to incorporate structural loads in the flight control system and an optimal control allocation algorithm will be described and then demonstrated on a nonlinear simulation of a generic transport aircraft with flight dynamics and static structural loads.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20080047755&hterms=interferometry&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dinterferometry','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20080047755&hterms=interferometry&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dinterferometry"><span>White-light Interferometry using a Channeled Spectrum: II. Calibration Methods, Numerical and Experimental Results</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.; Best, Paul K.</p> <p>2007-01-01</p> <p>In the companion paper, [Appl. Opt. 46, 5853 (2007)] a highly accurate white light interference model was developed from just a few key parameters characterized in terms of various moments of the source and instrument transmission function. We develop and implement the end-to-end process of calibrating these moment parameters together with the differential dispersion of the instrument and applying them to the algorithms developed in the companion paper. The calibration procedure developed herein is based on first obtaining the standard monochromatic parameters at the pixel level: wavenumber, phase, intensity, and visibility parameters via a nonlinear least-squares procedure that exploits the structure of the model. The pixel level parameters are then combined to obtain the required 'global' moment and dispersion parameters. 
The process is applied to both simulated scenarios of astrometric observations and to data from the microarcsecond metrology testbed (MAM), an interferometer testbed that has played a prominent role in the development of this technology.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19890066087&hterms=Orientation+basis&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DOrientation%2Bbasis','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19890066087&hterms=Orientation+basis&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DOrientation%2Bbasis"><span>A method for estimating the mass properties of a manipulator by measuring the reaction moments at its base</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>West, Harry; Papadopoulos, Evangelos; Dubowsky, Steven; Cheah, Hanson</p> <p>1989-01-01</p> <p>Emulating on earth the weightlessness of a manipulator floating in space requires knowledge of the manipulator's mass properties. A method for calculating these properties by measuring the reaction forces and moments at the base of the manipulator is described. A manipulator is mounted on a 6-DOF sensor, and the reaction forces and moments at its base are measured for different positions of the links as well as for different orientations of its base. A procedure is developed to calculate from these measurements some combinations of the mass properties. The mass properties identified are not sufficiently complete for computed torque and other dynamic control techniques, but do allow compensation for the gravitational load on the links, and for simulation of weightless conditions on a space emulator.
The algorithm has been experimentally demonstrated on a PUMA 260 and used to measure the independent combinations of the 16 mass parameters of the base and three proximal links.</p> </li> </ol> <ol class="result-class" start="261"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22306327-magnetic-structure-co-ncnh-determined-spin-polarized-neutron-diffraction','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22306327-magnetic-structure-co-ncnh-determined-spin-polarized-neutron-diffraction"><span>The magnetic structure of Co(NCNH)₂ as determined by (spin-polarized) neutron diffraction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information
(OSTI.GOV)</a></p> <p>Jacobs, Philipp; Houben, Andreas; Senyshyn, Anatoliy</p> <p></p> <p>The magnetic structure of Co(NCNH)₂ has been studied by neutron diffraction data below 10 K using the SPODI and DNS instruments at FRM II, Munich. There is an intensity change in the (1 1 0) and (0 2 0) reflections around 4 K, to be attributed to the onset of a magnetic ordering of the Co²⁺ spins. Four different spin orientations have been evaluated on the basis of Rietveld refinements, comprising antiferromagnetic as well as ferromagnetic ordering along all three crystallographic axes. Both residual values and supplementary susceptibility measurements evidence that only a ferromagnetic ordering with all Co²⁺ spins parallel to the c axis is a suitable description of the low-temperature magnetic ground state of Co(NCNH)₂. The deviation of the magnetic moment derived by the Rietveld refinement from the expectancy value may be explained either by an incomplete saturation of the moment at temperatures slightly below the Curie temperature or by a small Jahn–Teller distortion. - Graphical abstract: The magnetic ground state of Co(NCNH)₂ has been clarified by (spin-polarized) neutron diffraction data at low temperatures. Intensity changes below 4 K arise due to the onset of ferromagnetic ordering of the Co²⁺ spins parallel to the c axis, corroborated by various (magnetic) Rietveld refinements. Highlights: • Powderous Co(NCNH)₂ has been subjected to (spin-polarized) neutron diffraction. • Magnetic susceptibility data of Co(NCNH)₂ have been collected. • Below 4 K, the magnetic moments align ferromagnetically with all Co²⁺ spins parallel to the c axis. • The magnetic susceptibility data yield an effective magnetic moment of 4.68 and a Weiss constant of -13(2) K.
• The ferromagnetic Rietveld refinement leads to a magnetic moment of 2.6, which is close to the expected value of 3.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ISPAr42.3..607H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ISPAr42.3..607H"><span>A Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.</p> <p>2018-04-01</p> <p>Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold tuning, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm rests on the assumption that the point cloud can be regarded as a mixture of Gaussian models. The separation of ground points and non-ground points can then be recast as the separation of a Gaussian mixture model. Expectation-maximization (EM) is applied to realize this separation: EM computes maximum-likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or object can be computed, and after several iterations each point is labelled with the component of larger likelihood. Furthermore, intensity information was also utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. 
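The mixture-separation step this abstract describes (fit a two-component Gaussian mixture by EM, then label each point by the larger posterior) can be sketched on elevations alone. This is an illustrative reconstruction, not the authors' code: it works in 1-D and omits the intensity-based refinement.

```python
import numpy as np

def em_two_gaussians(z, iters=50):
    """Fit a 2-component 1-D Gaussian mixture to elevations z via EM.
    Returns a boolean label per point: True = higher-mean component
    (here standing in for "object"), False = lower ("ground")."""
    # Initialize the two components from the lower and upper quartiles.
    mu = np.array([np.percentile(z, 25), np.percentile(z, 75)])
    sigma = np.array([z.std(), z.std()]) + 1e-9
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point.
        d = (z[:, None] - mu) / sigma
        logp = -0.5 * d**2 - np.log(sigma) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood update of weights, means, variances.
        nk = r.sum(axis=0)
        pi = nk / len(z)
        mu = (r * z[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (z[:, None] - mu)**2).sum(axis=0) / nk) + 1e-9
    # Label each point with the component of larger posterior.
    return r[:, 1] > r[:, 0] if mu[1] > mu[0] else r[:, 0] > r[:, 1]
```

On synthetic data with ground near 0 m and objects near 5 m, the returned labels separate the two populations after a handful of iterations.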
Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48% total error, which is much lower than those of most of the eight classical filtering algorithms reported by the ISPRS.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1862c0040A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1862c0040A"><span>Modeling of full-Heusler alloys within tight-binding approximation: Case study of Fe2MnAl</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Azhar, A.; Majidi, M. A.; Nanto, D.</p> <p>2017-07-01</p> <p>Heusler alloys have been known for about a century, and predictions of magnetic moment values using the Slater-Pauling rule have been successful for many such materials. However, such a simple counting rule has been found not to always work for all Heusler alloys. For instance, Fe2CuAl has been found to have a magnetic moment of 3.30 µB per formula unit although the Slater-Pauling rule suggests a value of 2 µB. On the other hand, a recent experiment shows that a non-stoichiometric Heusler compound Fe2Mn0.5Cu0.5Al possesses a magnetic moment of 4 µB, closer to the Slater-Pauling prediction for the stoichiometric compound. Such discrepancies signify that a general theory for predicting the magnetic moment of Heusler alloys is still far from complete. Motivated by this issue, we propose a theoretical study of the full-Heusler alloy Fe2MnAl to understand the formation of the magnetic moment microscopically. 
We model the system by constructing a density-functional-theory-based tight-binding Hamiltonian and incorporating Hubbard repulsive as well as spin-spin interactions for the electrons occupying the d-orbitals. Then, we solve the model using a Green's function approach, and treat the interaction terms within the mean-field approximation. At this stage, we aim to formulate the computational algorithm for the overall calculation process. Our final goal is to compute the total magnetic moment per unit cell of this system and compare it with the experimental data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5061423','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5061423"><span>Multi-Sensor Based State Prediction for Personal Mobility Vehicles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Gupta, Pankaj; Umata, Ichiro; Watanabe, Atsushi; Even, Jani; Suyama, Takayuki; Ishii, Shin</p> <p>2016-01-01</p> <p>This paper presents a study on multi-modal human emotional state detection while riding a powered wheelchair (PMV; Personal Mobility Vehicle) in an indoor labyrinth-like environment. The study reports findings on the habituation of the human stress response during self-driving. In addition, the effects of “loss of controllability”, i.e. the change of the driver's role to that of a passenger, are investigated via an autonomous driving modality. The multi-modal emotional state detector sensing framework consists of four sensing devices: electroencephalograph (EEG), heart inter-beat interval (IBI), galvanic skin response (GSR) and stressor level lever (in the case of autonomous riding). 
Physiological emotional state measurement characteristics are organized by time-scale, in terms of capturing slower changes (long-term) and quicker changes from moment to moment. Experimental results with fifteen participants regarding subjective emotional state reports and commercial software measurements validated the proposed emotional state detector. Short-term GSR and heart signal characterizations captured moment-to-moment emotional state during autonomous riding (Spearman correlation; ρ = 0.6, p < 0.001). Short-term GSR and EEG characterizations reliably captured moment-to-moment emotional state during self-driving (classification accuracy: 69.7%). Finally, long-term GSR and heart characterizations were confirmed to reliably capture slow changes in emotional state during autonomous riding and also during the participants' resting state. The purpose of this study and the exploration of various algorithms and sensors in a structured framework is to provide a comprehensive background for multi-modal emotional state prediction experiments and/or applications. Additional discussion is given regarding the feasibility and utility of these concepts. PMID:27732589</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27732589','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27732589"><span>Multi-Sensor Based State Prediction for Personal Mobility Vehicles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Abdur-Rahim, Jamilah; Morales, Yoichi; Gupta, Pankaj; Umata, Ichiro; Watanabe, Atsushi; Even, Jani; Suyama, Takayuki; Ishii, Shin</p> <p>2016-01-01</p> <p>This paper presents a study on multi-modal human emotional state detection while riding a powered wheelchair (PMV; Personal Mobility Vehicle) in an indoor labyrinth-like environment. 
The study reports findings on the habituation of the human stress response during self-driving. In addition, the effects of "loss of controllability", i.e. the change of the driver's role to that of a passenger, are investigated via an autonomous driving modality. The multi-modal emotional state detector sensing framework consists of four sensing devices: electroencephalograph (EEG), heart inter-beat interval (IBI), galvanic skin response (GSR) and stressor level lever (in the case of autonomous riding). Physiological emotional state measurement characteristics are organized by time-scale, in terms of capturing slower changes (long-term) and quicker changes from moment to moment. Experimental results with fifteen participants regarding subjective emotional state reports and commercial software measurements validated the proposed emotional state detector. Short-term GSR and heart signal characterizations captured moment-to-moment emotional state during autonomous riding (Spearman correlation; ρ = 0.6, p < 0.001). Short-term GSR and EEG characterizations reliably captured moment-to-moment emotional state during self-driving (classification accuracy: 69.7%). Finally, long-term GSR and heart characterizations were confirmed to reliably capture slow changes in emotional state during autonomous riding and also during the participants' resting state. The purpose of this study and the exploration of various algorithms and sensors in a structured framework is to provide a comprehensive background for multi-modal emotional state prediction experiments and/or applications. 
Additional discussion is given regarding the feasibility and utility of these concepts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1995SPIE.2567..151G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1995SPIE.2567..151G"><span>Automatic comparison of striation marks and automatic classification of shoe prints</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Geradts, Zeno J.; Keijzer, Jan; Keereweer, Isaac</p> <p>1995-09-01</p> <p>A database for toolmarks (named TRAX) and a database for footwear outsole designs (named REBEZO) have been developed on a PC. The databases are filled with video images and administrative data about the toolmarks and the footwear designs. An algorithm for the automatic comparison of the digitized striation patterns has been developed for TRAX. The algorithm appears to work well for deep and complete striation marks and will be implemented in TRAX. For REBEZO, some efforts have been made toward the automatic classification of outsole patterns. The algorithm first segments the shoe profile. Fourier features are selected for the separate elements and are classified with a neural network. 
In future developments, information on invariant moments of the shape and the rotation angle will be included in the neural network.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19890066081&hterms=algebra&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dalgebra','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19890066081&hterms=algebra&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dalgebra"><span>A spatial operator algebra for manipulator modeling and control</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rodriguez, G.; Kreutz, K.; Jain, A.</p> <p>1989-01-01</p> <p>A spatial operator algebra for modeling the control and trajectory design of manipulators is discussed, with emphasis on its analytical formulation and implementation in the Ada programming language. The elements of this algebra are linear operators whose domain and range spaces consist of forces, moments, velocities, and accelerations. The effect of these operators is equivalent to a spatial recursion along the span of the manipulator. Inversion is obtained using techniques of recursive filtering and smoothing. The operator algebra provides a high-level framework for describing the dynamic and kinematic behavior of a manipulator and control and trajectory design algorithms. 
Implementable recursive algorithms can be immediately derived from the abstract operator expressions by inspection, thus greatly simplifying the transition from an abstract problem formulation and solution to the detailed mechanization of a specific algorithm.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24313496','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24313496"><span>Quantum rotor model for a Bose-Einstein condensate of dipolar molecules.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Armaitis, J; Duine, R A; Stoof, H T C</p> <p>2013-11-22</p> <p>We show that a Bose-Einstein condensate of heteronuclear molecules in the regime of small and static electric fields is described by a quantum rotor model for the macroscopic electric dipole moment of the molecular gas cloud. We solve this model exactly and find the symmetric, i.e., rotationally invariant, and dipolar phases expected from the single-molecule problem, but also an axial and planar nematic phase due to many-body effects. 
Investigation of the wave function of the macroscopic dipole moment also reveals squeezing of the probability distribution for the angular momentum of the molecules.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26856522','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26856522"><span>Development of an inter-professional screening instrument for cancer patients' education process.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Vaartio-Rajalin, Heli; Huumonen, Tuula; Iire, Liisa; Jekunen, Antti; Leino-Kilpi, Helena; Minn, Heikki; Paloniemi, Jenni; Zabalegui, Adelaida</p> <p>2016-02-01</p> <p>The aim of this paper is to describe the development of an inter-professional screening instrument for cancer patients' cognitive resources, knowledge expectations and inter-professional collaboration within patient education. Four empirical datasets collected during 2012-2014 were analyzed in order to identify main categories, subcategories and items for the inter-professional screening instrument. Our inter-professional screening instrument integrates the critical moments of cancer patient education and the knowledge-expectation types obtained from the patient datasets into the assessment of patients' cognitive resources, knowledge expectations and comprehension, as well as intra- and inter-professional collaboration. Copyright © 2015 Elsevier Inc. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004PhDT........35T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004PhDT........35T"><span>Statistics based sampling for controller and estimator design</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tenne, Dirk</p> <p></p> <p>The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold and addresses the aforementioned topics: nonlinear estimation, target tracking and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher-order accuracy. The so-called unscented transformation has been extended to capture higher-order moments. Furthermore, higher-order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three-dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing covariance intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight-line maneuvers. 
The third part of this dissertation addresses the design of controllers that incorporate knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points which are calculated by the unscented transformation. This set of points is used to design robust controllers which minimize a statistical performance measure of the plant, consisting of a combination of the mean and variance, over the domain of uncertainty. The proposed technique is illustrated on three benchmark problems. The first relates to the design of prefilters for a linear and a nonlinear spring-mass-dashpot system, and the second applies a feedback controller to a hovering helicopter. Lastly, the statistical robust controller design is applied to a concurrent feed-forward/feedback controller structure for a high-speed, low-tension tape drive.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010IEITC..91..314J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010IEITC..91..314J"><span>A Network Selection Algorithm Considering Power Consumption in Hybrid Wireless Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Joe, Inwhee; Kim, Won-Tae; Hong, Seokjoon</p> <p></p> <p>In this paper, we propose a novel network selection algorithm considering power consumption in hybrid wireless networks for vertical handover. CDMA, WiBro and WLAN networks are the candidate networks for this selection algorithm. The algorithm is composed of a power consumption prediction algorithm and a final network selection algorithm. The power consumption prediction algorithm estimates the expected lifetime of the mobile station based on the current battery level, traffic class and power consumption of each network interface card of the mobile station. 
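The lifetime estimate just described, together with the candidate-pruning check it feeds, can be sketched as follows. The field names and the safety `margin` are illustrative assumptions for this sketch, not values or formulas from the paper.

```python
def prune_candidates(networks, handover_delay_s, margin=2.0):
    """Preprocessing sketch: estimate each candidate network's expected
    lifetime as remaining battery energy divided by that interface's
    power draw, and drop any network whose lifetime is not comfortably
    longer than the handover delay. Returns {name: lifetime_s}."""
    kept = {}
    for name, nic in networks.items():
        # Expected lifetime [s] = remaining energy [J] / power draw [W].
        lifetime_s = nic["battery_j"] / nic["power_w"]
        if lifetime_s >= margin * handover_delay_s:
            kept[name] = lifetime_s
    return kept

# Hypothetical candidates: a frugal interface and a power-hungry one.
candidates = {
    "CDMA": {"battery_j": 3600.0, "power_w": 1.0},     # ~1 h lifetime
    "WLAN": {"battery_j": 3600.0, "power_w": 1200.0},  # ~3 s lifetime
}
survivors = prune_candidates(candidates, handover_delay_s=3.0)
```

With these numbers only "CDMA" survives the preprocessing; "WLAN" is removed before the AHP/GRA ranking stage, which matches the pruning rationale the abstract goes on to describe.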
If the expected lifetime of the mobile station in a certain network is not long enough compared with the handover delay, this particular network will be removed from the candidate network list, thereby preventing unnecessary handovers in the preprocessing procedure. The final network selection algorithm itself combines AHP (Analytic Hierarchy Process) and GRA (Grey Relational Analysis). The global factors of the network selection structure are QoS, cost and lifetime. If the user preference is lifetime, our selection algorithm selects the network that offers the longest service duration due to low power consumption. Also, we conduct simulations using the OPNET simulation tool. The simulation results show that the proposed algorithm provides a longer lifetime in the hybrid wireless network environment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28208387','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28208387"><span>Network clustering and community detection using modulus of families of loops.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shakeri, Heman; Poggi-Corradini, Pietro; Albin, Nathan; Scoglio, Caterina</p> <p>2017-01-01</p> <p>We study the structure of loops in networks using the notion of modulus of loop families. We introduce an alternate measure of network clustering by quantifying the richness of families of (simple) loops. Modulus tries to minimize the expected overlap among loops by spreading the expected link usage optimally. We propose weighting networks using these expected link usages to improve classical community detection algorithms. 
We show that the proposed method enhances the performance of certain algorithms, such as spectral partitioning and modularity maximization heuristics, on standard benchmarks.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5298648','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5298648"><span>Estimation of Ground Reaction Forces and Moments During Gait Using Only Inertial Motion Capture</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Karatsidis, Angelos; Bellusci, Giovanni; Schepers, H. Martin; de Zee, Mark; Andersen, Michael S.; Veltink, Peter H.</p> <p>2016-01-01</p> <p>Ground reaction forces and moments (GRF&M) are important measures used as input in biomechanical analysis to estimate joint kinetics, which often are used to infer information for many musculoskeletal diseases. Their assessment is conventionally achieved using laboratory-based equipment that cannot be applied in daily life monitoring. In this study, we propose a method to predict GRF&M during walking, using exclusively kinematic information from fully-ambulatory inertial motion capture (IMC). From the equations of motion, we derive the total external forces and moments. Then, we solve the indeterminacy problem during double stance using a distribution algorithm based on a smooth transition assumption. The agreement between the IMC-predicted and reference GRF&M was categorized over normal walking speed as excellent for the vertical (ρ = 0.992, rRMSE = 5.3%), anterior (ρ = 0.965, rRMSE = 9.4%) and sagittal (ρ = 0.933, rRMSE = 12.4%) GRF&M components and as strong for the lateral (ρ = 0.862, rRMSE = 13.1%), frontal (ρ = 0.710, rRMSE = 29.6%), and transverse GRF&M (ρ = 0.826, rRMSE = 18.2%). 
Sensitivity analysis was performed on the effect of the cut-off frequency used in the filtering of the input kinematics, as well as the threshold velocities for the gait event detection algorithm. This study was the first to use only inertial motion capture to estimate 3D GRF&M during gait, providing comparable accuracy with optical motion capture prediction. This approach enables applications that require estimation of the kinetics during walking outside the gait laboratory. PMID:28042857</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23967344','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23967344"><span>Dissection of a single rat muscle-tendon complex changes joint moments exerted by neighboring muscles: implications for invasive surgical interventions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Maas, Huub; Baan, Guus C; Huijing, Peter A</p> <p>2013-01-01</p> <p>The aim of this paper is to investigate mechanical functioning of a single skeletal muscle, active within a group of (previously) synergistic muscles. For this purpose, we assessed wrist angle-active moment characteristics exerted by a group of wrist flexion muscles in the rat for three conditions: (i) after resection of the upper arm skin; (ii) after subsequent distal tenotomy of flexor carpi ulnaris muscle (FCU); and (iii) after subsequent freeing of FCU distal tendon and muscle belly from surrounding tissues (MT dissection). Measurements were performed for a control group and for an experimental group after recovery (5 weeks) from tendon transfer of FCU to extensor carpi radialis (ECR) insertion. 
To assess whether FCU tenotomy and MT dissection affect FCU contributions to wrist moments exclusively or also those of neighboring wrist flexion muscles, these data were compared to wrist angle-moment characteristics of selectively activated FCU. FCU tenotomy and MT dissection decreased wrist moments of the control group at all wrist angles tested, including angles for which no or minimal wrist moments were measured when activating FCU exclusively. For the tendon transfer group, the wrist flexion moment increased after FCU tenotomy, but to a greater extent than can be expected based on the wrist extension moments exerted by the selectively excited transferred FCU. We conclude that dissection of a single muscle in any surgical treatment affects not only the mechanical characteristics of the target muscle, but also those of other muscles within the same compartment. Our results also demonstrate that even after agonistic-to-antagonistic tendon transfer, mechanical interactions with previously synergistic muscles remain present.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008AGUSMGP31D..03P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008AGUSMGP31D..03P"><span>3D magnetic sources' framework estimation using Genetic Algorithm (GA)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ponte-Neto, C. F.; Barbosa, V. C.</p> <p>2008-05-01</p> <p>We present a method for inverting the total-field anomaly to determine the framework of simple 3D magnetic sources such as batholiths, dikes, sills, geological contacts, and kimberlite and lamproite pipes. We use a GA to obtain the magnetic sources' frameworks and their magnetic features simultaneously. 
Specifically, we estimate the magnetization direction (inclination and declination) and the total dipole moment intensity, and the horizontal and vertical positions, in Cartesian coordinates, of a finite set of elementary magnetic dipoles. The spatial distribution of these magnetic dipoles composes the skeletal outline of the geologic sources. We assume that the geologic sources have a homogeneous magnetization distribution and thus all dipoles have the same magnetization direction and dipole moment intensity. To implement the GA, we use real-valued encoding with crossover, mutation, and elitism. To obtain a unique and stable solution, we set upper and lower bounds on declination and inclination of [0, 360°] and [-90°, 90°], respectively. We also impose the criterion of minimum scattering of the dipole-position coordinates, to guarantee that the spatial distribution of the dipoles (defining the source skeleton) is as close as possible to a continuous distribution. To this end, we fix the upper and lower bounds of the dipole moment intensity and evaluate the dipole-position estimates. If the dipole scattering is greater than a value expected by the interpreter, the upper bound of the dipole moment intensity is reduced by 10% of the latter. We repeat this procedure until the dipole scattering and the data fitting are acceptable. We apply our method to noise-corrupted magnetic data from simulated 3D magnetic sources with simple geometries located at different depths. In tests simulating sources such as a sphere and a cube, all estimates of the dipole coordinates agree with the center of mass of these sources. For elongated prismatic sources oriented in an arbitrary direction, the estimated dipole-position coordinates coincide with the principal axis of the sources. In tests with synthetic data simulating the magnetic anomaly produced by intrusive 2D structures such as dikes and sills, the estimates of the dipole coordinates coincide with the principal plane of these 2D sources. 
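The real-valued GA machinery this abstract names (real-valued encoding, crossover, mutation, elitism, bounded parameters) can be sketched generically. The magnetic forward model is replaced here by an arbitrary misfit function, so this is an illustrative skeleton under stated assumptions, not the authors' inversion code.

```python
import numpy as np

def real_ga(misfit, bounds, pop=60, gens=150, elite=2, pmut=0.2, seed=1):
    """Minimal real-coded GA minimizing `misfit` over box `bounds`
    (a list of (low, high) pairs, one per parameter)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))  # random initial population
    for _ in range(gens):
        f = np.array([misfit(x) for x in P])
        P = P[np.argsort(f)]                      # sort: lower misfit first
        children = [P[i].copy() for i in range(elite)]   # elitism
        while len(children) < pop:
            a, b = P[rng.integers(0, pop // 2, 2)]       # parents from top half
            w = rng.random(len(lo))
            child = w * a + (1.0 - w) * b                # blend crossover
            m = rng.random(len(lo)) < pmut               # per-gene mutation mask
            child[m] += rng.normal(0.0, 0.1 * (hi - lo))[m]  # Gaussian mutation
            children.append(np.clip(child, lo, hi))      # enforce bounds
        P = np.array(children)
    f = np.array([misfit(x) for x in P])
    return P[np.argmin(f)]                        # best individual found
```

Applied to a toy quadratic misfit with minimum at (3, 3, 3) on the box [-10, 10]³, the best individual converges close to the minimum; in the paper's setting the misfit would instead measure the fit between observed and predicted total-field anomalies.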
We also inverted the aeromagnetic data from Serra do Cabral, in southeastern Brazil, and we estimated dipoles distributed on a horizontal plane at a depth of 30 km, with inclination and declination of 59.1° and -48.0°, respectively. The results showed close agreement with previous interpretations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26019610','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26019610"><span>Clustering performance comparison using K-means and expectation maximization algorithms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jung, Yong Gyu; Kang, Min Soo; Heo, Jun</p> <p>2014-11-14</p> <p>Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. 
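Of the two clustering algorithms this record compares, the K-means side can be illustrated with a minimal Lloyd's iteration (the EM side fits full Gaussian densities where this uses hard centroid assignments). This is a generic hand-rolled sketch, not the paper's implementation.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal Lloyd's K-means with farthest-point initialization.
    Returns (labels, centers) for data X of shape (n, d)."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(1, k):
        # Farthest-point init: next center is the point farthest from
        # all centers chosen so far (a simple k-means++-style variant).
        d2 = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d2)])
    centers = np.array(centers)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        # Update step: each centroid moves to the mean of its points.
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

On two well-separated blobs this converges in a few iterations; EM replaces the hard argmin assignment with posterior responsibilities and the centroid update with weighted mean/covariance estimates.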
In this paper, logistic regression analysis is applied to the EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1415796-prediction-novel-stable-fe-si-ternary-phase','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1415796-prediction-novel-stable-fe-si-ternary-phase"><span>Prediction of novel stable Fe-V-Si ternary phase</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Nguyen, Manh Cuong; Chen, Chong; Zhao, Xin</p> <p></p> <p>Genetic algorithm searches based on a cluster expansion model are performed to search for stable phases of the Fe-V-Si ternary system. Here, we identify a new thermodynamically, dynamically and mechanically stable ternary phase of Fe5V2Si with 2 formula units in a tetragonal unit cell. The formation energy of this new ternary phase is -36.9 meV/atom below the current ternary convex hull. The magnetic moment of Fe in the new structure varies from -0.30 to 2.52 μB, depending strongly on the number of Fe nearest neighbors. 
The total magnetic moment is 10.44 μB/unit cell for the new Fe5V2Si structure and the system is ordinarily metallic.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1415796-prediction-novel-stable-fe-si-ternary-phase','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1415796-prediction-novel-stable-fe-si-ternary-phase"><span>Prediction of novel stable Fe-V-Si ternary phase</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Nguyen, Manh Cuong; Chen, Chong; Zhao, Xin; ...</p> <p>2018-10-28</p> <p>Genetic algorithm searches based on a cluster expansion model are performed to search for stable phases of the Fe-V-Si ternary system. Here, we identify a new thermodynamically, dynamically and mechanically stable ternary phase of Fe5V2Si with 2 formula units in a tetragonal unit cell. The formation energy of this new ternary phase is -36.9 meV/atom below the current ternary convex hull. The magnetic moment of Fe in the new structure varies from -0.30 to 2.52 μB, depending strongly on the number of Fe nearest neighbors. The total magnetic moment is 10.44 μB/unit cell for the new Fe5V2Si structure and the system is ordinarily metallic.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AcAau.102..103Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AcAau.102..103Y"><span>Directional passability and quadratic steering logic for pyramid-type single gimbal control moment gyros</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yamada, Katsuhiko; Jikuya, Ichiro</p> <p>2014-09-01</p> <p>Singularity analysis and the steering logic of pyramid-type single gimbal control moment gyros are studied. 
First, a new concept of directional passability in a specified direction is introduced to investigate the structure of an elliptic singular surface. The differences between passability and directional passability are discussed in detail and are visualized for 0H, 2H, and 4H singular surfaces. Second, quadratic steering logic (QSL), a new steering logic for passing the singular surface, is investigated. The algorithm is based on the quadratically constrained quadratic optimization problem and is reduced to the Newton method by using Gröbner bases. The proposed steering logic is demonstrated through numerical simulations for both constant torque maneuvering examples and attitude control examples.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940024155','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940024155"><span>Steady pressure measurements on an Aeroelastic Research Wing (ARW-2)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Sandford, Maynard C.; Seidel, David A.; Eckstrom, Clinton V.</p> <p>1994-01-01</p> <p>Transonic steady and unsteady pressure tests have been conducted in the Langley transonic dynamics tunnel on a large elastic wing known as the DAST ARW-2. The wing has a supercritical airfoil, an aspect ratio of 10.3, a leading-edge sweepback angle of 28.8 degrees, and two inboard and one outboard trailing-edge control surfaces. Only the outboard control surface was deflected to generate steady and unsteady flow over the wing during this study. Only the steady surface pressure, control-surface hinge moment, wing-tip deflection, and wing-root bending moment measurements are presented.
The results from this elastic wing test are presented in tabulated form to assist in calibrating advanced computational fluid dynamics (CFD) algorithms.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_14 --> <div id="page_15" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="281"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20703901','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20703901"><span>Using Elman recurrent neural networks with conjugate gradient algorithm in determining the amount of anesthetic medicine to be applied.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Güntürkün, Rüştü</p> <p>2010-08-01</p> <p>In this study, Elman recurrent neural networks have been defined by using
the conjugate gradient algorithm in order to determine the depth of anesthesia in the continuation stage of anesthesia and to estimate the amount of medicine to be applied at that moment. Feed-forward neural networks are also used for comparison. The conjugate gradient algorithm is compared with back propagation (BP) for training the neural networks. The applied artificial neural network is composed of three layers: the input layer, the hidden layer, and the output layer. The nonlinear sigmoid activation function has been used in both the hidden layer and the output layer. EEG data were recorded with a Nihon Kohden 9200 22-channel EEG device. The international 8-channel bipolar 10-20 montage system (8 TB-b system) was used in placing the recording electrodes, and the EEG was sampled once every 2 milliseconds. The network has 60 neurons in the input layer, 30 neurons in the hidden layer, and 1 neuron in the output layer. Its inputs are the power spectral density (PSD) values of 10-second EEG segments in the 1-50 Hz frequency range, together with the ratio of the total PSD power of the EEG segment at a given moment in that range to the total PSD power of an EEG segment recorded prior to anesthesia.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2002APS..CCP.B2001T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2002APS..CCP.B2001T"><span>Prospects for Finite-Difference Time-Domain (FDTD) Computational Electrodynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Taflove, Allen</p> <p>2002-08-01</p> <p>FDTD is the most powerful numerical solution of Maxwell's equations for structures having internal details.
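The 60-30-1 Elman architecture described in the anesthesia-depth abstract above can be sketched as follows. The layer sizes and sigmoid activations come from the abstract; the weight initialization, class name, and input vector are illustrative placeholders, and conjugate-gradient training is omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ElmanNet:
    """Minimal Elman recurrent network: input -> hidden (with context units) -> output."""
    def __init__(self, n_in=60, n_hidden=30, n_out=1, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.W_ctx = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
        self.W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.context = np.zeros(n_hidden)  # copy of the previous hidden state

    def step(self, x):
        # Hidden state depends on the current input and the context (recurrence)
        h = sigmoid(self.W_in @ x + self.W_ctx @ self.context)
        self.context = h
        return sigmoid(self.W_out @ h)

net = ElmanNet()
y = net.step(np.ones(60))  # one 60-feature PSD input vector per time step
```

Feeding successive PSD feature vectors through `step` lets the context layer carry information between EEG segments, which is what distinguishes the Elman network from the feed-forward baseline the study compares against.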
Relative to moment-method and finite-element techniques, FDTD can accurately model such problems with 100 times more field unknowns and with nonlinear and/or time-variable parameters. Hundreds of FDTD theory and applications papers are published each year. Currently, there are at least 18 commercial FDTD software packages for solving problems in: defense (especially vulnerability to electromagnetic pulse and high-power microwaves); design of antennas and microwave devices/circuits; electromagnetic compatibility; bioelectromagnetics (especially assessment of cellphone-generated RF absorption in human tissues); signal integrity in computer interconnects; and design of micro-photonic devices (especially photonic bandgap waveguides, microcavities, and lasers). This paper explores emerging prospects for FDTD computational electromagnetics brought about by continuing advances in computer capabilities and FDTD algorithms. We conclude that advances already in place point toward the usage by 2015 of ultralarge-scale (up to 10^11 field unknowns) FDTD electromagnetic wave models covering the frequency range from about 0.1 Hz to 10^17 Hz.
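For context, the field-update kernel underlying FDTD is the leapfrog Yee scheme; a minimal 1-D free-space sketch (normalized units and a Gaussian soft source, not taken from the paper) is:

```python
import numpy as np

# 1-D FDTD in normalized units (c = 1, dx = 1), Courant number S = 0.5 (stable for S <= 1)
nx, nt, S = 200, 300, 0.5
ez = np.zeros(nx)        # electric field at integer grid points
hy = np.zeros(nx - 1)    # magnetic field at half-integer grid points (staggered Yee grid)

for t in range(nt):
    hy += S * np.diff(ez)                            # H update from the curl of E
    ez[1:-1] += S * np.diff(hy)                      # E update from the curl of H
    ez[nx // 2] += np.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source at the center
```

Each time step touches every field unknown once, which is why the abstract's projected problem sizes scale directly with available memory and floating-point throughput.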
We expect that this will yield significant benefits for our society in areas as diverse as computing, telecommunications, defense, and public health and safety.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PPCF...59a4044E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PPCF...59a4044E"><span>Runaway electron generation and control</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Esposito, B.; Boncagni, L.; Buratti, P.; Carnevale, D.; Causa, F.; Gospodarczyk Martin-Solis, M., Jr.; Popovic, Z.; Agostini, M.; Apruzzese, G.; Bin, W.; Cianfarani, C.; De Angelis, R.; Granucci, G.; Grosso, A.; Maddaluno, G.; Marocco, D.; Piergotti, V.; Pensa, A.; Podda, S.; Pucella, G.; Ramogida, G.; Rocchi, G.; Riva, M.; Sibio, A.; Sozzi, C.; Tilia, B.; Tudisco, O.; Valisa, M.; FTU Team</p> <p>2017-01-01</p> <p>We present an overview of FTU experiments on runaway electron (RE) generation and control carried out through a comprehensive set of real-time (RT) diagnostics/control systems and newly installed RE diagnostics. An RE imaging spectrometer system detects visible and infrared synchrotron radiation. A Cherenkov probe measures RE escaping the plasma. A gamma camera provides hard x-ray radial profiles from RE bremsstrahlung interactions in the plasma. Experiments on the onset and suppression of RE show that the threshold electric field for RE generation is larger than that expected according to a purely collisional theory, but consistent with an increase due to synchrotron radiation losses. This might imply a lower density to be targeted with massive gas injection for RE suppression in ITER. 
Experiments on active control of disruption-generated RE have been performed through feedback on poloidal coils by implementing an RT boundary-reconstruction algorithm evaluated from magnetic moments. The results indicate that the slow plasma current ramp-down and the simultaneous reduction of the reference plasma external radius are beneficial in dissipating the RE beam energy and population, leading to reduced RE interactions with plasma facing components. RE active control is therefore suggested as a possible alternative or complementary technique to massive gas injection.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29734764','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29734764"><span>Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya</p> <p>2018-05-05</p> <p>This paper proposes a novel filter design, taking an identification viewpoint instead of the conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, a nonlinear perturbation is viewed or modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Then, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation maximization (EM) framework, is proposed to simply and analytically fit or identify the first two moments (FTM) of the perturbation (viewed as the UI), instead of directly computing the INPI as NESs do. Orbit estimation performance is greatly improved by utilizing the fitted UI-FTM to simultaneously correct the state estimate and its covariance.
Third, because it mines more of the available information, SMCCF should outperform both existing NESs and standard identification algorithms (which view the UI as a constant independent of the state and use only the identified UI mean to correct the state estimate, ignoring its covariance), since it further incorporates the covariance of the UI in addition to its mean. Finally, our simulations demonstrate the superior performance of SMCCF via an orbit estimation example.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27914174','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27914174"><span>Mathematical detection of aortic valve opening (B point) in impedance cardiography: A comparison of three popular algorithms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Árbol, Javier Rodríguez; Perakakis, Pandelis; Garrido, Alba; Mata, José Luis; Fernández-Santaella, M Carmen; Vila, Jaime</p> <p>2017-03-01</p> <p>The preejection period (PEP) is an index of left ventricle contractility widely used in psychophysiological research. Its computation requires detecting the moment when the aortic valve opens, which coincides with the B point in the first derivative of the impedance cardiogram (ICG). Although this operation has traditionally been performed via visual inspection, several algorithms based on derivative calculations have been developed to enable automatic performance of the task. However, despite their popularity, data about their empirical validation are not always available. The present study analyzes how well three popular algorithms estimate the aortic valve opening, comparing their output with the visual detection of the B point made by two independent scorers.
Algorithm 1 is based on the first derivative of the ICG, Algorithm 2 on the second derivative, and Algorithm 3 on the third derivative. Algorithm 3 showed the highest accuracy rate (78.77%), followed by Algorithm 1 (24.57%) and Algorithm 2 (13.82%). In the automatic computation of PEP, Algorithm 2 resulted in significantly more missed cycles (48.57%) than Algorithm 1 (6.3%) and Algorithm 3 (3.5%). Algorithm 2 also estimated a significantly lower average PEP (70 ms), compared with the values obtained by Algorithm 1 (119 ms) and Algorithm 3 (113 ms). Our findings indicate that the algorithm based on the third derivative of the ICG performs significantly better. Nevertheless, a visual inspection of the signal proves indispensable, and this article provides a novel visual guide to facilitate the manual detection of the B point. © 2016 Society for Psychophysiological Research.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2002RaSc...37.1019T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2002RaSc...37.1019T"><span>Some issues related to the novel spectral acceleration method for the fast computation of radiation/scattering from one-dimensional extremely large scale quasi-planar structures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Torrungrueng, Danai; Johnson, Joel T.; Chou, Hsi-Tseng</p> <p>2002-03-01</p> <p>The novel spectral acceleration (NSA) algorithm has been shown to produce an O(Ntot)-efficient iterative method of moments for the computation of radiation/scattering from both one-dimensional (1-D) and two-dimensional large-scale quasi-planar structures, where Ntot is the total number of unknowns to be solved.
This method accelerates the matrix-vector multiplication in an iterative method of moments solution and divides contributions between points into "strong" (exact matrix elements) and "weak" (NSA algorithm) regions. The NSA method is based on a spectral representation of the electromagnetic Green's function and appropriate contour deformation, resulting in a fast multipole-like formulation in which contributions from large numbers of points to a single point are evaluated simultaneously. In the standard NSA algorithm the NSA parameters are derived on the basis of the assumption that the outermost possible saddle point, φs,max, along the real axis in the complex angular domain is small. For given height variations of quasi-planar structures, this assumption can be satisfied by adjusting the size of the strong region Ls. However, for quasi-planar structures with large height variations, the adjusted size of the strong region is typically large, resulting in significant increases in computational time for the computation of the strong-region contribution and degrading the overall efficiency of the NSA algorithm. In addition, for the case of extremely large scale structures, studies based on the physical optics approximation and a flat surface assumption show that the given NSA parameters in the standard NSA algorithm may yield inaccurate results. In this paper, analytical formulas associated with the NSA parameters for an arbitrary value of φs,max are presented, resulting in more flexibility in selecting Ls to compromise between the computation of the contributions of the strong and weak regions.
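The strong/weak splitting described above can be illustrated schematically: interactions within a strong-region radius Ls are applied with exact matrix elements, while the remainder would be handled by the accelerated weak-region operator. All names and the kernel here are illustrative, and the weak part is evaluated directly only to keep the sketch runnable; it is not the NSA spectral formulation:

```python
import numpy as np

def split_matvec(points, x, Ls, kernel):
    """Schematic strong/weak decomposition of y = A @ x, where A[i, j] = kernel(r_ij).

    Strong region (r <= Ls): exact matrix elements.
    Weak region (r > Ls): in NSA this would be the spectrally accelerated operator.
    """
    r = np.abs(points[:, None] - points[None, :])   # pairwise distances (1-D points)
    A = kernel(r)
    np.fill_diagonal(A, 0.0)                        # skip self-interaction terms
    strong = np.where(r <= Ls, A, 0.0)              # exact near-field block
    weak = np.where(r > Ls, A, 0.0)                 # far-field block (placeholder)
    return strong @ x + weak @ x                    # together they reproduce A @ x

pts = np.linspace(0.0, 9.0, 10)
x = np.ones(10)
y = split_matvec(pts, x, Ls=2.0, kernel=lambda r: 1.0 / (1.0 + r))
```

The trade-off the abstract discusses is visible here: enlarging Ls moves more pairs into the exactly evaluated strong block, improving accuracy for rough surfaces but pushing the cost back toward the dense O(Ntot²) product.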
In addition, a "multilevel" algorithm, decomposing 1-D extremely large scale quasi-planar structures into more than one weak region and appropriately choosing the NSA parameters for each weak region, is incorporated into the original NSA method to improve its accuracy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29615852','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29615852"><span>Comparison of Visually Guided Flight in Insects and Birds.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Altshuler, Douglas L; Srinivasan, Mandyam V</p> <p>2018-01-01</p> <p>Over the last half century, work with flies, bees, and moths has revealed a number of visual guidance strategies for controlling different aspects of flight. Some algorithms, such as the use of pattern velocity in forward flight, are employed by all insects studied so far, and are used to control multiple flight tasks such as regulation of speed, measurement of distance, and positioning through narrow passages. Although much attention has been devoted to long-range navigation and homing in birds, until recently, very little was known about how birds control flight in a moment-to-moment fashion. A bird that flies rapidly through dense foliage to land on a branch, as birds often do, engages in a veritable three-dimensional slalom, in which it has to continually dodge branches and leaves, and find, and possibly even plan, a collision-free path to the goal in real time. Each mode of flight from take-off to goal could potentially involve a different visual guidance algorithm.
Here, we briefly review strategies for visual guidance of flight in insects, synthesize recent work from short-range visual guidance in birds, and offer a general comparison between the two groups of organisms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22894207','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22894207"><span>Ambient noise imaging in warm shallow waters; robust statistical algorithms and range estimation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chitre, Mandar; Kuselan, Subash; Pallayil, Venugopalan</p> <p>2012-08-01</p> <p>The high frequency ambient noise in warm shallow waters is dominated by snapping shrimp. The loud snapping noises they produce are impulsive and broadband. As the noise propagates through the water, it interacts with the seabed, sea surface, and submerged objects. An array of acoustic pressure sensors can produce images of the submerged objects using this noise as the source of acoustic "illumination." This concept is called ambient noise imaging (ANI) and was demonstrated using ADONIS, an ANI camera developed at the Scripps Institution of Oceanography. To overcome some of the limitations of ADONIS, a second generation ANI camera (ROMANIS) was developed at the National University of Singapore. The acoustic time series recordings made by ROMANIS during field experiments in Singapore show that the ambient noise is well modeled by a symmetric α-stable (SαS) distribution. As high-order moments of SαS distributions generally do not converge, ANI algorithms based on low-order moments and fractiles are developed and demonstrated. By localizing nearby snaps and identifying the echoes from an object, the range to the object can be passively estimated. 
This technique is also demonstrated using the data collected with ROMANIS.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70027680','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70027680"><span>Can we estimate total magnetization directions from aeromagnetic data using Helbig's integrals?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Phillips, J.D.</p> <p>2005-01-01</p> <p>An algorithm that implements Helbig's (1963) integrals for estimating the vector components (mx, my, mz) of the magnetic dipole moment from the first order moments of the vector magnetic field components (ΔX, ΔY, ΔZ) is tested on real and synthetic data. After a grid of total field aeromagnetic data is converted to vector component grids using Fourier filtering, Helbig's infinite integrals are evaluated as finite integrals in small moving windows using a quadrature algorithm based on the 2-D trapezoidal rule. Prior to integration, best-fit planar surfaces must be removed from the component data within the data windows in order to make the results independent of the coordinate system origin. Two different approaches are described for interpreting the results of the integration. In the "direct" method, results from pairs of different window sizes are compared to identify grid nodes where the angular difference between solutions is small. These solutions provide valid estimates of total magnetization directions for compact sources such as spheres or dipoles, but not for horizontally elongated or 2-D sources.
In the "indirect" method, which is more forgiving of source geometry, results of the quadrature analysis are scanned for solutions that are parallel to a specified total magnetization direction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvM...2e4413R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvM...2e4413R"><span>Canted antiferromagnetism in phase-pure CuMnSb</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Regnat, A.; Bauer, A.; Senyshyn, A.; Meven, M.; Hradil, K.; Jorba, P.; Nemkovski, K.; Pedersen, B.; Georgii, R.; Gottlieb-Schönmeyer, S.; Pfleiderer, C.</p> <p>2018-05-01</p> <p>We report the low-temperature properties of phase-pure single crystals of the half-Heusler compound CuMnSb grown by means of optical float zoning. The magnetization, specific heat, electrical resistivity, and Hall effect of our single crystals exhibit an antiferromagnetic transition at TN = 55 K and a second anomaly at a temperature T* ≈ 34 K. Powder and single-crystal neutron diffraction establish an ordered magnetic moment of (3.9 ± 0.1) μB/f.u., consistent with the effective moment inferred from the Curie-Weiss dependence of the susceptibility. Below TN, the Mn sublattice displays commensurate type-II antiferromagnetic order with propagation vectors and magnetic moments along <111> (magnetic space group R[I]3c). Surprisingly, below T*, the moments tilt away from <111> by a finite angle δ ≈ 11°, forming a canted antiferromagnetic structure without uniform magnetization, consistent with magnetic space group C[B]c.
Our results establish that type-II antiferromagnetism is not the zero-temperature magnetic ground state of CuMnSb, as might be expected of the face-centered cubic Mn sublattice.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1227300-magnetic-structure-eucu2sb2','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1227300-magnetic-structure-eucu2sb2"><span>The magnetic structure of EuCu2Sb2</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Ryan, D. H.; Cadogan, J. M.; Anand, V. K.; ...</p> <p>2015-05-06</p> <p>Antiferromagnetic ordering of EuCu2Sb2, which forms in the tetragonal CaBe2Ge2-type structure (space group P4/nmm, #129), has been studied using neutron powder diffraction and 151Eu Mössbauer spectroscopy. The room temperature 151Eu isomer shift of –12.8(1) mm/s shows the Eu to be divalent, while the 151Eu hyperfine magnetic field (Bhf) reaches 28.7(2) T at 2.1 K, indicating a full Eu2+ magnetic moment. Bhf(T) follows a smooth S = 7/2 Brillouin function and yields an ordering temperature of 5.1(1) K. Refinement of the neutron diffraction data reveals a collinear A-type antiferromagnetic arrangement with the Eu moments perpendicular to the tetragonal c-axis.
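For reference, the S = 7/2 Brillouin function that Bhf(T) is reported to follow has the standard form, where g is the spectroscopic splitting factor, B the field, and x collects the field and temperature dependence:

```latex
B_J(x) = \frac{2J+1}{2J}\,\coth\!\left(\frac{(2J+1)\,x}{2J}\right)
       - \frac{1}{2J}\,\coth\!\left(\frac{x}{2J}\right),
\qquad x = \frac{g\,J\,\mu_B B}{k_B T},
\qquad J = S = \tfrac{7}{2}.
```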
As a result, the refined Eu magnetic moment at 0.4 K is 7.08(15) μB, which is the full free-ion moment expected for the Eu2+ ion with S = 7/2 and a spectroscopic splitting factor of g = 2.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1998PhRvB..5713681S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1998PhRvB..5713681S"><span>Magnetic moments, coupling, and interface interdiffusion in Fe/V(001) superlattices</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Schwickert, M. M.; Coehoorn, R.; Tomaz, M. A.; Mayo, E.; Lederman, D.; O'brien, W. L.; Lin, Tao; Harp, G. R.</p> <p>1998-06-01</p> <p>Epitaxial Fe/V(001) multilayers are studied both experimentally and by theoretical calculations. Sputter-deposited epitaxial films are characterized by x-ray diffraction, magneto-optical Kerr effect, and x-ray magnetic circular dichroism. These results are compared with first-principles calculations modeling different amounts of interface interdiffusion. The exchange coupling across the V layers is observed to oscillate, with antiferromagnetic peaks near the V layer thicknesses tV ~ 22, 32, and 42 Å. For all films including superlattices and alloys, the average V magnetic moment is antiparallel to that of Fe. The average V moment increases slightly with increasing interdiffusion at the Fe/V interface. Calculations modeling mixed interface layers and measurements indicate that all V atoms are aligned with one another for tV ≲ 15 Å, although the magnitude of the V moment decays toward the center of the layer. This "transient ferromagnetic" state arises from direct (d-d) exchange coupling between V atoms in the layer.
It is argued that the transient ferromagnetism suppresses the first antiferromagnetic coupling peak between Fe layers, expected to occur at tV ~ 12 Å.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1257509','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1257509"><span>Weak hybridization and isolated localized magnetic moments in the compounds CeT2Cd20 (T = Ni, Pd)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>White, B. D.; Yazici, D.; Ho, P.-C.</p> <p>2015-07-20</p> <p>Here, we report the physical properties of single crystals of the compounds CeT2Cd20 (T = Ni, Pd) that were grown in a molten Cd flux. Large separations of ~6.7-6.8 Å between Ce ions favor the localized magnetic moments that are observed in measurements of the magnetization. The strength of the Ruderman-Kittel-Kasuya-Yosida magnetic exchange interaction between the localized moments is severely limited by the large Ce-Ce separations and by weak hybridization between localized Ce 4f and itinerant electron states. Measurements of electrical resistivity performed down to 0.138 K were unable to observe evidence for the emergence of magnetic order; however, magnetically-ordered ground states with very low transition temperatures are still expected in these compounds despite the isolated nature of the localized magnetic moments.
Such a fragile magnetic order could be highly susceptible to tuning via applied pressure, but evidence for the emergence of magnetic order has not been observed so far in our measurements up to 2.5 GPa.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018InvPr..34c4003A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018InvPr..34c4003A"><span>Dynamic discrete tomography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Alpers, Andreas; Gritzmann, Peter</p> <p>2018-03-01</p> <p>We consider the problem of reconstructing the paths of a set of points over time, where, at each of a finite set of moments in time, the current positions of points in space are only accessible through some small number of their x-rays. This particular particle tracking problem, with applications, e.g. in plasma physics, is the basic problem in dynamic discrete tomography. We introduce and analyze various different algorithmic models. In particular, we determine the computational complexity of the problem (and various of its relatives) and derive algorithms that can be used in practice. As a byproduct we provide new results on constrained variants of min-cost flow and matching problems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.894a2041K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.894a2041K"><span>Solution of internal ballistic problem for SRM with grain of complex shape during main firing phase</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kiryushkin, A. E.; Minkov, L.
L.</p> <p>2017-10-01</p> <p>Solid rocket motor (SRM) internal ballistics problems are related to problems with moving boundaries. An algorithm able to solve such problems in an axisymmetric formulation on a Cartesian mesh with an arbitrary order of accuracy is considered in this paper. The basis of this algorithm is ghost-point extrapolation using the inverse Lax-Wendroff procedure. The level set method is used as an implicit representation of the domain boundary. As an example, the internal ballistics problem for an SRM with an umbrella-type grain was solved during the main firing phase. In addition, the flow parameter distribution in the combustion chamber was obtained for different time moments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017OIDP...53..364N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017OIDP...53..364N"><span>Synchronization algorithm for three-phase voltages of an inverter and a grid</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nos, O. V.</p> <p>2017-07-01</p> <p>This paper presents the results of designing a joint phase-locked loop for adjusting the phase shifts (speed) and Euclidean norm of three-phase voltages of an inverter to the same grid parameters. The design can be used, in particular, to match the potentials of two parallel-connected power sources for the fundamental harmonic at the moments of switching the stator windings of an induction AC motor from a converter to a centralized power-supply system and back.
Technical implementation of the developed synchronization algorithm will significantly reduce the inductance of the current-balancing reactor and exclude emergency operation modes in the electric motor power circuit.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3231614','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3231614"><span>ECS: Efficient Communication Scheduling for Underwater Sensor Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Hong, Lu; Hong, Feng; Guo, Zhongwen; Li, Zhengbao</p> <p>2011-01-01</p> <p>TDMA protocols have attracted a lot of attention for underwater acoustic sensor networks (UWSNs), because of the unique characteristics of acoustic signal propagation such as great energy consumption in transmission, long propagation delay and long communication range. Previous TDMA protocols all allocated transmission time to nodes based on discrete time slots. This paper proposes an efficient continuous time scheduling TDMA protocol (ECS) for UWSNs, including the continuous time based and sender oriented conflict analysis model, the transmission moment allocation algorithm and the distributed topology maintenance algorithm. Simulation results confirm that ECS improves network throughput by 20% on average, compared to existing MAC protocols. 
PMID:22163775</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70012749','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70012749"><span>Maximum likelihood estimation for periodic autoregressive moving average models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Vecchia, A.V.</p> <p>1985-01-01</p> <p>A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1996SPIE.2761...76L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1996SPIE.2761...76L"><span>Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Ti C.; Mitra, Sunanda</p> <p>1996-06-01</p> <p>Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. 
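The seasonal Yule-Walker moment estimation that the PARMA entry above compares against can be illustrated with a deliberately simplified toy: a periodic AR(1) model x_t = φ_{t mod S} x_{t-1} + e_t, where each season's coefficient is a ratio of seasonal lag-1 to lag-0 sample moments. The model order (1) and the season count S are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Toy "seasonal Yule-Walker" moment estimator for a periodic AR(1)
# model x_t = phi_{t mod S} * x_{t-1} + e_t. A simplified stand-in for
# the PARMA moment estimators discussed above.
def par1_yule_walker(x, S):
    x = np.asarray(x, dtype=float)
    t = np.arange(1, len(x))
    phi = np.zeros(S)
    for s in range(S):
        mask = (t % S) == s                        # observations in season s
        num = np.sum(x[t[mask]] * x[t[mask] - 1])  # seasonal lag-1 moment
        den = np.sum(x[t[mask] - 1] ** 2)          # seasonal lag-0 moment
        phi[s] = num / den
    return phi
```

Each φ̂_s is consistent under periodic stationarity, which is why such moment estimates serve as a baseline (or starting value) for likelihood maximization.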
This work mainly focuses on wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, an entropy and run-length encoder/decoder, and K-means clustering of the invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high compression ratio region and that the reconstructed fingerprint image yields proper classification.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25873907','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25873907"><span>The stream of experience when watching artistic movies. Dynamic aesthetic effects revealed by the Continuous Evaluation Procedure (CEP).</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Muth, Claudia; Raab, Marius H; Carbon, Claus-Christian</p> <p>2015-01-01</p> <p>Research in perception and appreciation is often focused on snapshots, stills of experience. Static approaches allow for multidimensional assessment, but are unable to catch the crucial dynamics of affective and perceptual processes; for instance, aesthetic phenomena such as the "Aesthetic-Aha" (the increase in liking after the sudden detection of Gestalt), effects of expectation, or Berlyne's idea that "disorientation" with a "promise of success" elicits interest.
We conducted empirical studies on indeterminate artistic movies depicting the evolution and metamorphosis of Gestalt and investigated (i) the effects of sudden perceptual insights on liking, that is, "Aesthetic-Aha" effects, (ii) the dynamics of interest before moments of insight, and (iii) the dynamics of complexity before and after moments of insight. Via the so-called Continuous Evaluation Procedure (CEP), which enables analogue evaluation in a continuous way, participants assessed the material on two aesthetic dimensions blockwise, either in a gallery or in a laboratory. The material's inherent dynamics were described via assessments of liking, interest, determinacy, and surprise, along with a computational analysis of the variable complexity. We identified moments of insight as peaks in determinacy and surprise. Statistically significant changes in liking and interest demonstrated that: (i) insights increase liking, (ii) interest already increases 1500 ms before such moments of insight, supporting the idea that it is evoked by an expectation of understanding, and (iii) insights occur during increasing complexity. We propose a preliminary model of dynamics in liking and interest with regard to complexity and perceptual insight and discuss descriptions of participants' experiences of insight.
Our results point to the importance of systematic analyses of dynamics in art perception and appreciation.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_15 --> <div id="page_16" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="301"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoJI.213.2147M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoJI.213.2147M"><span>Moment tensor inversion with three-dimensional sensor configuration of mining induced seismicity (Kiruna mine, Sweden)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ma, Ju; Dineva, Savka; Cesca, Simone; Heimann, Sebastian</p> <p>2018-06-01</p> <p>Mining induced seismicity is an undesired consequence of mining
operations, which poses a significant hazard to miners and infrastructures and requires an accurate analysis of the rupture process. Seismic moment tensors of mining-induced events help to understand the nature of mining-induced seismicity by providing information about the relationship between the mining, stress redistribution and instabilities in the rock mass. In this work, we adapt and test a waveform-based inversion method on high frequency data recorded by a dense underground seismic system in one of the largest underground mines in the world (Kiruna mine, Sweden). Designing a stable moment tensor inversion algorithm for comparatively small mining-induced earthquakes, resolving both the double-couple and full moment tensor with high frequency data, is very challenging. Moreover, the application to an underground mining system requires accounting for the 3-D geometry of the monitoring system. We construct a Green's function database using a homogeneous velocity model, but assuming a 3-D distribution of potential sources and receivers. We first perform a set of moment tensor inversions using synthetic data to test the effects of different factors on moment tensor inversion stability and source parameter accuracy, including the network spatial coverage, the number of sensors and the signal-to-noise ratio. The influence of the accuracy of the input source parameters on the inversion results is also tested. These tests show that an accurate selection of the inversion parameters allows resolving the moment tensor also in the presence of realistic seismic noise conditions. Finally, the moment tensor inversion methodology is applied to eight events chosen from mining block #33/34 at Kiruna mine. Source parameters including scalar moment, magnitude, double-couple, compensated linear vector dipole and isotropic contributions as well as the strike, dip and rake configurations of the double-couple term were obtained.
The orientations of the nodal planes of the double-couple component in most cases vary from NNW to NNE with a dip along the ore body or in the opposite direction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008epsc.conf..369R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008epsc.conf..369R"><span>Improvements on the interior structure of Mercury expected from geodesy measurements</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rivoldini, A.; van Hoolst, T.; Verhoeven, O.</p> <p>2008-09-01</p> <p>We assess the improvements on the interior structure of Mercury provided by expected data from geodesy experiments to be performed with the MESSENGER and BepiColombo orbiters. The observation of obliquity will allow estimating the moment of inertia, whereas measurements of libration will determine the moment of inertia of the silicate shell (mantle and crust).
Tidal measurements will constrain the Love numbers that characterize the response of Mercury to the solar tidal forcing. Here, we construct depth-dependent interior structure models of Mercury for several plausible chemical compositions of the core and of the mantle using recent data on core and mantle materials. In particular we study the core structure for different mantle mineralogies and two different temperature profiles. We investigate the influence of the core light element concentration, temperature, and melting law on core state and inner core size. We compute libration amplitude, obliquity, tidal deformation, and tidal changes in the external potential for our models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MApFl...6a5008L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MApFl...6a5008L"><span>The multi-resolution capability of Tchebichef moments and its applications to the analysis of fluorescence excitation-emission spectra</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Bao Qiong; Wang, Xue; Li Xu, Min; Zhai, Hong Lin; Chen, Jing; Liu, Jin Jin</p> <p>2018-01-01</p> <p>Fluorescence spectroscopy with an excitation-emission matrix (EEM) is a fast and inexpensive technique and has been applied to the detection of a very wide range of analytes. However, serious scattering and overlapping signals hinder the applications of EEM spectra. In this contribution, the multi-resolution capability of Tchebichef moments was investigated in depth and applied to the analysis of two EEM data sets (data set 1 consisted of valine-tyrosine-valine, tryptophan-glycine and phenylalanine, and data set 2 included vitamin B1, vitamin B2 and vitamin B6) for the first time. 
By means of Tchebichef moments of different orders, different information in the EEM spectra can be represented. Owing to this multi-resolution capability, the overlapping problem was solved and the information from chemicals and from scattering was separated. The results obtained demonstrate that the Tchebichef moment method is very effective and provides a promising tool for the analysis of EEM spectra. It is expected that applications of the Tchebichef moment method could be developed and extended to complex systems such as biological fluids, food and the environment, to deal with practical problems (overlapping peaks, unknown interferences, baseline drifts, and so on) in other spectra.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012JChPh.136q4309C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012JChPh.136q4309C"><span>Ab initio calculation of the rotational spectrum of methane vibrational ground state</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cassam-Chenaï, P.; Liévin, J.</p> <p>2012-05-01</p> <p>In a previous article we introduced an alternative perturbation scheme to the traditional one starting from the harmonic oscillator, rigid rotator Hamiltonian, to find approximate solutions of the spectral problem for rotation-vibration molecular Hamiltonians. The convergence of our method for the methane vibrational ground state rotational energy levels was quicker than that of the traditional method, as expected, and our predictions were quantitative. In this second article, we study the convergence of the ab initio calculation of effective dipole moments for methane within the same theoretical frame.
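The multi-resolution idea behind discrete orthogonal moments, as used in the Tchebichef entry above, can be sketched compactly. For brevity this sketch builds the orthonormal polynomial basis on {0, ..., N-1} by QR factorization of a Vandermonde matrix instead of the Tchebichef three-term recurrence; both span the same low-order subspaces, so low-order moments capture the smooth part of a 2-D spectrum while higher orders add detail. Function names and array sizes are illustrative:

```python
import numpy as np

# Orthonormal discrete polynomials on {0..N-1} via QR of a Vandermonde
# matrix (a convenience substitute for the Tchebichef recurrence).
def discrete_ortho_basis(N, max_order):
    V = np.vander(np.arange(N, dtype=float), max_order + 1, increasing=True)
    Q, _ = np.linalg.qr(V)  # columns: orthonormal polynomials, degree 0..max_order
    return Q

def moments_2d(F, max_order):
    # T[p, q]: order-(p, q) moment of the 2-D array F (e.g. an EEM spectrum)
    Qx = discrete_ortho_basis(F.shape[0], max_order)
    Qy = discrete_ortho_basis(F.shape[1], max_order)
    return Qx.T @ F @ Qy

def reconstruct(T, shape):
    # Projecting back from low-order moments gives a smooth approximation;
    # raising max_order restores progressively finer structure.
    Qx = discrete_ortho_basis(shape[0], T.shape[0] - 1)
    Qy = discrete_ortho_basis(shape[1], T.shape[1] - 1)
    return Qx @ T @ Qy.T
```

A surface that is polynomial of low degree is recovered exactly from its low-order moments, which is the separation property the abstract exploits.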
The first order of perturbation, when applied to the electric dipole moment operator of a spherical top, gives the expression used in previous spectroscopic studies. Higher orders of perturbation give corrections corresponding to higher centrifugal distortion contributions and are calculated accurately for the first time. Two potential energy surfaces from the literature have been used for solving the anharmonic vibrational problem by means of the vibrational mean field configuration interaction approach. Two corresponding dipole moment surfaces were calculated in this work at a high level of theory. The predicted intensities agree better with recent experimental values than their empirical fit. This suggests that our ab initio dipole moment surface and effective dipole moment operator are both highly accurate.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19760004893','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19760004893"><span>Summary of initial results from the GSFC fluxgate magnetometer on Pioneer 11</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Acuna, M. H.; Ness, N. F.</p> <p>1975-01-01</p> <p>The main magnetic field of Jupiter was measured by the Fluxgate Magnetometer on Pioneer 11, and analysis reveals it to be more complex than expected. In a centered spherical harmonic representation with a maximum order of n = 3 (designated GSFC model 04), the dipole term (with opposite polarity to the Earth's) has a moment of 4.28 Gauss x (Jupiter radius cubed), tilted by 9.6 deg towards a System III longitude of 232 deg. The quadrupole and octupole moments are significant, 24% and 21% of the dipole moment respectively, and this leads to deviations of the planetary magnetic field from a simple offset tilted dipole for distances smaller than three Jupiter radii.
The GSFC model shows a north polar field strength of 14 Gauss and a south polar field strength of 10.4 Gauss. Enhanced absorption effects in the radiation belts may be predicted as a result of field distortion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22654320-solar-cycle-another-moderate-cycle','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22654320-solar-cycle-another-moderate-cycle"><span>SOLAR CYCLE 25: ANOTHER MODERATE CYCLE?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Cameron, R. H.; Schüssler, M.; Jiang, J., E-mail: cameron@mps.mpg.de</p> <p>2016-06-01</p> <p>Surface flux transport simulations for the descending phase of Cycle 24 using random sources (emerging bipolar magnetic regions) with empirically determined scatter of their properties provide a prediction of the axial dipole moment during the upcoming activity minimum together with a realistic uncertainty range. The expectation value for the dipole moment around 2020 (2.5 ± 1.1 G) is comparable to that observed at the end of Cycle 23 (about 2 G). The empirical correlation between the dipole moment during solar minimum and the strength of the subsequent cycle thus suggests that Cycle 25 will be of moderate amplitude, not much higher than that of the current cycle.
However, the intrinsic uncertainty of such predictions resulting from the random scatter of the source properties is considerable and fundamentally limits the reliability with which such predictions can be made before activity minimum is reached.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1319201-spin-orbit-coupling-control-anisotropy-ground-state-frustration','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1319201-spin-orbit-coupling-control-anisotropy-ground-state-frustration"><span>Spin-orbit coupling control of anisotropy, ground state and frustration in 5d2 Sr2MgOsO6</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Morrow, Ryan; Taylor, Alice E.; Singh, D. J.; ...</p> <p>2016-08-30</p> <p>The influence of spin-orbit coupling (SOC) on the physical properties of the 5d2 system Sr2MgOsO6 is probed via a combination of magnetometry, specific heat measurements, elastic and inelastic neutron scattering, and density functional theory calculations. Although a significant degree of frustration is expected, we find that Sr2MgOsO6 orders in a type I antiferromagnetic structure at the remarkably high temperature of 108 K. The measurements presented allow for the first accurate quantification of the size of the magnetic moment in a 5d2 system, 0.60(2) μB, a significantly reduced moment from the expected value for such a system. Furthermore, significant anisotropy is identified via a spin excitation gap, and we confirm by first principles calculations that SOC not only provides the magnetocrystalline anisotropy, but also plays a crucial role in determining both the ground state magnetic order and the moment size in this compound. 
In conclusion, through comparison to Sr2ScOsO6, it is demonstrated that SOC-induced anisotropy has the ability to relieve frustration in 5d2 systems relative to their 5d3 counterparts, providing an explanation of the high TN found in Sr2MgOsO6.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010mss..confEMF12W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010mss..confEMF12W"><span>In Pursuit of the Far-Infrared Spectrum of Cyanogen Iso-Thiocyanate NCNCS, Under the Influence of the Energy Level Dislocation due to Quantum Monodromy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Winnewisser, Manfred; Winnewisser, Brenda P.; Medvedev, Ivan R.; De Lucia, Frank C.; Ross, Stephen C.; Koput, Jacek</p> <p>2010-06-01</p> <p>Quantum Monodromy has a strong impact on the ro-vibrational energy levels of chain molecules whose bending potential energy function has the form of the bottom of a champagne bottle (i.e. with a hump or punt) around the linear configuration. NCNCS is a particularly good example of such a molecule and clearly exhibits a distinctive monodromy-induced dislocation of the energy level pattern at the top of the potential energy hump. The generalized semi-rigid bender (GSRB) wave functions are used to show that the expectation values of any physical quantity which varies with the large amplitude bending coordinate will also have monodromy-induced dislocations. This includes the electric dipole moment components. High level ab initio calculations not only provided the molecular equilibrium structure of NCNCS, but also the electric dipole moment components μa and μb as functions of the large-amplitude bending coordinate.
The calculated expectation values of these quantities indicate large ro-vibrational transition moments that will be discussed in pursuit of possible far-infrared bands. To our knowledge there is no NCNCS infrared spectrum reported in the literature. B. P. Winnewisser, M. Winnewisser, I. R. Medvedev, F. C. De Lucia, S. C. Ross and J. Koput, Phys. Chem. Chem. Phys., 2010, DOI:10.1039/B922023B.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1999NuPhS..71..158G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1999NuPhS..71..158G"><span>Multiplicity distributions of gluon and quark jets and a test of QCD analytic calculations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gary, J. William</p> <p>1999-03-01</p> <p>Gluon jets are identified in e+e- hadronic annihilation events by tagging two quark jets in the same hemisphere of an event. The gluon jet is defined inclusively as all the particles in the opposite hemisphere. Gluon jets defined in this manner have a close correspondence to gluon jets as they are defined for analytic calculations, and are almost independent of a jet-finding algorithm. The mean and first few higher moments of the gluon jet charged particle multiplicity distribution are compared to the analogous results found for light quark (uds) jets, also defined inclusively. Large differences are observed between the mean, skew and kurtosis values of the gluon and quark jets, but not between their dispersions. The cumulant factorial moments of the distributions are also measured, and are used to test the predictions of QCD analytic calculations.
A calculation which includes next-to-next-to-leading order corrections and energy conservation is observed to provide a much improved description of the separated gluon and quark jet cumulant moments compared to a next-to-leading order calculation without energy conservation. There is good quantitative agreement between the data and calculations for the ratios of the cumulant moments between gluon and quark jets. The data sample used is the LEP-1 sample of the OPAL experiment at LEP.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006AtmRe..80..165C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006AtmRe..80..165C"><span>Analysis of the moments and parameters of a gamma DSD to infer precipitation properties: A convective stratiform discrimination algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Caracciolo, C.; Prodi, F.; Battaglia, A.; Porcu', F.</p> <p>2006-05-01</p> <p>Drop size distribution is a fundamental property of rainfall for two main reasons: the shape of the distribution reflects the physics of rain formation processes, and it is of basic importance in determining most parameters used in radar-meteorology. Therefore, several authors have proposed in the past different parameterizations for the drop size distribution (DSD). The present work focuses attention on the gamma DSD properties, assumed to be the most suitable for describing the observed DSD and its variability. The data set comprises about 3 years of data (2001-2004) for about 1900 min of rain, collected in Ferrara in the Po Valley (Northern Italy) by a Joss and Waldvogel (JW) disdrometer. 
A new method of moments to determine the three gamma DSD parameters is developed and tested; this method involves the fourth, fifth and sixth moments of the DSD, which are less sensitive to the underestimation of small drops in the JW disdrometer. The method has been validated by comparing the observed rainfall rates with those computed from the fitted distribution, using two classical expressions for the hydrometeor terminal velocity. The 1-min observed spectra of some representative events that occurred in Ferrara are also presented, showing that with sufficient averaging, the distribution for the Ferrara rainfall can be approximately described by a gamma distribution. The discrimination of convective and stratiform precipitation is also an issue of intense interest. Over the past years, several works have aimed to discriminate between these two precipitation categories on the basis of different instruments and techniques. The knowledge of the three gamma DSD parameters computed with the new method of moments is exploited to identify characteristic parameters that give quantitative and useful information on precipitation type and intensity. First, a key parameter derived from two of the gamma DSD parameters (m and Λ) is identified: the peak (or modal) diameter Dp, defined as m/Λ. A theoretical relationship between the m and Λ parameters is then derived, leading to a new convective/stratiform discrimination algorithm: in an m-Λ plot the line (1.635Λ - m) = 1 can be considered as the discriminator; stratiform events fall in the upper part, convective ones in the lower.
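The moment relations behind this entry can be made concrete. Assuming the gamma DSD N(D) = N0 D^m exp(-ΛD), the k-th moment is M_k = N0 Γ(m+k+1)/Λ^(m+k+1), so the ratio M5²/(M4 M6) = (m+5)/(m+6) inverts to m, after which Λ and N0 follow; the peak diameter Dp = m/Λ and the (1.635Λ - m) = 1 discriminator come straight from the abstract (which side of the line is stratiform follows its description). A hedged sketch:

```python
import math

# Sketch of an M4/M5/M6 method of moments for a gamma DSD
# N(D) = N0 * D**m * exp(-Lam * D), whose k-th moment is
#   M_k = N0 * Gamma(m + k + 1) / Lam**(m + k + 1).
def fit_gamma_dsd(M4, M5, M6):
    ratio = M5 * M5 / (M4 * M6)             # = (m + 5) / (m + 6)
    m = (6.0 * ratio - 5.0) / (1.0 - ratio)
    Lam = (m + 5.0) * M4 / M5               # from M5 / M4 = (m + 5) / Lam
    N0 = M4 * Lam ** (m + 5.0) / math.gamma(m + 5.0)
    return N0, m, Lam

def peak_diameter(m, Lam):
    return m / Lam                          # modal diameter Dp = m / Lam

def classify(m, Lam):
    # Discriminator line (1.635 * Lam - m) = 1 from the abstract;
    # stratiform spectra (larger Lam, i.e. smaller drops) on the > 1 side.
    return "stratiform" if 1.635 * Lam - m > 1.0 else "convective"
```

Because the ratio M5²/(M4 M6) depends only on m, the fit reduces to three closed-form steps with no iteration.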
A classical tropical oceanic convective/stratiform discrimination algorithm is also tested, showing that it is not suitable for correctly discriminating the mid-latitude precipitation analyzed here.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23915724','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23915724"><span>Validation of a dynamic linked segment model to calculate joint moments in lifting.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>de Looze, M P; Kingma, I; Bussmann, J B; Toussaint, H M</p> <p>1992-08-01</p> <p>A two-dimensional dynamic linked segment model was constructed and applied to a lifting activity. Reactive forces and moments were calculated by an instantaneous approach involving the application of Newtonian mechanics to individual adjacent rigid segments in succession. The analysis started once at the feet and once at a hands/load segment. The model was validated by comparing predicted external forces and moments at the feet or at a hands/load segment to actual values, which were simultaneously measured (ground reaction force at the feet) or assumed to be zero (external moments at feet and hands/load and external forces, besides gravitation, at hands/load). In addition, results of both procedures, in terms of joint moments, including the moment at the intervertebral disc between the fifth lumbar and first sacral vertebra (L5-S1), were compared. A correlation of r = 0.88 between calculated and measured vertical ground reaction forces was found. The calculated external forces and moments at the hands showed only minor deviations from the expected zero level. The moments at L5-S1, calculated starting from feet compared to starting from hands/load, yielded a coefficient of correlation of r = 0.99.
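The segment-by-segment recursion described above can be sketched with a single 2-D Newton-Euler step; the function below is a generic illustration (segment parameters and sign conventions are assumptions, not the paper's model):

```python
def cross2(a, b):                       # z-component of a 2-D cross product
    return a[0] * b[1] - a[1] * b[0]

def proximal_step(m, g, com, acc, p_dist, F_dist, M_dist, p_prox,
                  inertia=0.0, alpha=0.0):
    """One Newton-Euler step of a 2-D linked-segment recursion (a sketch):
    given the force/moment applied to the segment at its distal point,
    return the force/moment required at the proximal joint."""
    F_prox = [m * (acc[0] - g[0]) - F_dist[0],
              m * (acc[1] - g[1]) - F_dist[1]]           # sum F = m * a
    r_d = [p_dist[0] - com[0], p_dist[1] - com[1]]
    r_p = [p_prox[0] - com[0], p_prox[1] - com[1]]
    M_prox = (inertia * alpha - M_dist
              - cross2(r_d, F_dist) - cross2(r_p, F_prox))  # sum M = I * alpha
    return F_prox, M_prox

# static check: a 2 kg hands/load segment held still, free at the distal end,
# must be supported at the proximal joint by its weight plus a balancing moment
F, M = proximal_step(m=2.0, g=[0.0, -9.81], com=[0.0, 0.0], acc=[0.0, 0.0],
                     p_dist=[0.15, 0.0], F_dist=[0.0, 0.0], M_dist=0.0,
                     p_prox=[-0.15, 0.0])
```

Starting the recursion at the feet or at the hands/load, as in the study, simply changes which end supplies the known distal boundary condition.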
However, moments calculated from hands/load were 3.6% (averaged values) and 10.9% (peak values) higher. This difference is assumed to be due mainly to erroneous estimations of the positions of centres of gravity and joint rotation centres. The estimation of the location of the L5-S1 rotation axis can affect the results significantly. Despite the numerous studies estimating the load on the low back during lifting on the basis of linked segment models, only a few attempts to validate these models have been made. This study is concerned with the validity of the presented linked segment model. The results support the model's validity. Effects of several sources of error threatening the validity are discussed. Copyright © 1992. Published by Elsevier Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20130011505','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20130011505"><span>Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai</p> <p>2013-01-01</p> <p>This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and sometimes better suited for online decision-making.
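As a minimal sketch of the analytical alternative to sampling, a first-order second-moment (FOSM) propagation can be written in a few lines; the finite-difference gradient is an assumed implementation detail:

```python
import numpy as np

def fosm(g, mean, cov, h=1e-6):
    """First-order second-moment: approximate mean and variance of g(X)
    from the mean vector and covariance of X by linearizing g at the mean."""
    mean = np.asarray(mean, float)
    grad = np.array([(g(mean + h * e) - g(mean - h * e)) / (2 * h)
                     for e in np.eye(len(mean))])   # central differences
    return g(mean), grad @ np.asarray(cov, float) @ grad

# exact for linear models with Gaussian inputs, e.g. g(x) = 2*x0 + 3*x1
m, v = fosm(lambda x: 2 * x[0] + 3 * x[1],
            [1.0, 2.0], [[0.04, 0.0], [0.0, 0.01]])
```

For nonlinear limit-state functions, FORM and Inverse FORM refine this linearization around the most probable point rather than the mean.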
While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (Inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..DFDM30009B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..DFDM30009B"><span>Algorithm for computing descriptive statistics for very large data sets and the exa-scale era</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Beekman, Izaak</p> <p>2017-11-01</p> <p>An algorithm for Single-point, Parallel, Online, Converging Statistics (SPOCS) is presented. It is suited for in situ analysis that traditionally would be relegated to post-processing, and can be used to monitor the statistical convergence and estimate the error/residual in the quantity-useful for uncertainty quantification too. Today, data may be generated at an overwhelming rate by numerical simulations and proliferating sensing apparatuses in experiments and engineering applications. Monitoring descriptive statistics in real time lets costly computations and experiments be gracefully aborted if an error has occurred, and monitoring the level of statistical convergence allows them to be run for the shortest amount of time required to obtain good results. This algorithm extends work by Pébay (Sandia Report SAND2008-6212).
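A single-point, one-pass update of the kind this line of work builds on can be sketched as follows (Welford-style mean and variance only; Pébay's formulas additionally cover higher moments and parallel merging):

```python
class RunningMoments:
    """One-pass (streaming) mean and variance via incremental
    Welford/Pebay-style update formulas; higher moments extend similarly."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n          # converging "delta" update
        self.m2 += delta * (x - self.mean)   # running sum of squared deviations

    @property
    def variance(self):                      # population variance
        return self.m2 / self.n if self.n else float("nan")

rm = RunningMoments()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    rm.update(x)
```

Because each sample is folded in as it arrives, no raw data need be stored, which is what makes in situ monitoring at scale feasible.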
Pébay's algorithms are recast into a converging delta formulation, with provably favorable properties. The mean, variance, covariances and arbitrary higher order statistical moments are computed in one pass. The algorithm is tested using Sillero, Jiménez, & Moser's (2013, 2014) publicly available UPM high Reynolds number turbulent boundary layer data set, demonstrating numerical robustness, efficiency and other favorable properties.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100024139','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100024139"><span>Model-Checking with Edge-Valued Decision Diagrams</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Roux, Pierre; Siminiceanu, Radu I.</p> <p>2010-01-01</p> <p>We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library along with state-of-the-art algorithms for building the transition relation and the state space of discrete state systems. We provide efficient algorithms for manipulating EVMDDs and give upper bounds of the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention to represent in one formalism the best techniques available at the moment across a spectrum of existing tools: EVMDDs for encoding arithmetic expressions, identity-reduced MDDs for representing the transition relation, and the saturation algorithm for reachability analysis. 
We compare our new symbolic model checking EVMDD library with the widely used CUDD package and show that, in many cases, our tool is several orders of magnitude faster than CUDD.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27727120','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27727120"><span>Comparison of algorithms to quantify muscle fatigue in upper limb muscles based on sEMG signals.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kahl, Lorenz; Hofmann, Ulrich G</p> <p>2016-11-01</p> <p>This work compared the performance of six different fatigue detection algorithms quantifying muscle fatigue based on electromyographic signals. Surface electromyography (sEMG) was obtained by an experiment from upper arm contractions at three different load levels from twelve volunteers. Fatigue detection algorithms mean frequency (MNF), spectral moments ratio (SMR), the wavelet method WIRM1551, sample entropy (SampEn), fuzzy approximate entropy (fApEn) and recurrence quantification analysis (RQA%DET) were calculated. The resulting fatigue signals were compared considering the disturbances incorporated in fatiguing situations as well as according to the possibility to differentiate the load levels based on the fatigue signals. Furthermore we investigated the influence of the electrode locations on the fatigue detection quality and whether an optimized channel set is reasonable. The results of the MNF, SMR, WIRM1551 and fApEn algorithms fell close together. Due to the small amount of subjects in this study significant differences could not be found. In terms of disturbances the SMR algorithm showed a slight tendency to out-perform the others. Copyright © 2016 IPEM. Published by Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PhDT........73C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PhDT........73C"><span>Time-domain parameter identification of aeroelastic loads by forced-vibration method for response of flexible structures subject to transient wind</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cao, Bochao</p> <p></p> <p>Slender structures representing civil, mechanical and aerospace systems such as long-span bridges, high-rise buildings, stay cables, power-line cables, high light mast poles, crane-booms and aircraft wings could, because of their flexibility, experience vortex-induced and buffeting excitations below their design wind speeds and divergent self-excited oscillations (flutter) beyond a critical wind speed. Traditional linear aerodynamic theories that are routinely applied for their response prediction are not valid in the galloping or near-flutter regime, where large-amplitude vibrations can occur, nor during non-stationary and transient wind excitations such as those produced by hurricanes, thunderstorms and gust fronts. The linear aerodynamic load formulation for lift, drag and moment is expressed in terms of frequency-domain aerodynamic functions that are valid for straight-line winds which are stationary or weakly stationary. The frequency-domain formulation cannot be applied in the nonlinear and transient regimes because it is valid only for linear models and stationary wind. The time-domain aerodynamic force formulations are suitable for finite element modeling, feedback-dependent structural control mechanisms, fatigue-life prediction, and above all the modeling of transient structural behavior during non-stationary wind phenomena.
This has motivated the development of time-domain models of aerodynamic loads that parallel the existing frequency-dependent models. Parameters defining these time-domain models can now be extracted from wind tunnel tests; for example, the Rational Function Coefficients defining the self-excited wind loads can be extracted from section model tests using the free vibration technique. However, the free vibration method has some limitations because it is difficult to apply at high wind speeds, in a turbulent wind environment, or on unstable cross sections with negative aerodynamic damping. In the current research, new algorithms were developed based on the forced vibration technique for direct extraction of the Rational Functions. The first of the two algorithms developed uses the two angular phase lag values between the measured vertical or torsional displacement and the measured aerodynamic lift and moment produced on the section model subject to forced vibration to identify the Rational Functions. This algorithm uses two separate one-degree-of-freedom tests (vertical or torsional) to identify all four Rational Functions or corresponding Rational Function Coefficients for a two degrees-of-freedom (DOF) vertical-torsional vibration model. It was applied to a streamlined section model and the results compared well with those obtained from an earlier free vibration experiment. The second algorithm that was developed is based on the direct least squares method. It uses all the data points of displacements and aerodynamic lift and moment, instead of phase lag values, for more accurate estimates. This algorithm can be used for one-, two- and three-degree-of-freedom motions. A two-degree-of-freedom forced vibration system was developed and the algorithm was shown to work well for both streamlined and bluff section models.
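The direct least-squares idea (fitting all sampled displacement and load points rather than two phase-lag values) can be illustrated generically; the two-coefficient load model below is a stand-in assumption, not the actual Rational Function form:

```python
import numpy as np

# synthetic forced-vibration record: h(t) = A*sin(w*t); the measured lift is
# modeled as L(t) = c1*h(t) + c2*h_dot(t) + noise (stiffness- and
# damping-like aerodynamic terms, standing in for the RFA structure)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)
w, A = 2.0, 0.01
h, h_dot = A * np.sin(w * t), A * w * np.cos(w * t)
lift = 3.0 * h - 0.5 * h_dot + 1e-5 * rng.standard_normal(t.size)

# direct least squares over all samples at once
X = np.column_stack([h, h_dot])
coef, *_ = np.linalg.lstsq(X, lift, rcond=None)
```

Using every sample averages out measurement noise, which is why the least-squares variant tends to give more accurate estimates than a two-point phase-lag fit.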
The uniqueness of the second algorithm lies in the fact that it requires testing the model at only two wind speeds for extraction of all four Rational Functions. The Rational Function Coefficients that were extracted for a streamlined section model using the two-DOF Least Squares algorithm were validated in a separate wind tunnel by testing a larger scaled model subject to straight-line, gusty and boundary-layer wind.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013ApPhL.103d3704V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013ApPhL.103d3704V"><span>Accurate quantification of magnetic particle properties by intra-pair magnetophoresis for nanobiotechnology</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>van Reenen, Alexander; Gao, Yang; Bos, Arjen H.; de Jong, Arthur M.; Hulsen, Martien A.; den Toonder, Jaap M. J.; Prins, Menno W. J.</p> <p>2013-07-01</p> <p>The application of magnetic particles in biomedical research and in-vitro diagnostics requires accurate characterization of their magnetic properties, with single-particle resolution and good statistics. Here, we report intra-pair magnetophoresis as a method to accurately quantify the field-dependent magnetic moments of magnetic particles and to rapidly generate histograms of the magnetic moments with good statistics. We demonstrate our method with particles of different sizes and from different sources, with a measurement precision of a few percent.
We expect that intra-pair magnetophoresis will be a powerful tool for the characterization and improvement of particles for the upcoming field of particle-based nanobiotechnology.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_16 --> <div id="page_17" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="321"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1992moca.conf...30P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1992moca.conf...30P"><span>Proceedings of the Conference on Moments and Signal</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Purdue, P.; Solomon, H.</p> <p>1992-09-01</p> <p>The focus of this paper is (1) to describe systematic methodologies for selecting nonlinear
transformations for blind equalization algorithms (and thus new types of cumulants), and (2) to give an overview of the existing blind equalization algorithms and point out their strengths as well as weaknesses. It is shown that all blind equalization algorithms fall into one of the following three categories, depending on where the nonlinear transformation is applied to the data: (1) the Bussgang algorithms, where the nonlinearity is in the output of the adaptive equalization filter; (2) the polyspectra (or Higher-Order Spectra) algorithms, where the nonlinearity is in the input of the adaptive equalization filter; and (3) the algorithms where the nonlinearity is inside the adaptive filter, i.e., the nonlinear filter or neural network. We describe methodologies for selecting nonlinear transformations based on various optimality criteria such as MSE or MAP. We illustrate that such existing algorithms as Sato, Benveniste-Goursat, Godard or CMA, Stop-and-Go, and Donoho are indeed special cases of the Bussgang family of techniques when the nonlinearity is memoryless. We present results that demonstrate that the polyspectra-based algorithms exhibit a faster convergence rate than the Bussgang algorithms. However, this improved performance is at the expense of more computations per iteration.
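As an illustration of the Bussgang family with a memoryless nonlinearity, a minimal constant modulus algorithm (Godard/CMA) equalizer might look like this (tap count, step size and test channel are assumptions for the sketch):

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
    """Constant modulus algorithm (Godard/CMA), a Bussgang-type blind
    equalizer with a memoryless nonlinearity: drive |y|^2 toward R2."""
    w = np.zeros(n_taps)
    w[n_taps // 2] = 1.0                         # center-spike initialization
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]                # regressor, most recent first
        y = w @ u
        w += mu * y * (R2 - y * y) * u           # stochastic-gradient update
    return w

# BPSK through a mild ISI channel; the CMA dispersion should shrink
rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=20000)
x = np.convolve(s, [1.0, 0.3], mode="same")
w = cma_equalize(x)
y = np.convolve(x, w, mode="same")
disp_in = np.mean((x**2 - 1.0) ** 2)
disp_out = np.mean((y**2 - 1.0) ** 2)
```

No training sequence is used anywhere: the constant-modulus property of the constellation itself supplies the error signal, which is the defining trait of this family.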
We also show that blind equalizers based on nonlinear filters or neural networks are better suited for channels that have nonlinear distortions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018E3SWC..3110007A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018E3SWC..3110007A"><span>Implementation of Rivest Shamir Adleman Algorithm (RSA) and Vigenere Cipher In Web Based Information System</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Aryanti, Aryanti; Mekongga, Ikhthison</p> <p>2018-02-01</p> <p>Data security and confidentiality are among the most important aspects of information systems at the moment. One way to secure data is cryptography. In this study a data security system was developed by implementing the Rivest-Shamir-Adleman (RSA) and Vigenere Cipher cryptographic algorithms. The research combined the two algorithms and applied them to document files in Word, Excel, and PDF formats. The application covers both encryption and decryption of data and was built with PHP and MySQL. On the transmitting side, data are encrypted with RSA using the public key and then with the Vigenere Cipher, which also uses the public key. On the receiving side, decryption applies the Vigenere Cipher with the public key and then RSA with the private key. Test results show that the system can encrypt, decrypt, and transmit files. Tests on encrypting and decrypting files of different sizes show that file size affects the processing time.
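A toy sketch of such a hybrid scheme (with an assumed split in which RSA protects the Vigenere key; textbook RSA with tiny primes, for illustration only and not secure):

```python
def vigenere(text, key, sign=+1):
    """Vigenere shift over A-Z; sign=+1 encrypts, sign=-1 decrypts."""
    ks = [ord(c) - 65 for c in key]
    return "".join(chr((ord(c) - 65 + sign * ks[i % len(ks)]) % 26 + 65)
                   for i, c in enumerate(text))

p, q, e = 61, 53, 17                 # textbook RSA parameters (tiny primes)
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                  # private exponent (Python 3.8+)

key = "LEMON"
cipher_key = [pow(ord(c), e, n) for c in key]   # RSA-encrypt the key
ciphertext = vigenere("ATTACKATDAWN", key)       # Vigenere-encrypt the text

# receiver: RSA-decrypt the key, then undo the Vigenere shift
recovered_key = "".join(chr(pow(c, d, n)) for c in cipher_key)
plaintext = vigenere(ciphertext, recovered_key, sign=-1)
```

The division of labor mirrors standard hybrid encryption: the asymmetric cipher carries only the short key, while the cheap symmetric cipher carries the bulk payload.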
The larger the file size, the longer encryption and decryption take.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..DNP.EA057M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..DNP.EA057M"><span>Effects of a PID Control System on Electromagnetic Fields in an nEDM Experiment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Molina, Daniel</p> <p>2017-09-01</p> <p>The Kellogg Radiation Laboratory is currently testing a prototype for an experiment that aims to measure the electric dipole moment of the neutron. As part of this testing, we have developed a PID (proportional, integral, derivative) feedback system that uses large coils to fix the value of local external magnetic fields, up to linear gradients. PID algorithms compare the current value to a set-point and use the integral and derivative of the deviation from the set-point to maintain constant fields. We have also developed a method for zeroing linear gradients within the experimental apparatus. In order to determine the performance of the PID algorithm, measurements of both the internal and external fields were obtained with and without the algorithm running, and these results were compared for noise and time stability.
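A textbook PID update of the kind described can be sketched as follows (the first-order "field" plant, disturbance and gains are assumptions for illustration):

```python
def pid_step(err, state, kp, ki, kd, dt):
    """One update of a textbook PID controller: u = kp*e + ki*int(e) + kd*de/dt."""
    integ, prev = state
    integ += err * dt                 # accumulate the integral term
    deriv = (err - prev) / dt         # finite-difference derivative term
    return kp * err + ki * integ + kd * deriv, (integ, err)

# regulate a first-order "field" toward a set-point against a constant bias
setpoint, field, state, dt = 1.0, 0.0, (0.0, 0.0), 0.01
for _ in range(5000):
    u, state = pid_step(setpoint - field, state,
                        kp=2.0, ki=5.0, kd=0.05, dt=dt)
    field += dt * (-field + u - 0.5)  # plant: relaxation + drive + disturbance
```

The integral term is what removes the steady-state offset caused by the constant disturbance, which is the role it plays in holding a field at its set-point.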
We have seen that the PID algorithm can reduce the effect of disturbance to the field by a factor of 10.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4842081','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4842081"><span>Gait Planning and Stability Control of a Quadruped Robot</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Li, Junmin; Wang, Jinge; Yang, Simon X.; Zhou, Kedong; Tang, Huijuan</p> <p>2016-01-01</p> <p>In order to realize smooth gait planning and stability control of a quadruped robot, a new controller algorithm based on CPG-ZMP (central pattern generator-zero moment point) is put forward in this paper. To generate smooth gait and shorten the adjusting time of the model oscillation system, a new CPG model controller and its gait switching strategy based on Wilson-Cowan model are presented in the paper. The control signals of knee-hip joints are obtained by the improved multi-DOF reduced order control theory. To realize stability control, the adaptive speed adjustment and gait switch are completed by the real-time computing of ZMP. Experiment results show that the quadruped robot's gaits are efficiently generated and the gait switch is smooth in the CPG control algorithm. Meanwhile, the stability of robot's movement is improved greatly with the CPG-ZMP algorithm. The algorithm in this paper has good practicability, which lays a foundation for the production of the robot prototype. 
PMID:27143959</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19910035240&hterms=finite+fourier+transform&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dfinite%2Bfourier%2Btransform','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19910035240&hterms=finite+fourier+transform&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dfinite%2Bfourier%2Btransform"><span>A combined finite element-boundary integral formulation for solution of two-dimensional scattering problems via CGFFT. [Conjugate Gradient Fast Fourier Transformation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Collins, Jeffery D.; Volakis, John L.; Jin, Jian-Ming</p> <p>1990-01-01</p> <p>A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary-integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm. 
The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19900000997','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19900000997"><span>A combined finite element and boundary integral formulation for solution via CGFFT of 2-dimensional scattering problems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Collins, Jeffery D.; Volakis, John L.</p> <p>1989-01-01</p> <p>A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm.
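The core CGFFT trick, evaluating a convolution-type operator with an FFT inside a conjugate gradient solver, can be sketched on a circulant system (the matrix and sizes are illustrative assumptions):

```python
import numpy as np

def circulant_matvec(c, x):
    """A @ x for a circulant matrix with first column c, via the FFT:
    the O(N log N) evaluation that CGFFT-type methods rely on."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def cg(matvec, b, tol=1e-10, max_iter=200):
    """Standard conjugate gradient for a symmetric positive definite operator."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p, rs = r.copy(), r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

c = np.array([4.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0])  # SPD circulant column
b = np.arange(8.0)
x = cg(lambda v: circulant_matvec(c, v), b)
```

Because the operator is only ever applied, never stored, the memory demand stays O(N), which is the advantage the abstracts above emphasize.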
The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27143959','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27143959"><span>Gait Planning and Stability Control of a Quadruped Robot.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Junmin; Wang, Jinge; Yang, Simon X; Zhou, Kedong; Tang, Huijuan</p> <p>2016-01-01</p> <p>In order to realize smooth gait planning and stability control of a quadruped robot, a new controller algorithm based on CPG-ZMP (central pattern generator-zero moment point) is put forward in this paper. To generate smooth gait and shorten the adjusting time of the model oscillation system, a new CPG model controller and its gait switching strategy based on Wilson-Cowan model are presented in the paper. The control signals of knee-hip joints are obtained by the improved multi-DOF reduced order control theory. To realize stability control, the adaptive speed adjustment and gait switch are completed by the real-time computing of ZMP. Experiment results show that the quadruped robot's gaits are efficiently generated and the gait switch is smooth in the CPG control algorithm. Meanwhile, the stability of robot's movement is improved greatly with the CPG-ZMP algorithm. 
The algorithm in this paper has good practicability, which lays a foundation for the production of the robot prototype.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EGUGA..1613195S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EGUGA..1613195S"><span>Higher order concentration moments collapse in the expected mass fraction (EMF) based risk assessment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Srzic, Veljko; Gotovac, Hrvoje; Cvetkovic, Vladimir; Andricevic, Roko</p> <p>2014-05-01</p> <p>In this work a Lagrangian framework is used for conservative tracer transport simulations through 2-D extremely heterogeneous porous media. The numerical simulations yield large sets of concentration values in both the spatial and temporal domains. In addition to advection, which acts on all scales, an additional mechanism considered is local scale dispersion (LSD), accounting for both mechanical dispersion and molecular diffusion. The ratio between these two mechanisms is quantified by the Peclet (Pe) number. At its core, the work characterizes the concentration scalar under: i) different log-conductivity variances; ii) log-conductivity structures defined by the same global variogram but with different correlated log-conductivity patterns; and iii) a wide range of Peclet values. Monte Carlo results show a complex interplay between the aforementioned parameters, indicating the influence of aquifer properties on the temporal LSD evolution. A remarkable collapse of higher-order to second-order concentration moments [Yee, 2009] leads to the conclusion that only two concentration moments are required for an accurate description of concentration fluctuations.
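The practical content of a two-moment description can be sketched with a method-of-moments Beta fit (the parameterization is standard; the numbers are illustrative):

```python
def beta_from_moments(mean, var):
    """Method-of-moments fit of a Beta(a, b) to a quantity in [0, 1]:
    only the first two moments are needed, echoing the moment collapse."""
    k = mean * (1.0 - mean) / var - 1.0   # requires var < mean * (1 - mean)
    return mean * k, (1.0 - mean) * k

a, b = beta_from_moments(0.25, 0.03)
mean_back = a / (a + b)
var_back = a * b / ((a + b) ** 2 * (a + b + 1.0))
```

Once the mean and variance pin down (a, b), all higher moments of the fitted Beta follow, which is exactly why a collapse of higher-order moments makes the two-parameter description sufficient.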
The collapse explicitly holds for the pure advection case, while in the presence of LSD the moment-deriving function (MDF) is invoked to ensure the validity of the moment collapse. An inspection of the Beta distribution leads to the conclusion that this two-parameter distribution can be used to characterize concentration fluctuations even in cases of high aquifer heterogeneity and/or different log-conductivity structures, independent of the sampling volume used. Furthermore, the expected mass fraction (EMF) concept [Heagy & Sullivan, 1996] is applied to groundwater transport. In its origin, EMF is a function of the concentration but requires fewer realizations for its determination than the one-point PDF. From a practical point of view, EMF excludes the meandering effect and incorporates information about the exposure time for each non-zero concentration value present. It is also shown that EMF clearly reflects the effects of aquifer heterogeneity and structure as well as the Pe value; the latter is demonstrated through the non-carcinogenic risk assessment framework. To demonstrate the uniqueness of the moment collapse feature and the ability of the Beta distribution to account for the concentration frequencies even in real cases, Macrodispersion Experiment (MADE1) [Boggs et al, 1992] data sets are used for validation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA636818','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA636818"><span>Blade Sections in Streamwise Oscillations into Reverse Flow</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2015-05-07</p> <p>Keywords: reverse flow, oscillating airfoils, oscillating freestream. In reverse flow the blade section behaves more like a plate or bluff body rather than an airfoil.
Reverse flow operation requires investigation and quantification to accurately capture these Submitted for... airfoil integrated quantities (lift, drag, moment) in reverse flow and developed new algorithms for comprehensive codes, reducing errors from 30%–50</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PMB....63c5014K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PMB....63c5014K"><span>Interval-based reconstruction for uncertainty quantification in PET</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis</p> <p>2018-02-01</p> <p>A new directed interval-based tomographic reconstruction algorithm, called non-additive interval-based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The approach detailed here is an interval-based extension of the maximum-likelihood expectation-maximization (ML-EM) algorithm. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values.
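For context, the single-valued ML-EM iteration that NIBEM extends can be sketched in a few lines; NIBEM itself replaces the projections A x with intervals, which this plain-Python illustration does not attempt:

```python
def mlem(A, y, x, iters=20):
    # Classic ML-EM update for y ~ A x with nonnegative entries:
    #   x_j <- x_j * [A^T (y / (A x))]_j / [A^T 1]_j
    # A is a list of rows; x is the initial (positive) estimate.
    n, m = len(A), len(A[0])
    colsum = [sum(A[i][j] for i in range(n)) for j in range(m)]
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(m)) for i in range(n)]
        ratio = [y[i] / Ax[i] for i in range(n)]
        back = [sum(A[i][j] * ratio[i] for i in range(n)) for j in range(m)]
        x = [x[j] * back[j] / colsum[j] for j in range(m)]
    return x
```

With an identity system matrix the iteration reproduces the data exactly after one update, which makes a convenient sanity check.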
After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26839608','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26839608"><span>Intra-Personal and Inter-Personal Kinetic Synergies During Jumping.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Slomka, Kajetan; Juras, Grzegorz; Sobota, Grzegorz; Furmanek, Mariusz; Rzepko, Marian; Latash, Mark L</p> <p>2015-12-22</p> <p>We explored synergies between two legs and two subjects during preparation for a long jump into a target. Synergies were expected during one-person jumping. No such synergies were expected between two persons jumping in parallel without additional contact, while synergies were expected to emerge with haptic contact and become stronger with strong mechanical contact. Subjects performed jumps either alone (each foot standing on a separate force platform) or in dyads (parallel to each other, each person standing on a separate force platform) without any contact, with haptic contact, and with strong coupling. Strong negative correlations between pairs of force variables (strong synergies) were seen in the vertical force in one-person jumps and weaker synergies in two-person jumps with the strong contact. For other force variables, only weak synergies were present in one-person jumps, and no negative correlations between pairs of force variables were seen for two-person jumps.
Pairs of moment variables from the two force platforms at steady state showed positive correlations, which were strong in one-person jumps and weaker, but still significant, in two-person jumps with the haptic and strong contact. Anticipatory synergy adjustments prior to action initiation were observed in one-person trials only. We interpret the different results for the force and moment variables at steady state as reflections of postural sway.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4723184','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4723184"><span>Intra-Personal and Inter-Personal Kinetic Synergies During Jumping</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Slomka, Kajetan; Juras, Grzegorz; Sobota, Grzegorz; Furmanek, Mariusz; Rzepko, Marian; Latash, Mark L.</p> <p>2015-01-01</p> <p>We explored synergies between two legs and two subjects during preparation for a long jump into a target. Synergies were expected during one-person jumping. No such synergies were expected between two persons jumping in parallel without additional contact, while synergies were expected to emerge with haptic contact and become stronger with strong mechanical contact. Subjects performed jumps either alone (each foot standing on a separate force platform) or in dyads (parallel to each other, each person standing on a separate force platform) without any contact, with haptic contact, and with strong coupling. Strong negative correlations between pairs of force variables (strong synergies) were seen in the vertical force in one-person jumps and weaker synergies in two-person jumps with the strong contact. 
For other force variables, only weak synergies were present in one-person jumps, and no negative correlations between pairs of force variables were seen for two-person jumps. Pairs of moment variables from the two force platforms at steady state showed positive correlations, which were strong in one-person jumps and weaker, but still significant, in two-person jumps with the haptic and strong contact. Anticipatory synergy adjustments prior to action initiation were observed in one-person trials only. We interpret the different results for the force and moment variables at steady state as reflections of postural sway. PMID:26839608</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5386244','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5386244"><span>Polynomial probability distribution estimation using the method of moments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Mattsson, Lars; Rydén, Jesper</p> <p>2017-01-01</p> <p>We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type.
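On a bounded support such as [0, 1], the moment conditions for a polynomial PDF reduce to a linear system: requiring that the k-th moment of p(x) = sum_j c_j x^j equal m_k gives sum_j c_j / (k + j + 1) = m_k. A minimal sketch under that assumed support (the solver details are illustrative, not the authors' exact setup):

```python
def poly_pdf_from_moments(moments):
    # Method of moments on [0, 1]: find coefficients c_j of
    # p(x) = sum_j c_j * x**j such that int_0^1 x**k p(x) dx = m_k,
    # i.e. solve the linear system sum_j c_j / (k + j + 1) = m_k.
    n = len(moments)
    M = [[1.0 / (k + j + 1) for j in range(n)] + [moments[k]] for k in range(n)]
    for col in range(n):                      # Gaussian elimination
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]       # partial pivoting
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):            # back-substitution
        s = sum(M[r][j] * coeffs[j] for j in range(r + 1, n))
        coeffs[r] = (M[r][n] - s) / M[r][r]
    return coeffs
```

The uniform density on [0, 1], with moments m_0 = 1 and m_1 = 1/2, recovers coefficients [1, 0], i.e. p(x) = 1.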
It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular, this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show a more advanced application of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018OptEn..57a4107Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018OptEn..57a4107Y"><span>Statistical photocalibration of photodetectors for radiometry without calibrated light sources</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yielding, Nicholas J.; Cain, Stephen C.; Seal, Michael D.</p> <p>2018-01-01</p> <p>Calibration of CCD arrays for identifying bad pixels and achieving nonuniformity correction is commonly accomplished using dark frames. This kind of calibration does not achieve radiometric calibration of the array, since only the relative response of the detectors is computed. For this, a second calibration is sometimes utilized by looking at sources with known radiances. This process can be used to calibrate photodetectors as long as a calibration source is available and well-characterized. A previous attempt at creating a procedure for calibrating a photodetector using the underlying Poisson nature of photodetection required calculating the skewness of the photodetector measurements. Reliance on this third moment of the measurements meant that thousands of samples would be required in some cases to compute it.
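The appeal of stopping at the second moment can be seen from the Poisson scaling relations: for a detector output x = g N with N ~ Poisson(lam), E[x] = g lam and Var[x] = g^2 lam, so both parameters follow from two sample moments. A hedged sketch (function and symbol names are ours, not the paper's):

```python
def poisson_calibrate(samples):
    # For a detector output x = g * N with N ~ Poisson(lam):
    #   E[x] = g * lam   and   Var[x] = g**2 * lam,
    # so gain g = Var/E and rate lam = E**2/Var need only the
    # first two sample moments (illustrative sketch only).
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return var / mean, mean * mean / var
```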
A photocalibration procedure is defined that requires only the first and second moments of the measurements. The technique is applied to image data containing a known light source so that the accuracy of the technique can be assessed. It is shown that the algorithm can achieve accuracy to within roughly 2.7% of the predicted number of photons using only 100 frames of image data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28394949','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28394949"><span>Polynomial probability distribution estimation using the method of moments.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper</p> <p>2017-01-01</p> <p>We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular, this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions.
Finally, in order to show a more advanced application of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009JCoAM.223..304L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009JCoAM.223..304L"><span>Pricing American Asian options with higher moments in the underlying distribution</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lo, Keng-Hsin; Wang, Kehluh; Hsu, Ming-Feng</p> <p>2009-01-01</p> <p>We develop a modified Edgeworth binomial model with higher-moment consideration for pricing American Asian options. With a lognormal underlying distribution for benchmark comparison, our algorithm is as precise as that of Chalasani et al. [P. Chalasani, S. Jha, F. Egriboyun, A. Varikooty, A refined binomial lattice for pricing American Asian options, Rev. Derivatives Res. 3 (1) (1999) 85-105] as the number of time steps increases. If the underlying distribution displays negative skewness and leptokurtosis, as often observed for stock index returns, our estimates can work better than those in Chalasani et al. and are very similar to the benchmarks in Hull and White [J. Hull, A. White, Efficient procedures for valuing European and American path-dependent options, J. Derivatives 1 (Fall) (1993) 21-31].
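The binomial-lattice backbone of such pricing methods can be illustrated, without the Edgeworth moment adjustment, by a plain Cox-Ross-Rubinstein valuation of a European call (an illustrative sketch only; the paper's model prices American Asian options):

```python
from math import exp, sqrt

def crr_call(S, K, r, sigma, T, n):
    # Cox-Ross-Rubinstein binomial lattice for a European call:
    # build terminal payoffs, then take discounted risk-neutral
    # expectations backward through the tree.
    dt = T / n
    u = exp(sigma * sqrt(dt))
    d = 1.0 / u
    p = (exp(r * dt) - d) / (u - d)      # risk-neutral up-probability
    disc = exp(-r * dt)
    vals = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for step in range(n, 0, -1):
        vals = [disc * (p * vals[j + 1] + (1 - p) * vals[j])
                for j in range(step)]
    return vals[0]
```

As the number of time steps grows, the lattice price converges toward the continuous-time (Black-Scholes) value; an at-the-money call with S = K = 100, r = 5%, sigma = 20%, T = 1 prices near 10.45.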
The numerical analysis shows that our modified Edgeworth binomial model can value American Asian options with greater accuracy and speed given higher moments in their underlying distribution.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=climate+AND+change+AND+commons&pg=5&id=EJ1040790','ERIC'); return false;" href="https://eric.ed.gov/?q=climate+AND+change+AND+commons&pg=5&id=EJ1040790"><span>Beyond the Classroom</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Smith, Malbert III; Schiano, Anne; Lattanzio, Elizabeth</p> <p>2014-01-01</p> <p>We are at a transformative moment in education with the almost universal adoption (forty-five states, the District of Columbia, and four territories) of the Common Core State Standards (CCSS). As we move from adoption to implementation of these standards across the country, the climate for educational reform has led to expectations of change that…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ1119766.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ1119766.pdf"><span>Liminal Moments: Designing, Thinking and Learning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Taboada, Manuela; Coombs, Gretchen</p> <p>2014-01-01</p> <p>This paper provides a contextual reflection for understanding best practice teaching to first year design students. 
The outcome (job) focussed approach to higher education has led to some unanticipated collateral damage for students, and in the case we discuss, has altered the students' expectations of course delivery with specific implications…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=Failure&id=EJ1109129','ERIC'); return false;" href="https://eric.ed.gov/?q=Failure&id=EJ1109129"><span>Flipping the Mindset: Reframing Fear and Failure to Catalyze Development</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Boyd, Diane E.; Baudier, Josie; Stromie, Traci</p> <p>2015-01-01</p> <p>Despite the attempts to target success and predisposition to taking risks to promote innovation, sometimes educational developers encounter moments where they fail to meet expectations set forth--by their institutions, colleagues, or themselves. Attempts to avoid potential failures can stymie the creative process, preventing them from meeting…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ831297.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ831297.pdf"><span>Professional Vision in Action: An Exploratory Study</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Sherin, Miriam Gamoran; Russ, Rosemary S.; Sherin, Bruce L.; Colestock, Adam</p> <p>2008-01-01</p> <p>The study of teachers' professional vision poses some unique challenges. The application of professional vision happens in a manner that is fleeting, and that is distributed through the moments of instruction. 
Because of the ongoing nature of instruction, it is not realistic to expect that one could "pause" instruction momentarily, ask a…</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012AGUFM.H31G1200W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012AGUFM.H31G1200W"><span>Identification of Hot Moments and Hot Spots for Real-Time Adaptive Control of Multi-scale Environmental Sensor Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wietsma, T.; Minsker, B.
S.</p> <p>2012-12-01</p> <p>Increased sensor throughput combined with decreasing hardware costs has led to a disruptive growth in data volume. This disruption, popularly termed "the data deluge," has placed new demands on the cyberinfrastructure and information technology skills of researchers in many academic fields, including the environmental sciences. Adaptive sampling has been well established as an effective means of improving network resource efficiency (energy, bandwidth) without sacrificing sample set quality relative to traditional uniform sampling. However, using adaptive sampling for the explicit purpose of improving resolution over events -- situations displaying intermittent dynamics and unique hydrogeological signatures -- is relatively new. In this paper, we define hot spots and hot moments in terms of sensor signal activity as measured through discrete Fourier analysis. Following this frequency-based approach, we apply the Nyquist-Shannon sampling theorem, a fundamental contribution from signal processing that led to the field of information theory, to the analysis of uni- and multivariate environmental signal data. In the scope of multi-scale environmental sensor networks, we present several sampling control algorithms, derived from the Nyquist-Shannon theorem, that operate at local (field sensor), regional (base station for aggregation of field sensor data), and global (Cloud-based, computationally intensive models) scales. Evaluated over soil moisture data, results indicate significantly greater sample density during precipitation events while reducing overall sample volume. Using these algorithms as indicators rather than control mechanisms, we also discuss opportunities for spatio-temporal modeling as a tool for planning/modifying sensor network deployments. Highlights include a locally adaptive model based on the Nyquist-Shannon sampling theorem and Pareto frontiers for local, regional, and global models relative to uniform sampling.
The objectives are (1) overall sampling efficiency and (2) sampling efficiency during hot moments as identified using a heuristic approach.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4014403','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4014403"><span>A Computational Framework for Analyzing Stochasticity in Gene Expression</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Sherman, Marc S.; Cohen, Barak A.</p> <p>2014-01-01</p> <p>Stochastic fluctuations in gene expression give rise to distributions of protein levels across cell populations. Despite a mounting number of theoretical models explaining stochasticity in protein expression, we lack a robust, efficient, assumption-free approach for inferring the molecular mechanisms that underlie the shape of protein distributions. Here we propose a method for inferring sets of biochemical rate constants that govern chromatin modification, transcription, translation, and RNA and protein degradation from stochasticity in protein expression. We asked whether the rates of these underlying processes can be estimated accurately from protein expression distributions, in the absence of any limiting assumptions. To do this, we (1) derived analytical solutions for the first four moments of the protein distribution, (2) found that these four moments completely capture the shape of protein distributions, and (3) developed an efficient algorithm for inferring gene expression rate constants from the moments of protein distributions. Using this algorithm we find that most protein distributions are consistent with a large number of different biochemical rate constant sets.
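The first four sample moments on which such inference rests can be computed directly from data (a generic sketch, not the authors' code):

```python
def first_four_moments(xs):
    # Mean, variance, skewness, and kurtosis from a sample: the four
    # summary statistics commonly used to capture a distribution's shape.
    n = len(xs)
    mean = sum(xs) / n
    m2, m3, m4 = (sum((x - mean) ** k for x in xs) / n for k in (2, 3, 4))
    return mean, m2, m3 / m2 ** 1.5, m4 / m2 ** 2
```

For a symmetric sample such as [1, 2, 3, 4, 5], the skewness is zero, as expected.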
Despite this degeneracy, the solution space of rate constants almost always informs on the underlying mechanism. For example, we distinguish regimes where transcriptional bursting occurs from regimes reflecting constitutive transcript production. Our method agrees with the current standard approach in the restrictive regime where the standard method operates, and it also identifies rate constants not previously obtainable. Even without making any assumptions, we obtain estimates of individual biochemical rate constants, or meaningful ratios of rate constants, in 91% of tested cases. In some cases our method identified all of the underlying rate constants. The framework developed here will be a powerful tool for deducing the contributions of particular molecular mechanisms to specific patterns of gene expression. PMID:24811315</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12569222','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12569222"><span>Effect of shoe inserts on kinematics, center of pressure, and leg joint moments during running.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nigg, Benno M; Stergiou, Pro; Cole, Gerald; Stefanyshyn, Darren; Mündermann, Anne; Humble, Neil</p> <p>2003-02-01</p> <p>The purposes of this project were to assess the effect of four different shoe inserts on the path of the center of pressure (COP), to quantify the effect of these inserts on selected knee joint moments during running, and to assess the potential of COP data to predict the effects of inserts/orthotics on knee joint moments. Kinematics for the lower extremities, resultant ankle and knee joint moments, and the path of the COP were collected from the right foot of 15 male subjects while running heel-toe with five different shoe inserts (full or half with 4.5-mm postings).
Individual movement changes with respect to the neutral insert condition were typically small and not systematic. Significant changes for the path of the COP were registered only for the full lateral insert condition with an average shift toward the lateral side. The mediolateral shift of the COP was not consistent for the full medial and the two half-shoe inserts. The subject-specific reactions to the inserts' intervention in the corresponding knee joint moments were typically not consistent. Compared with the neutral insert condition, subjects showed increases or decreases of the knee joint moments. The correlation between the individual COP shifts and the resultant knee joint moment was generally small. The results of this study showed that subject-specific reactions to the tested inserts were often not as expected. Additionally, reactions were not consistent between the subjects. This result suggests that the prescription of inserts and/or orthotics is a difficult task and that methods must be developed to test and assess these effects. Such methods, however, are not currently available.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..188a2009B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..188a2009B"><span>Theoretical study on the magnetic moments formation in Ta-doped anatase TiO2</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bupu, A.; Majidi, M. A.; Rusydi, A.</p> <p>2017-04-01</p> <p>We present a theoretical study on Ti-vacancy induced ferromagnetism in Ta-doped anatase TiO2. 
Experimental studies of Ti1-xTaxO2 thin films have shown that Ti-vacancies (assisted by Ta doping) induce the formation of localized magnetic moments around them; the observed ferromagnetism is then caused by the alignment of these localized magnetic moments through the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction. In this study, we focus on the formation of the localized magnetic moments in this system. We hypothesize that, within a unit cell, a Ti-vacancy causes four electrons from the surrounding oxygen atoms to become unpaired. These unpaired electrons then arrange themselves into a configuration with a non-zero net magnetic moment. To examine our hypothesis, we construct a Hamiltonian of the four unpaired electrons, incorporating the Coulomb intra- and inter-orbital interactions, in matrix form. Using a set of chosen parameter values, we diagonalize the Hamiltonian to get the eigenstates and eigenvalues; with the resulting eigenstates, we then calculate the magnetic moment, μ, by obtaining the expectation value of the square of the total spin operator. Our calculations show that in the ground state, provided that the ratio of parameters satisfies some criterion, μ ≈ 4μB, corresponding to the four electron spins being almost perfectly aligned, can be achieved. Further, as long as we keep the Coulomb intra-orbital interaction between 0.5 and 1 eV, we find that μ ≈ 4μB is robust up to far above room temperature.
Our results demonstrate that Ti vacancies in anatase TiO2 can form very stable localized magnetic moments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JKPS...62..648H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JKPS...62..648H"><span>Upper limb joint motion of two different user groups during manual wheelchair propulsion</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hwang, Seonhong; Kim, Seunghyeon; Son, Jongsang; Lee, Jinbok; Kim, Youngho</p> <p>2013-02-01</p> <p>Manual wheelchair users have a high risk of injury to the upper extremities. Recent studies have focused on kinematic and kinetic analyses of manual wheelchair propulsion in order to understand the physical demands on wheelchair users. The purpose of this study was to investigate upper limb joint motion by using a motion capture system and a dynamometer with two different groups of wheelchair users propelling their wheelchairs at different speeds under different load conditions. The variations in the contact time, release time, and linear velocity of the experienced group were all larger than they were in the novice group. The propulsion angles of the experienced users were larger than those of the novices under all conditions. The variances in the propulsion force (both radial and tangential) of the experienced users were larger than those of the novices. The shoulder joint moment had the largest variance with the conditions, followed by the wrist joint moment and the elbow joint moment. The variance of the maximum shoulder joint moment was over four times the variance of the maximum wrist joint moment and eight times the maximum elbow joint moment. The maximum joint moments increased significantly as the speed and load increased in both groups. 
The ability to adjust quickly and substantially to environmental changes is considered an important factor in efficient propulsion. This efficiency was confirmed by the propulsion power results. Sophisticated strategies for efficient manual wheelchair propulsion could be understood by observation of the physical responses of each upper limb joint to changes in load and speed. We expect that the findings of this study will be utilized for designing a rehabilitation program to reduce injuries.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20170000396','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20170000396"><span>Dynamic Leading-Edge Stagnation Point Determination Utilizing an Array of Hot-Film Sensors with Unknown Calibration</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ellsworth, Joel C.</p> <p>2017-01-01</p> <p>During flight-testing of the National Aeronautics and Space Administration (NASA) Gulfstream III (G-III) airplane (Gulfstream Aerospace Corporation, Savannah, Georgia) SubsoniC Research Aircraft Testbed (SCRAT) between March 2013 and April 2015, it became evident that the sensor array used for stagnation point detection was not functioning as expected. The stagnation point detection system is a self-calibrating hot-film array; the calibration was unknown and varied between flights. However, the channel with the lowest power consumption was expected to correspond with the point of least surface shear. While individual channels showed the expected behavior for the hot-film sensors, more often than not the lowest power consumption occurred at a single sensor (despite in-flight maneuvering) in the array located far from the expected stagnation point. An algorithm was developed to process the available system output and determine the stagnation point location.
After multiple updates and refinements, the final algorithm was not sensitive to the failure of a single sensor in the array, but adjacent failures beneath the stagnation point crippled the algorithm.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1207754','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1207754"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron</p> <p></p> <p>Moment-based acceleration via the development of “high-order, low-order” (HO-LO) algorithms has provided CCS-2 and T-3 staff members with substantial accuracy and efficiency enhancements for solutions of the nonlinear thermal radiative transfer equations. Accuracy enhancements over traditional, linearized methods are obtained by solving a nonlinear, time-implicit HO-LO system via a Jacobian-free Newton-Krylov procedure. This also prevents the appearance of non-physical maximum principle violations (“temperature spikes”) associated with linearization. Efficiency enhancements are obtained in part by removing “effective scattering” from the linearized system.
In this highlight, we summarize recent work in which we formally extended the HO-LO radiation algorithm to include operator-split radiation-hydrodynamics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AcAau.143....9K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AcAau.143....9K"><span>Spherical gyroscopic moment stabilizer for attitude control of microsatellites</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Keshtkar, Sajjad; Moreno, Jaime A.; Kojima, Hirohisa; Uchiyama, Kenji; Nohmi, Masahiro; Takaya, Keisuke</p> <p>2018-02-01</p> <p>This paper presents a new and improved concept of the recently proposed two-degree-of-freedom spherical stabilizer for triaxial orientation of microsatellites. An analytical analysis of the advantages of the proposed mechanism over existing inertial attitude control devices is introduced. The extended equations of motion of the stabilizing satellite, including the spherical gyroscope, are studied in detail for control law design and numerical simulations. A new control algorithm based on continuous high-order sliding mode algorithms, for managing the torque produced by the stabilizer and therefore the attitude control of the satellite in the presence of perturbations/uncertainties, is presented.
Some numerical simulations are carried out to demonstrate the performance of the proposed mechanism and control laws.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27510446','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27510446"><span>Accurate and consistent automatic seismocardiogram annotation without concurrent ECG.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Laurin, A; Khosrow-Khavar, F; Blaber, A P; Tavakolian, Kouhyar</p> <p>2016-09-01</p> <p>Seismocardiography (SCG) is the measurement of vibrations in the sternum caused by the beating of the heart. Precise cardiac mechanical timings that are easily obtained from SCG are critically dependent on accurate identification of fiducial points. So far, SCG annotation has relied on concurrent ECG measurements. An algorithm capable of annotating SCG without the use of any other concurrent measurement was designed. We subjected 18 participants to graded lower body negative pressure. We collected ECG and SCG, obtained R peaks from the former, and annotated the latter by hand, using these identified peaks. We also annotated the SCG automatically. We compared the isovolumic moment timings obtained by hand to those obtained using our algorithm. The mean  ±  confidence interval of the percentage of accurately annotated cardiac cycles was [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for levels of negative pressure 0, -20, -30, -40, and  -50 mmHg. LF/HF ratios, the relative power of low-frequency variations to high-frequency variations in heart beat intervals, obtained from isovolumic moments were also compared to those obtained from R peaks.
The mean differences  ±  confidence interval were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for increasing levels of negative pressure. The accuracy and consistency of the algorithm enables the use of SCG as a stand-alone heart monitoring tool in healthy individuals at rest, and could serve as a basis for an eventual application in pathological cases.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70032334','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70032334"><span>Monitoring the Earthquake source process in North America</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Herrmann, Robert B.; Benz, H.; Ammon, C.J.</p> <p>2011-01-01</p> <p>With the implementation of the USGS National Earthquake Information Center Prompt Assessment of Global Earthquakes for Response system (PAGER), rapid determination of earthquake moment magnitude is essential, especially for earthquakes that are felt within the contiguous United States. We report an implementation of moment tensor processing for application to broad, seismically active areas of North America. This effort focuses on the selection of regional crustal velocity models, codification of data quality tests, and the development of procedures for rapid computation of the seismic moment tensor. We systematically apply these techniques to earthquakes with reported magnitude greater than 3.5 in continental North America that are not associated with a tectonic plate boundary. Using the 0.02-0.10 Hz passband, we can usually determine, with few exceptions, moment tensor solutions for earthquakes with M w as small as 3.7. 
The threshold is significantly influenced by the density of stations, the location of the earthquake relative to the seismic stations and, of course, the signal-to-noise ratio. With the existing permanent broadband stations in North America operated for rapid earthquake response, the seismic moment tensor of most earthquakes that are M w 4 or larger can be routinely computed. As expected, the nonuniform spatial pattern of these solutions reflects the seismicity pattern. However, the orientation of the direction of maximum compressive stress and the predominant style of faulting are spatially coherent across large regions of the continent.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19950018210','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19950018210"><span>Aerodynamic parameter estimation via Fourier modulating function techniques</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Pearson, A. E.</p> <p>1995-01-01</p> <p>Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier-based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high-performance aircraft using flight-test data.
Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well-established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear time-varying differential system models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010LNCS.6124..103D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010LNCS.6124..103D"><span>A VaR Algorithm for Warrants Portfolio</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dai, Jun; Ni, Liyun; Wang, Xiangrong; Chen, Weizhong</p> <p></p> <p>Based on the Gamma-Vega-Cornish-Fisher methodology, this paper proposes an algorithm for calculating VaR by adjusting the quantile under the given confidence level using the four moments (i.e., mean, variance, skewness, and kurtosis) of the warrants portfolio return, and estimating the variance of the portfolio by the EWMA methodology. The proposed algorithm also accounts for the attenuation of the effect of historical returns on future portfolio returns. An empirical study shows that, compared with the Gamma-Cornish-Fisher method and the standard normal method, the VaR calculated by the Gamma-Vega-Cornish-Fisher method forecasts portfolio risk more effectively, by virtue of considering the Gamma risk and the Vega risk of the warrants. A significance test is conducted on the calculation results by employing the two-tailed test developed by Kupiec.
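The quantile adjustment named in the VaR abstract above is the Cornish-Fisher expansion, which shifts a Gaussian quantile using the sample skewness and excess kurtosis; combined with a RiskMetrics-style EWMA variance it yields a moment-based VaR. A minimal sketch follows — plain Cornish-Fisher only, without the paper's warrant-specific Gamma/Vega corrections, and with the 95% quantile and decay factor λ = 0.94 as conventional assumptions:

```python
import math

def ewma_variance(returns, lam=0.94):
    """RiskMetrics-style EWMA variance: recent squared returns weigh more,
    so the effect of older history attenuates geometrically."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return var

def cornish_fisher_var(returns, z=-1.6449, lam=0.94):
    """One-day VaR from the first four sample moments of the returns.

    z is the Gaussian quantile being adjusted (default: 95% one-sided).
    Returns the loss quantile as a positive number.
    """
    n = len(returns)
    mu = sum(returns) / n
    m2 = sum((r - mu) ** 2 for r in returns) / n
    m3 = sum((r - mu) ** 3 for r in returns) / n
    m4 = sum((r - mu) ** 4 for r in returns) / n
    s = m3 / m2 ** 1.5                 # skewness
    k = m4 / m2 ** 2 - 3.0             # excess kurtosis
    # Cornish-Fisher adjusted quantile.
    zcf = (z
           + (z ** 2 - 1.0) * s / 6.0
           + (z ** 3 - 3.0 * z) * k / 24.0
           - (2.0 * z ** 3 - 5.0 * z) * s ** 2 / 36.0)
    sigma = math.sqrt(ewma_variance(returns, lam))
    return -(mu + zcf * sigma)
```

Negative skewness fattens the lower tail, so a negatively skewed return history raises the reported VaR relative to its mirror image — the direction of effect the abstract relies on.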
Test results show that the calculated VaRs of the warrants portfolio all pass the significance test under the significance level of 5%.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5755791','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5755791"><span>Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2018-01-01</p> <p>The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The estimated eye positions of the new algorithm were still noisy. However, the performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for light-weight and portable horizontal eye gaze angle estimation suitable for a broad range of applications. For instance, for hearing aids to steer the directivity of microphones in the direction of the user’s eye gaze. 
PMID:29304120</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27547530','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27547530"><span>Morphological analysis of dendrites and spines by hybridization of ridge detection with twin support vector machine.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Shuihua; Chen, Mengmeng; Li, Yang; Shao, Ying; Zhang, Yudong; Du, Sidan; Wu, Jane</p> <p>2016-01-01</p> <p>Dendritic spines are described as neuronal protrusions. The morphology of dendritic spines and dendrites has a strong relationship to their function, as well as playing an important role in understanding brain function. Quantitative analysis of dendrites and dendritic spines is essential to an understanding of the formation and function of the nervous system. However, highly efficient tools for the quantitative analysis of dendrites and dendritic spines are currently undeveloped. In this paper we propose a novel three-step cascaded algorithm, RTSVM, which is composed of ridge detection as the curvature structure identifier for backbone extraction, boundary location based on differences in density, Hu moments as features, and Twin Support Vector Machine (TSVM) classifiers for spine classification. Our data demonstrate that this newly developed algorithm performs better than other available techniques in terms of detection accuracy and false alarm rate.
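The Hu moments used as shape features in the RTSVM pipeline above are seven functions of normalized central image moments that are invariant to translation, scale, and rotation. A self-contained sketch on a 2-D intensity list (a real pipeline would use an optimized implementation such as OpenCV's `cv2.HuMoments`):

```python
def hu_moments(img):
    """Seven Hu invariant moments of a grayscale/binary image given as a
    2-D list of pixel intensities. Their invariance to translation, scale,
    and rotation is what makes them usable as shape features."""
    # Raw moments m_pq = sum over pixels of x^p * y^q * I(x, y)
    def m(p, q):
        return sum(img[y][x] * (x ** p) * (y ** q)
                   for y in range(len(img)) for x in range(len(img[0])))
    m00 = m(0, 0)
    xc, yc = m(1, 0) / m00, m(0, 1) / m00
    # Scale-normalised central moments: eta_pq = mu_pq / m00^(1 + (p+q)/2)
    def eta(p, q):
        mu = sum(img[y][x] * (x - xc) ** p * (y - yc) ** q
                 for y in range(len(img)) for x in range(len(img[0])))
        return mu / m00 ** (1 + (p + q) / 2.0)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return [h1, h2, h3, h4, h5, h6, h7]
```

Because the moments are central and normalized, shifting a shape inside the frame leaves the feature vector unchanged, which is the property a classifier such as TSVM depends on.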
This algorithm will be used effectively in neuroscience research.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29304120','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29304120"><span>Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hládek, Ľuboš; Porr, Bernd; Brimijoin, W Owen</p> <p>2018-01-01</p> <p>The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The estimated eye positions of the new algorithm were still noisy. However, the performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for light-weight and portable horizontal eye gaze angle estimation suitable for a broad range of applications. For instance, for hearing aids to steer the directivity of microphones in the direction of the user's eye gaze.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20060008093','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20060008093"><span>Data-Rate Estimation for Autonomous Receiver Operation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Tkacenko, A.; Simon, M. 
K.</p> <p>2005-01-01</p> <p>In this article, we present a series of algorithms for estimating the data rate of a signal whose admissible data rates are integer base, integer powered multiples of a known basic data rate. These algorithms can be applied to the Electra radio currently used in the Deep Space Network (DSN), which employs data rates having the above relationship. The estimation is carried out in an autonomous setting in which very little a priori information is assumed. It is done by exploiting an elegant property of the split symbol moments estimator (SSME), which is traditionally used to estimate the signal-to-noise ratio (SNR) of the received signal. By quantizing the assumed symbol-timing error or jitter, we present an all-digital implementation of the SSME which can be used to jointly estimate the data rate, SNR, and jitter. Simulation results presented show that these joint estimation algorithms perform well, even in the low SNR regions typically encountered in the DSN.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4481961','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4481961"><span>A Self-Alignment Algorithm for SINS Based on Gravitational Apparent Motion and Sensor Data Denoising</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Liu, Yiting; Xu, Xiaosu; Liu, Xixiang; Yao, Yiqing; Wu, Liang; Sun, Jin</p> <p>2015-01-01</p> <p>Initial alignment is always a key topic and difficult to achieve in an inertial navigation system (INS). In this paper a novel self-initial alignment algorithm is proposed using gravitational apparent motion vectors at three different moments and vector-operation. 
Simulation and analysis showed that this method easily suffers from the random noise contained in the accelerometer measurements, which are used to construct the apparent motion directly. To resolve this problem, an online sensor data denoising method based on a Kalman filter is proposed, and a novel reconstruction method for apparent motion is designed to avoid collinearity among the vectors participating in the alignment solution. Simulation, turntable tests and vehicle tests indicate that the proposed alignment algorithm can fulfill initial alignment of strapdown INS (SINS) under both static and swinging conditions. The accuracy can either reach or approach the theoretical values determined by sensor precision under static or swinging conditions. PMID:25923932</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20030107271','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20030107271"><span>Prediction of Aerodynamic Coefficients for Wind Tunnel Data using a Genetic Algorithm Optimized Neural Network</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rajkumar, T.; Aragon, Cecilia; Bardina, Jorge; Britten, Roy</p> <p>2002-01-01</p> <p>A fast, reliable way of predicting aerodynamic coefficients is produced using a neural network optimized by a genetic algorithm. Basic aerodynamic coefficients (e.g. lift, drag, pitching moment) are modelled as functions of angle of attack and Mach number. The neural network is first trained on a relatively rich set of data from wind tunnel tests or numerical simulations to learn an overall model. Most of the aerodynamic parameters can be well-fitted using polynomial functions. A new set of data, which can be relatively sparse, is then supplied to the network to produce a new model consistent with the previous model and the new data.
Because the new model interpolates realistically between the sparse test data points, it is suitable for use in piloted simulations. The genetic algorithm is used to choose a neural network architecture that gives the best results, avoiding over- and under-fitting of the test data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4061264','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4061264"><span>Real-Time Tracking of Knee Adduction Moment in Patients with Knee Osteoarthritis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kang, Sang Hoon; Lee, Song Joo; Zhang, Li-Qun</p> <p>2014-01-01</p> <p>Background The external knee adduction moment (EKAM) is closely associated with the presence, progression, and severity of knee osteoarthritis (OA). However, there is a lack of convenient and practical methods to estimate and track in real time the EKAM of patients with knee OA for clinical evaluation and gait training, especially outside of gait laboratories. New Method A real-time EKAM estimation method was developed and applied to track and investigate the EKAM and other knee moments during stepping on an elliptical trainer in both healthy subjects and a patient with knee OA. Results Substantial changes were observed in the EKAM and other knee moments during stepping in the patient with knee OA. Comparison with Existing Method(s) This is the first study to develop and test the feasibility of a real-time tracking method of the EKAM in patients with knee OA using 3-D inverse dynamics.
Conclusions The study provides an accurate and practical method to evaluate in real time the critical EKAM associated with knee OA, which is expected to help diagnose and evaluate patients with knee OA and provide them with real-time EKAM feedback during rehabilitation training. PMID:24361759</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1214938','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1214938"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Orozco, Luis A</p> <p></p> <p>This is a report of the construction of a Francium Trapping Facility (FTF) at the Isotope Separator and Accelerator (ISAC) of TRIUMF in Vancouver, Canada, where the Francium Parity Non Conservation (FrPNC) international collaboration has its home. This facility will be used to study fundamental symmetries with high-resolution atomic spectroscopy. The primary scientific objective of the program is a measurement of the anapole moment of francium in a chain of isotopes by observing the parity violation induced by the weak interaction. The anapole moment of francium and the associated signal are expected to be ten times larger than in cesium, the only element in which an anapole moment has been observed. The measurement will provide crucial information for better understanding weak hadronic interactions in the context of Quantum Chromodynamics (QCD). The methodology combines nuclear and particle physics techniques for the production of francium with precision measurements based on laser cooling and trapping and microwave spectroscopy.
The program builds on an initial series of atomic spectroscopy measurements of the nuclear structure of francium, based on isotope shifts and hyperfine anomalies, before conducting the anapole moment measurements; these measurements, performed during commissioning runs, help in understanding the atomic and nuclear structure of Fr.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20110003168','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20110003168"><span>Experimental Investigations of the NASA Common Research Model in the NASA Langley National Transonic Facility and NASA Ames 11-Ft Transonic Wind Tunnel (Invited)</span></a></p> <p><a target="_blank" rel="noopener noreferrer"
href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rivers, S. M.; Dittberner, Ashley</p> <p>2011-01-01</p> <p>Experimental aerodynamic investigations of the NASA Common Research Model have been conducted in the NASA Langley National Transonic Facility and the NASA Ames 11-ft wind tunnel. Data have been obtained at chord Reynolds numbers of 5 million for five different configurations at both wind tunnels. Force and moment, surface pressure and surface flow visualization data were obtained in both facilities but only the force and moment data are presented herein. Nacelle/pylon, tail effects and tunnel to tunnel variations have been assessed. The data from both wind tunnels show that an addition of a nacelle/pylon gave an increase in drag, decrease in lift and a less nose down pitching moment around the design lift condition of 0.5 and that the tail effects also follow the expected trends. Also, all of the data shown fall within the 2-sigma limits for repeatability. The tunnel to tunnel differences are negligible for lift and pitching moment, while the drag shows a difference of less than ten counts for all of the configurations. 
These differences in drag may be due to the variation in the sting mounting systems at the two tunnels.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1371900','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1371900"><span>Magnetic properties of Dy nano-islands on graphene</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Anderson, Nathaniel A.; Zhang, Qiang; Hupalo, Myron</p> <p></p> <p>Here, we have determined the magnetic properties of epitaxially grown Dy islands on graphene/SiC(0001) that are passivated by a gold film (deposited in the ultra-high vacuum growth chamber) for ex-situ X-ray magnetic circular dichroism (XMCD). Our sum-rule analysis of the Dy M 4,5 XMCD spectra at low temperatures ( T = 15 K) as a function of magnetic field, assuming Dy 3+ (spin configuration 6 H 15/2), indicates that the projection of the magnetic moment along an applied magnetic field of 5 T is 3.5(3) μ B. The temperature dependence of the magnetic moment (extracted from the M 5 XMCD spectra) shows an onset of a change in magnetic moment at about 175 K, in proximity to the transition from the paramagnetic to the helical magnetic structure at T H = 179 K in bulk Dy. No feature in the vicinity of the ferromagnetic transition of hcp bulk Dy at T c = 88 K is observed.
However, below ~130 K, the inverse magnetic moment (extracted from the XMCD) is linear in temperature, as expected for a paramagnetic system, suggesting that the Dy nano-islands behave differently from bulk Dy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1371900-magnetic-properties-dy-nano-islands-graphene','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1371900-magnetic-properties-dy-nano-islands-graphene"><span>Magnetic properties of Dy nano-islands on graphene</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Anderson, Nathaniel A.; Zhang, Qiang; Hupalo, Myron; ...</p> <p>2017-04-07</p> <p>Here, we have determined the magnetic properties of epitaxially grown Dy islands on graphene/SiC(0001) that are passivated by a gold film (deposited in the ultra-high vacuum growth chamber) for ex-situ X-ray magnetic circular dichroism (XMCD). Our sum-rule analysis of the Dy M 4,5 XMCD spectra at low temperatures ( T = 15 K) as a function of magnetic field, assuming Dy 3+ (spin configuration 6 H 15/2), indicates that the projection of the magnetic moment along an applied magnetic field of 5 T is 3.5(3) μ B. The temperature dependence of the magnetic moment (extracted from the M 5 XMCD spectra) shows an onset of a change in magnetic moment at about 175 K, in proximity to the transition from the paramagnetic to the helical magnetic structure at T H = 179 K in bulk Dy. No feature in the vicinity of the ferromagnetic transition of hcp bulk Dy at T c = 88 K is observed.
However, below ~130 K, the inverse magnetic moment (extracted from the XMCD) is linear in temperature, as expected for a paramagnetic system, suggesting that the Dy nano-islands behave differently from bulk Dy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23629840','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23629840"><span>Time Series Modeling of Nano-Gold Immunochromatographic Assay via Expectation Maximization Algorithm.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui</p> <p>2013-12-01</p> <p>In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as a stochastic dynamic model that consists of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, as well as the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models.
It is shown that the model fits the data very well.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.S51D2721W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.S51D2721W"><span>Monte Carlo Volcano Seismic Moment Tensors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Waite, G. P.; Brill, K. A.; Lanza, F.</p> <p>2015-12-01</p> <p>Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. 
The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70025457','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70025457"><span>Moment-tensor solutions estimated using optimal filter theory: Global seismicity, 2001</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Sipkin, S.A.; Bufe, C.G.; Zirbes, M.D.</p> <p>2003-01-01</p> <p>This paper is the 12th in a series published yearly containing moment-tensor solutions computed at the US Geological Survey using an algorithm based on the theory of optimal filter design (Sipkin, 1982 and Sipkin, 1986b). An inversion has been attempted for all earthquakes with a magnitude, mb or MS, of 5.5 or greater. Previous listings include solutions for earthquakes that occurred from 1981 to 2000 (Sipkin, 1986b; Sipkin and Needham, 1989, Sipkin and Needham, 1991, Sipkin and Needham, 1992, Sipkin and Needham, 1993, Sipkin and Needham, 1994a and Sipkin and Needham, 1994b; Sipkin and Zirbes, 1996 and Sipkin and Zirbes, 1997; Sipkin et al., 1998, Sipkin et al., 1999, Sipkin et al., 2000a, Sipkin et al., 2000b and Sipkin et al., 2002). The entire USGS moment-tensor catalog can be obtained via anonymous FTP at ftp://ghtftp.cr.usgs.gov. After logging on, change directory to “momten”. This directory contains two compressed ASCII files that contain the finalized solutions, “mt.lis.Z” and “fmech.lis.Z”.
“mt.lis.Z” contains the elements of the moment tensors along with detailed event information; “fmech.lis.Z” contains the decompositions into the principal axes and best double-couples. The fast moment-tensor solutions for more recent events that have not yet been finalized and added to the catalog are gathered by month in the files “jan01.lis.Z”, etc. “fmech.doc.Z” describes the various fields.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=algorithm&pg=4&id=EJ990383','ERIC'); return false;" href="https://eric.ed.gov/?q=algorithm&pg=4&id=EJ990383"><span>Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Weissman, Alexander</p> <p>2013-01-01</p> <p>Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented.
The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23036800','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23036800"><span>Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia</p> <p>2013-02-01</p> <p>The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region-of-interest (ROI) settings and reverse processing were applied to improve performance. Both algorithms reduced artifacts while only slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20090037063','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20090037063"><span>A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bhaduri, Kanishka; Srivastava, Ashok N.</p> <p>2009-01-01</p> <p>This paper offers a local distributed algorithm for expectation maximization in large peer-to-peer environments. The algorithm can be used for a variety of well-known data mining tasks in a distributed environment, such as clustering, anomaly detection, and target tracking, to name a few. This technology is crucial for many emerging peer-to-peer applications for bioinformatics, astronomy, social networking, sensor networks and web mining. Centralizing all or some of the data for building global models is impractical in such peer-to-peer environments because of the large number of data sources, the asynchronous nature of the peer-to-peer networks, and the dynamic nature of the data/network. The distributed algorithm we have developed in this paper is provably correct, i.e., it converges to the same result as a similar centralized algorithm, and can automatically adapt to changes in the data and the network. We show that the communication overhead of the algorithm is very low due to its local nature. This monitoring algorithm is then used as a feedback loop to sample data from the network and rebuild the model when it is outdated.
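For context, the centralized baseline that such a distributed scheme is proven to match can be sketched as a textbook EM iteration for a two-component 1-D Gaussian mixture. This is a generic illustration of expectation maximization, not the paper's peer-to-peer algorithm:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Minimal EM for a 2-component 1-D Gaussian mixture.

    Returns (weights, means, variances) after n_iter iterations.
    """
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])          # crude initialization
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = (pi / np.sqrt(2 * np.pi * var) *
                np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0) + 1e-12
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var
```

The distributed version's challenge is performing the M-step sums when the data never leave the peers; the abstract's claim is that local messaging suffices to reach the same fixed point.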
We present thorough experimental results to verify our theoretical claims.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19810014192','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19810014192"><span>SCI model structure determination program (OSR) user's guide. [optimal subset regression</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p></p> <p>1979-01-01</p> <p>The computer program OSR (Optimal Subset Regression), which estimates models for rotorcraft body and rotor force and moment coefficients, is described. The technique used is based on the subset regression algorithm. Given time histories of aerodynamic coefficients, aerodynamic variables, and control inputs, the program computes correlations between the various time histories. The model structure determination is based on these correlations. Inputs and outputs of the program are given.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA598811','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA598811"><span>Moment-Based Physical Models of Broadband Clutter due to Aggregations of Fish</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2013-09-30</p> <p>statistical models for signal-processing algorithm development. These in turn will help to develop a capability to statistically forecast the impact of...aggregations of fish based on higher-order statistical measures describable in terms of physical and system parameters. Environmentally, these models...processing.
In this experiment, we had good ground truth on (1) and (2), and had control over (3) and (4) except for environmentally-imposed restrictions</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1982mcdd.reptQ....M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1982mcdd.reptQ....M"><span>Algorithm for Surface of Translation Attached Radiators (A-STAR). Volume 1: Formulation of the analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Medgyesimitschang, L. N.; Putnam, J. M.</p> <p>1982-05-01</p> <p>A general analytical formulation, based on the method of moments (MM), is described for solving electromagnetic problems associated with off-surface (wire) and aperture radiators on finite-length cylinders of arbitrary cross section, denoted in this report as bodies of translation (BOT). This class of bodies can be used to model structures with noncircular cross sections such as wings, fins and aircraft fuselages.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA219673','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA219673"><span>Sequential Decoding with Adaptive Reordering of Codeword Trees</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1990-03-01</p> <p>mother, and my brother Alexander. ACKNOWLEDGEMENTS I would like to thank my two advisors, Professor Erdal Arikan and Professor Bruce Hajek, for their...invaluable assistance and guidance. In particular, Professor Arikan provided conceptual insight and the original idea behind SDR algorithms, and...the codeword tree to result in the unbounded moments of computation described above.
Note that Arikan [1] has obtained an improved bound for the case</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA631423','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA631423"><span>Summary of Progress on SIG Ft. Ord ESTCP DemVal</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2007-04-01</p> <p>We report on progress under an ESTCP demonstration plan dedicated to demonstrating active-learning-based UXO detection on an actual former UXO site...Ft. Ord), using EMI data. In addition to describing the details of the active-learning algorithm, we discuss techniques that were required when...terms of two dipole-moment magnitudes and two resonant frequencies. Information-theoretic active learning is then conducted on all anomalies to</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006SPIE.6381E..05R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006SPIE.6381E..05R"><span>Noninvasive forward-scattering system for rapid detection, characterization, and identification of Listeria colonies: image processing and data analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rajwa, Bartek; Bayraktar, Bulent; Banada, Padmapriya P.; Huff, Karleigh; Bae, Euiwon; Hirleman, E. Daniel; Bhunia, Arun K.; Robinson, J. Paul</p> <p>2006-10-01</p> <p>Bacterial contamination by Listeria monocytogenes puts the public at risk and is also costly for the food-processing industry. Traditional methods for pathogen identification require complicated sample preparation for reliable results.
Previously, we have reported the development of a noninvasive optical forward-scattering system for rapid identification of Listeria colonies grown on solid surfaces. The presented system included application of computer-vision and pattern-recognition techniques to classify scatter patterns formed by bacterial colonies irradiated with laser light. This report shows an extension of the proposed method. A new scatterometer equipped with a high-resolution CCD chip and application of two additional sets of image features for classification allow for higher accuracy and lower error rates. Features based on Zernike moments are supplemented by Tchebichef moments and Haralick texture descriptors in the new version of the algorithm. Fisher's criterion has been used for feature selection to decrease the training time of machine learning systems. An algorithm based on support vector machines was used for classification of patterns. Low error rates determined by cross-validation, reproducibility of the measurements, and robustness of the system prove that the proposed technology can be implemented in automated devices for detection and classification of pathogenic bacteria.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005SPIE.5839..211R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005SPIE.5839..211R"><span>Interpretation of the instantaneous frequency of phonocardiogram signals</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rey, Alexis B.</p> <p>2005-06-01</p> <p>Short-Time Fourier transforms, the Wigner-Ville distribution, and Wavelet Transforms have been commonly used when dealing with non-stationary signals, and they have been known as time-frequency distributions.
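A standard instantaneous-frequency estimator built on such time-frequency distributions is the normalized first moment over frequency at each time instant. A generic sketch, assuming a precomputed nonnegative distribution (e.g., a spectrogram magnitude); this is an illustration of the idea, not the authors' Matlab code:

```python
import numpy as np

def if_from_tfd(tfd, freqs):
    """Instantaneous frequency as the normalized first moment of a
    time-frequency distribution over frequency.

    tfd:   (n_freq, n_time) nonnegative energy distribution
    freqs: (n_freq,) frequency axis
    Returns one IF estimate per time column.
    """
    energy = tfd.sum(axis=0)                       # total energy per column
    return (freqs[:, None] * tfd).sum(axis=0) / np.maximum(energy, 1e-12)
```

For a distribution whose energy is concentrated along a single ridge, this moment tracks the ridge frequency directly.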
It is also common to investigate the behaviour of phonocardiogram signals as a means of predicting some of the pathologies of the human heart. To this end, this paper aims to analyze the relationship between the instantaneous frequency of a PCG signal and the aforementioned time-frequency distributions; three algorithms using Matlab functions have been developed: the first one, the estimation of the IF using the normalized linear moment, the second one, the estimation of the IF using the periodic first moment, and the third one, the computing of the WVD. Meanwhile, the computing of the STFT spectrogram is carried out with a Matlab function. Several simulations of the spectrogram for a set of PCG signals and the estimation of the IF are shown, and their relationship is validated through correlation. Finally, the second algorithm is a better choice because the estimation is not biased, whereas the WVD is very computationally demanding and offers no benefit, since IF estimation with this TFD gives a result equivalent to the derivative of the phase of the analytic signal, which is also less computationally demanding.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27347872','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27347872"><span>Lower Extremity Movement Differences Persist After Anterior Cruciate Ligament Reconstruction and When Returning to Sports.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Butler, Robert J; Dai, Boyi; Huffman, Nikki; Garrett, William E; Queen, Robin M</p> <p>2016-09-01</p> <p>To examine how landing mechanics change in patients after anterior cruciate ligament reconstruction (ACL-R) between 6 months and 12 months after surgery. Case-series. Laboratory. Fifteen adolescent patients after ACL-R participated.
Lower extremity three-dimensional motion analysis was conducted during a bilateral stop jump task in patients at 6 and 12 months after ACL-R. Joint kinematic and kinetic data, in addition to ground reaction forces, were collected at each time point. During the stop jump landing, the peak joint moments and the initial and peak joint motion at the ankle, knee, and hip were examined. The peak vertical ground reaction force was also examined. Interactions were observed for both the peak knee (P = 0.03) and hip extension moment (P = 0.07). However, only the hip extension moment reached a symmetrical level at 12 months. Statistically significant (P < 0.05) side-to-side differences existed for the ankle angle at initial contact, peak plantarflexion moment, peak hip flexion angle, and peak impact vertical ground reaction force independent of time. The findings of this study suggest that sagittal plane moments at the knee and hip demonstrate an increase in symmetry between 6 months and 1 year after ACL-R surgery; however, symmetry of the knee extension moment is not established by 12 months after surgery. The lack of change in the variables across time was unexpected.
As a result, it is inappropriate to expect a change in landing mechanics solely as a result of time after discharge from rehabilitation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1332693','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1332693"><span>Fast Demand Forecast of Electric Vehicle Charging Stations for Cell Phone Application</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Majidpour, Mostafa; Qiu, Charlie; Chung, Ching-Yen</p> <p></p> <p>This paper describes the core cellphone application algorithm which has been implemented for the prediction of energy consumption at Electric Vehicle (EV) Charging Stations at UCLA. For this interactive user application, the total time of accessing the database, processing the data, and making the prediction needs to be within a few seconds. We analyze four relatively fast Machine-Learning-based time-series prediction algorithms for our prediction engine: Historical Average, k-Nearest Neighbor, Weighted k-Nearest Neighbor, and Lazy Learning. The Nearest Neighbor algorithm (k-Nearest Neighbor with k=1) shows better performance and is selected to be the prediction algorithm implemented for the cellphone application. Two applications have been designed on top of the prediction algorithm: one predicts the expected available energy at the station and the other one predicts the expected charging finishing time.
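The nearest-neighbor forecasting rule selected in this entry amounts to matching the most recent window of the series against past windows and reading off what followed the best match. A simplified sketch with hypothetical data (the deployed system queries the actual charging-station database):

```python
import numpy as np

def knn_forecast(history, window, k=1):
    """Predict the next value of a series from its k most similar
    past windows (k-NN time-series forecasting; k=1 is the
    nearest-neighbor rule)."""
    history = np.asarray(history, dtype=float)
    query = history[-window:]                       # most recent window
    # candidate windows that have a known "next value"
    cands = np.array([history[i:i + window]
                      for i in range(len(history) - window)])
    nxt = history[window:]                          # value following each window
    d = np.linalg.norm(cands - query, axis=1)       # Euclidean distances
    nearest = np.argsort(d)[:k]
    return nxt[nearest].mean()
```

With k=1 the prediction is simply the value that followed the single closest historical window, which keeps the per-query cost low enough for an interactive application.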
The total time, including accessing the database, data processing, and prediction, is about one second for both applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.887a2008Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.887a2008Y"><span>Combing VFH with bezier for motion planning of an autonomous vehicle</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ye, Feng; Yang, Jing; Ma, Chao; Rong, Haijun</p> <p>2017-08-01</p> <p>Vector Field Histogram (VFH) is a method for mobile robot obstacle avoidance. However, due to the nonholonomic constraints of the vehicle, the algorithm is seldom applied to autonomous vehicles. Especially when we expect the vehicle to reach the target location in a certain direction, the algorithm is often unsatisfactory. Fortunately, the Bezier Curve is defined by the states of the starting point and the target point. We can use this feature to make the vehicle arrive in the expected direction. Therefore, we propose an algorithm to combine the Bezier Curve with the VFH algorithm, to search for the collision-free states with the VFH search method, and to select the optimal trajectory point with the Bezier Curve as the reference line. This means that we will improve the cost function in the VFH algorithm by comparing the distance between candidate directions and the reference line.
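The cost-function idea, scoring each collision-free candidate heading by its distance to a cubic Bezier reference, can be sketched as follows. The helper names are hypothetical, and in the real planner the control points p1 and p2 would be derived from the vehicle's start and goal headings:

```python
import numpy as np

def bezier_point(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t in [0, 1];
    p1 and p2 encode the headings at the start and target states."""
    t = float(t)
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def pick_direction(candidates_deg, reference_deg):
    """Among collision-free candidate headings (e.g., valleys found by
    a VFH-style search), pick the one closest to the reference heading
    taken from the Bezier curve, with wrap-around at +/-180 degrees."""
    diff = (np.asarray(candidates_deg) - reference_deg + 180) % 360 - 180
    return candidates_deg[int(np.argmin(np.abs(diff)))]
```

The angular wrap-around in `pick_direction` matters: a candidate at 350 degrees is only 15 degrees away from a reference at 5 degrees, not 345.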
Finally, the direction closest to the reference line is selected as the optimal motion direction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28254077','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28254077"><span>Automated identification of sleep states from EEG signals by means of ensemble empirical mode decomposition and random under sampling boosting.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hassan, Ahnaf Rashik; Bhuiyan, Mohammed Imamul Hassan</p> <p>2017-03-01</p> <p>Automatic sleep staging is essential for alleviating the burden on physicians of analyzing a large volume of data by visual inspection. It is also a precondition for making an automated sleep monitoring system feasible. Further, computerized sleep scoring will expedite large-scale data analysis in sleep research. Nevertheless, most of the existing works on sleep staging are based on either multiple channels or multiple physiological signals, which is uncomfortable for the user and hinders the feasibility of an in-home sleep monitoring device. So, a successful and reliable computer-assisted sleep staging scheme is yet to emerge. In this work, we propose a single-channel EEG-based algorithm for computerized sleep scoring. In the proposed algorithm, we decompose EEG signal segments using Ensemble Empirical Mode Decomposition (EEMD) and extract various statistical-moment-based features. The effectiveness of EEMD and statistical features is investigated. Statistical analysis is performed for feature selection. A newly proposed classification technique, namely, Random Under Sampling Boosting (RUSBoost), is introduced for sleep stage classification. This is the first implementation of EEMD in conjunction with RUSBoost to the best of the authors' knowledge.
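The statistical-moment features mentioned here are, at their core, the first four (standardized) moments of each signal segment or EEMD mode. A minimal generic sketch, not the authors' exact feature set:

```python
import numpy as np

def moment_features(x):
    """Statistical-moment features of a signal segment (or one EEMD
    mode): mean, variance, skewness, and kurtosis."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    c = x - mu                      # centered samples
    var = (c ** 2).mean()           # second central moment
    sd = np.sqrt(var)
    skew = (c ** 3).mean() / sd ** 3   # standardized third moment
    kurt = (c ** 4).mean() / sd ** 4   # standardized fourth moment
    return np.array([mu, var, skew, kurt])
```

One such feature vector per decomposed mode, concatenated across modes, would then feed the classifier.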
The proposed feature extraction scheme's performance is investigated for various choices of classification models. The algorithmic performance of our scheme is evaluated against contemporary works in the literature. The performance of the proposed method is comparable to or better than that of the state-of-the-art ones. The proposed algorithm gives accuracies of 88.07%, 83.49%, 92.66%, 94.23%, and 98.15% for 6-state to 2-state classification of sleep stages on the Sleep-EDF database. Our experimental outcomes reveal that RUSBoost outperforms other classification models for the feature extraction framework presented in this work. Besides, the algorithm proposed in this work demonstrates high detection accuracy for the sleep states S1 and REM. Statistical-moment-based features in the EEMD domain distinguish the sleep states successfully and efficiently. The automated sleep scoring scheme proposed herein can ease the burden on clinicians, contribute to the device implementation of a sleep monitoring system, and benefit sleep research. Copyright © 2016 Elsevier Ireland Ltd. 
All rights reserved.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016NIMPA.828..116S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016NIMPA.828..116S"><span>Electromagnetic Simulation and Design of a Novel Waveguide RF Wien Filter for Electric Dipole Moment Measurements of Protons and Deuterons</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Slim, J.; Gebel, R.; Heberling, D.; Hinder, F.; Hölscher, D.; Lehrach, A.; Lorentz, B.; Mey, S.; Nass, A.; Rathmann, F.; Reifferscheidt, L.; Soltner, H.; Straatmann, H.; Trinkel, F.; Wolters, J.</p> <p>2016-08-01</p> 
<p>The conventional Wien filter is a device with orthogonal static magnetic and electric fields, often used for velocity separation of charged particles. Here we describe the electromagnetic design calculations for a novel waveguide RF Wien filter that will be employed to solely manipulate the spins of protons or deuterons at frequencies of about 0.1-2 MHz at the COoler SYnchrotron COSY at Jülich. The device will be used in a future experiment that aims at measuring the proton and deuteron electric dipole moments, which are expected to be very small. Their determination, however, would have a huge impact on our understanding of the universe.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25591340','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25591340"><span>The fast multipole method and point dipole moment polarizable force fields.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Coles, Jonathan P; Masella, Michel</p> <p>2015-01-14</p> <p>We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force-fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of that approach by performing single energy point calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show the long time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. 
Our tests show the applicability of the fast multipole method combined with state-of-the-art chemical models in molecular dynamics simulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18352577','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18352577"><span>Electronic state of PuCoGa5 and NpCoGa5 as probed by polarized neutrons.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hiess, A; Stunault, A; Colineau, E; Rebizant, J; Wastin, F; Caciuffo, R; Lander, G H</p> <p>2008-02-22</p> <p>By using single crystals and polarized neutrons, we have measured the orbital and spin components of the microscopic magnetization in the paramagnetic state of NpCoGa(5) and PuCoGa(5). The microscopic magnetization of NpCoGa(5) agrees with that observed in bulk susceptibility measurements and the magnetic moment has spin and orbital contributions as expected for intermediate coupling. In contrast, for PuCoGa(5), which is a superconductor with a high transition temperature, the microscopic magnetization in the paramagnetic state is small, temperature-independent, and significantly below the value found with bulk techniques at low temperatures. The orbital moment dominates the magnetization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/11736605','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/11736605"><span>Portfolios of quantum algorithms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Maurer, S M; Hogg, T; Huberman, B A</p> <p>2001-12-17</p> <p>Quantum computation holds promise for the solution of many intractable problems. 
However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ1127184.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ1127184.pdf"><span>Educative Supervision in International Cooperation Contexts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Ortiz, Ana; Valdivia-Moral, Pedro; Cachón, Javier; Prieto, Joel</p> <p>2015-01-01</p> <p>The present paper has a clear goal: to contextualize education for development at the present moment, tracing its evolution and the keys that characterize it, drawn from the most representative and up-to-date works. The experience that we present is integrated into a project developed in Paraguay that is expected to describe…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=time+AND+travel+AND+real&id=EJ838928','ERIC'); return false;" href="https://eric.ed.gov/?q=time+AND+travel+AND+real&id=EJ838928"><span>Taking Your Library on the Road</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Weldon, Lorette S. 
J.</p> <p>2009-01-01</p> <p>Information professionals need to be reachable through email (through cell phones, laptops, Treos, and BlackBerries) and customers' questions have to be answered in "real time," meaning that once the question is sent, an answer is expected that moment. A library in a Google environment allows this to happen. It also allows the information…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ1122554.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ1122554.pdf"><span>Determination of Classroom Pre-Service Teachers' State of Personal Innovativeness</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Yorulmaz, Alper; Çokçaliskan, Halil; Önal, Halil</p> <p>2017-01-01</p> <p>Today, in every passing moment, a new piece of information is acquired and the accumulation of this information leads to social and technological developments. Therefore, today, individuals are expected to rapidly adjust to innovations. 
As such, individuals should be open to innovations and willing to adopt innovations; that is, they need to be…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/ED571004.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/ED571004.pdf"><span>Latinas/os in Community College Developmental Education: Increasing Moments of Academic and Interpersonal Validation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Acevedo-Gil, Nancy; Solorzano, Daniel G.; Santos, Ryan E.</p> <p>2014-01-01</p> <p>This qualitative study examines the experiences of Latinas/os in community college English and math developmental education courses. Critical race theory in education and the theory of validation serve as guiding frameworks. The authors find that institutional agents provide academic validation by emphasizing high expectations, focusing on social…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/ED482134.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/ED482134.pdf"><span>An Economic Approach to Setting Contribution Limits in Qualified State-Sponsored Tuition Savings Plans.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Ma, Jennifer; Warshawsky, Mark J.; Ameriks, John; Blohm, Julia A.</p> <p></p> <p>This study used an expected utility framework with a mean-lower partial moment specification for investor utility to determine the asset allocation and the allowable contribution limits for qualified state-sponsored tuition savings plans. 
Given the assumptions about state policymakers' perceptions of investor utility, the study determined the…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=validation&pg=2&id=EJ1054361','ERIC'); return false;" href="https://eric.ed.gov/?q=validation&pg=2&id=EJ1054361"><span>Latinas/os in Community College Developmental Education: Increasing Moments of Academic and Interpersonal Validation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Acevedo-Gil, Nancy; Santos, Ryan E.; Alonso, LLuliana; Solorzano, Daniel G.</p> <p>2015-01-01</p> <p>This qualitative study examines the experiences of Latinas/os in community college English and math developmental education courses. Critical race theory in education and the theory of validation serve as guiding frameworks. The authors find that institutional agents provide academic validation by emphasizing high expectations, focusing on social…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvD..96c4515B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvD..96c4515B"><span>Using infinite-volume, continuum QED and lattice QCD for the hadronic light-by-light contribution to the muon anomalous magnetic moment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Blum, Thomas; Christ, Norman; Hayakawa, Masashi; Izubuchi, Taku; Jin, Luchang; Jung, Chulwoo; Lehner, Christoph</p> <p>2017-08-01</p> <p>In our previous work, Blum et al. [Phys. Rev. Lett. 
118, 022005 (2017), 10.1103/PhysRevLett.118.022005], the connected and leading disconnected hadronic light-by-light contributions to the muon anomalous magnetic moment (g − 2) have been computed using lattice QCD ensembles corresponding to physical pion mass generated by the RBC/UKQCD Collaboration. However, the calculation is expected to suffer from a significant finite-volume error that scales like 1/L² where L is the spatial size of the lattice. In this paper, we demonstrate that this problem is cured by treating the muon and photons in infinite-volume, continuum QED, resulting in a weighting function that is precomputed and saved with affordable cost and sufficient accuracy. We present numerical results for the case when the quark loop is replaced by a muon loop, finding the expected exponential approach to the infinite-volume limit and consistency with the known analytic result. We have implemented an improved weighting function which reduces both discretization and finite-volume effects arising from the hadronic part of the amplitude.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1408711-using-infinite-volume-continuum-qed-lattice-qcd-hadronic-light-light-contribution-muon-anomalous-magnetic-moment','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1408711-using-infinite-volume-continuum-qed-lattice-qcd-hadronic-light-light-contribution-muon-anomalous-magnetic-moment"><span>Using infinite-volume, continuum QED and lattice QCD for the hadronic light-by-light contribution to the muon anomalous magnetic moment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Blum, Thomas; Christ, Norman; Hayakawa, Masashi; ...</p> <p>2017-08-22</p> <p>In our previous work, the connected and leading disconnected hadronic light-by-light contributions to the muon anomalous magnetic moment (g − 2) have been computed
using lattice QCD ensembles corresponding to physical pion mass generated by the RBC/UKQCD Collaboration. However, the calculation is expected to suffer from a significant finite-volume error that scales like 1/L² where L is the spatial size of the lattice. In this paper, we demonstrate that this problem is cured by treating the muon and photons in infinite-volume, continuum QED, resulting in a weighting function that is precomputed and saved with affordable cost and sufficient accuracy. We present numerical results for the case when the quark loop is replaced by a muon loop, finding the expected exponential approach to the infinite-volume limit and consistency with the known analytic result. Here, we have implemented an improved weighting function which reduces both discretization and finite-volume effects arising from the hadronic part of the amplitude.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1408711-using-infinite-volume-continuum-qed-lattice-qcd-hadronic-light-light-contribution-muon-anomalous-magnetic-moment','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1408711-using-infinite-volume-continuum-qed-lattice-qcd-hadronic-light-light-contribution-muon-anomalous-magnetic-moment"><span>Using infinite-volume, continuum QED and lattice QCD for the hadronic light-by-light contribution to the muon anomalous magnetic moment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Blum, Thomas; Christ, Norman; Hayakawa, Masashi</p> <p></p> <p>In our previous work, the connected and leading disconnected hadronic light-by-light contributions to the muon anomalous magnetic moment (g − 2) have been computed using lattice QCD ensembles corresponding to physical pion mass generated by the RBC/UKQCD Collaboration.
However, the calculation is expected to suffer from a significant finite-volume error that scales like 1/L² where L is the spatial size of the lattice. In this paper, we demonstrate that this problem is cured by treating the muon and photons in infinite-volume, continuum QED, resulting in a weighting function that is precomputed and saved with affordable cost and sufficient accuracy. We present numerical results for the case when the quark loop is replaced by a muon loop, finding the expected exponential approach to the infinite-volume limit and consistency with the known analytic result. Here, we have implemented an improved weighting function which reduces both discretization and finite-volume effects arising from the hadronic part of the amplitude.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120006655','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120006655"><span>Planning the FUSE Mission Using the SOVA Algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lanzi, James; Heatwole, Scott; Ward, Philip R.; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly</p> <p>2011-01-01</p> <p>Three documents discuss the Sustainable Objective Valuation and Attainability (SOVA) algorithm and software as used to plan tasks (principally, scientific observations and associated maneuvers) for the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. SOVA is a means of managing risk in a complex system, based on a concept of computing the expected return value of a candidate ordered set of tasks as a product of pre-assigned task values and assessments of attainability made against qualitatively defined strategic objectives.
For the FUSE mission, SOVA autonomously assembles a week-long schedule of target observations and associated maneuvers so as to maximize the expected scientific return value while keeping the satellite stable, managing the angular momentum of spacecraft attitude-control reaction wheels, and striving for other strategic objectives. A six-degree-of-freedom model of the spacecraft is used in simulating the tasks, and the attainability of a task is calculated at each step by use of strategic objectives as defined by use of fuzzy inference systems. SOVA utilizes a variant of a graph-search algorithm known as the A* search algorithm to assemble the tasks into a week-long target schedule, using the expected scientific return value to guide the search.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19850004526','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19850004526"><span>Algorithm for astronomical, point source, signal to noise ratio calculations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jayroe, R. R.; Schroeder, D. J.</p> <p>1984-01-01</p> <p>An algorithm was developed to simulate the expected signal to noise ratios as a function of observation time in the charge coupled device detector plane of an optical telescope located outside the Earth's atmosphere for a signal star, and an optional secondary star, embedded in a uniform cosmic background.
By choosing the appropriate input values, the expected point source signal to noise ratio can be computed for the Hubble Space Telescope using the Wide Field/Planetary Camera science instrument.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.911a2013M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.911a2013M"><span>Relative Positioning Evaluation of a Tetrahedral Flight Formation’s Satellites</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mahler, W. F. C.; Rocco, E. M.; Santos, D. P. S.</p> <p>2017-10-01</p> <p>This paper presents a study of the tetrahedral layout of four satellites in which, every half-orbital period, the set groups together while flying in formation. The formation is calculated by analyzing the problem from a geometrical perspective and is disposed by precisely adjusting the orbital parameters of each satellite. The dynamic modelling considers the orbital motion equations. The results are analyzed, compared and discussed. A detection algorithm is used as a flag to signal the regular tetrahedron’s exact moments of occurrence. To do so, the volume calculated during the simulation is compared to the real volume, based on the initial conditions of the exact moment of formation and respecting a tolerance. This tolerance value is established arbitrarily depending on the mission and the formation’s geometrical parameters.
The simulations are run in a computational environment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015CEEng..11..110P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015CEEng..11..110P"><span>Updating the Nomographical Diagrams for Dimensioning the Beams</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pop, Maria T.</p> <p>2015-12-01</p> <p>In order to reduce the time needed for structural design, it is strongly recommended to use nomographical diagrams. The basis for creating and updating the nomographical diagrams is the charts presented in various technical publications. The updated charts use the same algorithm and calculation elements as the former diagrams, in accordance with the latest prescriptions and European standards. The result is a chart with the same properties, similar to the nomographical diagrams already in use. As a general conclusion, even today, the nomographical diagrams are very easy to use. Given the value of the moment, it is easy to find the necessary reinforcement area and, vice versa, given the reinforcement area one can find the capable moment.
They still remain a useful tool for pre-sizing and designing reinforced concrete sections.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15798799','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15798799"><span>Precipitation, pH and metal load in AMD river basins: an application of fuzzy clustering algorithms to the process characterization.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Grande, J A; Andújar, J M; Aroba, J; de la Torre, M L; Beltrán, R</p> <p>2005-04-01</p> <p>In the present work, Acid Mine Drainage (AMD) processes in the Chorrito Stream, which flows into the Cobica River (Iberian Pyrite Belt, Southwest Spain), are characterized by means of clustering techniques based on fuzzy logic. Also, pH behavior in contrast to precipitation is clearly explained, proving that the influence of rainfall inputs on the acidity and, as a result, on the metal load of a riverbed undergoing AMD processes highly depends on the moment when it occurs. In general, the riverbed's dynamic behavior is the response to the sum of instant stimuli produced by isolated rainfall, the seasonal memory depending on the moment of the target hydrological year and, finally, the river basin's own inertia, resulting from an accumulation process caused by age-long mining activity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EPJC...77..181B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EPJC...77..181B"><span>On the search for the electric dipole moment of strange and charm baryons at LHC</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Botella, F. J.; Garcia Martin, L.
M.; Marangotto, D.; Martinez Vidal, F.; Merli, A.; Neri, N.; Oyanguren, A.; Ruiz Vidal, J.</p> <p>2017-03-01</p> <p>Permanent electric dipole moments (EDMs) of fundamental particles provide powerful probes for physics beyond the Standard Model. We propose to search for the EDM of strange and charm baryons at LHC, extending the ongoing experimental program on the neutron, muon, atoms, molecules and light nuclei. The EDM of strange Λ baryons, selected from weak decays of charm baryons produced in pp collisions at LHC, can be determined by studying the spin precession in the magnetic field of the detector tracking system. A test of CPT symmetry can be performed by measuring the magnetic dipole moment of Λ and Λ̄ baryons. For short-lived Λc+ and Ξc+ baryons, to be produced in a fixed-target experiment using the 7 TeV LHC beam and channeled in a bent crystal, the spin precession is induced by the intense electromagnetic field between crystal atomic planes. The experimental layout based on the LHCb detector and the expected sensitivities in the coming years are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100021932','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100021932"><span>Space Station Control Moment Gyroscope Lessons Learned</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Gurrisi, Charles; Seidel, Raymond; Dickerson, Scott; Didziulis, Stephen; Frantz, Peter; Ferguson, Kevin</p> <p>2010-01-01</p> <p>Four 4760 Nms (3510 ft-lbf-s) Double Gimbal Control Moment Gyroscopes (DGCMG) with unlimited gimbal freedom about each axis were adopted by the International Space Station (ISS) Program as the non-propulsive solution for continuous attitude control.
These CMGs, with a life expectancy of approximately 10 years, contain a flywheel spinning at 691 rad/s (6600 rpm) and can produce an output torque of 258 Nm (190 ft-lbf). One CMG unexpectedly failed after approximately 1.3 years and one developed anomalous behavior after approximately six years. Both units were returned to Earth for failure investigation. This paper describes the Space Station Double Gimbal Control Moment Gyroscope design, on-orbit telemetry signatures and a summary of the results of both failure investigations. The lessons learned from these combined sources have led to improvements in the design that will provide CMGs with greater reliability to assure the success of the Space Station. These lessons learned and design improvements are not only applicable to CMGs but can be applied to spacecraft mechanisms in general.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div>
</div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1329167-single-domain-multiferroic-bifeo3-films','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1329167-single-domain-multiferroic-bifeo3-films"><span>Single-domain multiferroic BiFeO₃ films</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Kuo, Chang-Yang; Hu, Z.; Yang, J. C.; ...</p> <p>2016-09-01</p> <p>The strong coupling between antiferromagnetism and ferroelectricity at room temperature found in BiFeO₃ generates high expectations for the design and development of technological devices with novel functionalities. However, the multi-domain nature of the material tends to nullify the properties of interest and complicates the thorough understanding of the mechanisms that are responsible for those properties. Here we report the realization of a BiFeO₃ material in thin film form with single-domain behaviour in both its magnetism and ferroelectricity: the entire film shows its antiferromagnetic axis aligned along the crystallographic b axis and its ferroelectric polarization along the c axis. With this we are able to reveal that the canted ferromagnetic moment due to the Dzyaloshinskii–Moriya interaction is parallel to the a axis.
Moreover, by fabricating a Co/BiFeO₃ heterostructure, we demonstrate that the ferromagnetic moment of the Co film does couple directly to the canted moment of BiFeO₃.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1411418-first-lattice-qcd-study-gluonic-structure-light-nuclei','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1411418-first-lattice-qcd-study-gluonic-structure-light-nuclei"><span>First lattice QCD study of the gluonic structure of light nuclei</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Winter, Frank; Detmold, William; Gambhir, Arjun S.</p> <p></p> <p>The role of gluons in the structure of the nucleon and light nuclei is investigated using lattice quantum chromodynamics (QCD) calculations. The first moment of the unpolarised gluon distribution is studied in nuclei up to atomic number A = 3 at quark masses corresponding to pion masses of m_π ∼ 450 and 806 MeV. Nuclear modification of this quantity defines a gluonic analogue of the EMC effect and is constrained to be less than ∼10% in these nuclei. This is consistent with expectations from phenomenological quark distributions and the momentum sum rule. In the deuteron, the combination of gluon distributions corresponding to the b₁ structure function is found to have a small first moment compared with the corresponding momentum fraction. The first moment of the gluon transversity structure function is also investigated in the spin-1 deuteron, where a non-zero signal is observed at m_π ∼ 806 MeV.
In conclusion, this is the first indication of gluon contributions to nuclear structure that cannot be associated with an individual nucleon.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFM.P53E..03H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFM.P53E..03H"><span>Enceladus' Internal Structure Inferred from Analysis of Cassini-derived Gravity and Topography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hemingway, D.; Nimmo, F.; Iess, L.</p> <p>2013-12-01</p> <p>The interior of the small Saturnian satellite, Enceladus, is of great interest as it bears on the body's unusually extensive and on-going geological activity [1,2]. The moon's shape, estimated from limb profiles [3,4], differs significantly from the expected hydrostatic shape and is perhaps related to lateral variations in ice shell thickness [5]. Recent Cassini radio tracking analysis [Iess et al., in preparation] has yielded preliminary estimates of the degree-2 gravity field and J₃. Like the topography, the gravity field is not precisely hydrostatic, but both can be separated into their hydrostatic and non-hydrostatic components by assuming a particular moment of inertia. Here, we employ an admittance analysis [6,7] (ratio of gravity to topography) in an attempt to constrain Enceladus' moment of inertia. We estimate the non-hydrostatic admittance separately for both J₂ and C₂₂, over a range of possible moments of inertia. Assuming the true admittance is isotropic, the two estimates should converge for the correct moment of inertia.
We find the best agreement between the two estimates with normalized moments of inertia (C/MR²) in the range 0.332-0.336, with a 2-sigma lower bound of 0.309 and a 2-sigma upper bound of 0.341, suggesting a differentiated Enceladus with a core density between ~2300 and ~3500 kg/m³ [1]. The admittance estimated from J₃ is broadly consistent with this result in that the computed degree-2 and degree-3 admittances are related by approximately the expected ratio of 5/7. These admittance estimates are ~1/3 of what is expected for uncompensated topography, suggesting that the topography is significantly compensated. Assuming a fully isostatic model in which compensation occurs where the ice shell encounters a subsurface liquid ocean [8], and neglecting the role of the silicate interior [9], best estimates for the ice shell thickness range from 25-75 km. If surface loading dominates, our results are incompatible with an average elastic thickness in excess of ~100 m. [1] Schubert, G., Anderson, J. D., Travis, B. J. & Palguta, J., Icarus 188, 345-355 (2007). [2] Spencer, J. R. & Nimmo, F., Annu. Rev. Earth Planet. Sci. 41, 693-717 (2013). [3] Porco, C. C. et al., Science 311, 1393-1401 (2006). [4] Nimmo, F., Bills, B. G. & Thomas, P. C., J. Geophys. Res. 116, E11001 (2011). [5] Schenk, P. M. & McKinnon, W. B., Geophys. Res. Lett. 36, L16202 (2009). [6] McKenzie, D., Icarus 112, 55-88 (1994). [7] Hemingway, D., Nimmo, F., Zebker, H. & Iess, L., Nature (in press). [8] Collins, G. C. & Goodman, J. C., Icarus 189, 72-82 (2007). [9] McKinnon, W. B., AGU Fall Mtg.
2012, P32A-04 (2012).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21670488','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21670488"><span>Fast Inference with Min-Sum Matrix Product.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Felzenszwalb, Pedro F; McAuley, Julian J</p> <p>2011-12-01</p> <p>The MAP inference problem in many graphical models can be solved efficiently using a fast algorithm for computing min-sum products of n × n matrices. The class of models in question includes cyclic and skip-chain models that arise in many applications. Although the worst-case complexity of the min-sum product operation is not known to be much better than O(n^3), an O(n^2.5) expected time algorithm was recently given, subject to some constraints on the input matrices. In this paper, we give an algorithm that runs in O(n^2 log n) expected time, assuming that the entries in the input matrices are independent samples from a uniform distribution. We also show that two variants of our algorithm are quite fast for inputs that arise in several applications.
This leads to significant performance gains over previous methods in applications within computer vision and natural language processing.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26135719','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26135719"><span>Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Friedrich, Tobias; Neumann, Frank</p> <p>2015-01-01</p> <p>Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function f under a given set of constraints. In this paper, we investigate the runtime of a simple single objective evolutionary algorithm called (1 + 1) EA and a multiobjective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints, we show that the GSEMO achieves a (1 - 1/e)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of k ≥ 2 matroids, we show that the (1 + 1) EA achieves a (1/(k + δ))-approximation in expected polynomial time for any constant δ > 0.
Turning to nonmonotone symmetric submodular functions with k ≥ 1 matroid intersection constraints, we show that the GSEMO achieves a 1/((k + 2)(1 + ε))-approximation in expected time O(n^(k + 6) log(n)/ε).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70034662','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70034662"><span>A distribution-based parametrization for improved tomographic imaging of solute plumes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Pidlisecky, Adam; Singha, K.; Day-Lewis, F. D.</p> <p>2011-01-01</p> <p>Difference geophysical tomography (e.g. radar, resistivity and seismic) is used increasingly for imaging fluid flow and mass transport associated with natural and engineered hydrologic phenomena, including tracer experiments, in situ remediation and aquifer storage and recovery. Tomographic data are collected over time, inverted and differenced against a background image to produce 'snapshots' revealing changes to the system; these snapshots readily provide qualitative information on the location and morphology of plumes of injected tracer, remedial amendment or stored water. In principle, geometric moments (i.e. total mass, centres of mass, spread, etc.) calculated from difference tomograms can provide further quantitative insight into the rates of advection, dispersion and mass transfer; however, recent work has shown that moments calculated from tomograms are commonly biased, as they are strongly affected by the subjective choice of regularization criteria. Conventional approaches to regularization (Tikhonov) and parametrization (image pixels) result in tomograms which are subject to artefacts such as smearing or pixel estimates taking on the sign opposite to that expected for the plume under study.
Here, we demonstrate a novel parametrization for imaging plumes associated with hydrologic phenomena. Capitalizing on the mathematical analogy between moment-based descriptors of plumes and the moment-based parameters of probability distributions, we design an inverse problem that (1) is overdetermined and computationally efficient because the image is described by only a few parameters, (2) produces tomograms consistent with expected plume behaviour (e.g. changes of one sign relative to the background image), (3) yields parameter estimates that are readily interpreted for plume morphology and offer direct insight into hydrologic processes and (4) requires comparatively few data to achieve reasonable model estimates. We demonstrate the approach in a series of numerical examples based on straight-ray difference-attenuation radar monitoring of the transport of an ionic tracer, and show that the methodology outlined here is particularly effective when limited data are available. © 2011 The Authors, Geophysical Journal International © 2011 RAS.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4359122','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4359122"><span>Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Morris, Mark; Sellers, William I.</p> <p>2015-01-01</p> <p>Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable.
Current approaches include regression models, which have limited accuracy; geometric models, which involve lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25780778','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25780778"><span>Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Peyer, Kathrin E; Morris, Mark; Sellers, William I</p> <p>2015-01-01</p> <p>Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable.
Current approaches include regression models, which have limited accuracy; geometric models, which involve lengthy measuring procedures; and acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.855a2022K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.855a2022K"><span>A periodic review integrated inventory model with controllable safety stock and setup cost under service level constraint and distribution-free demand</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kurdhi, N. A.; Jamaluddin, A.; Jauhari, W. A.; Saputro, D. R. S.</p> <p>2017-06-01</p> <p>In this study, we consider a stochastic integrated manufacturer-retailer inventory model with service level constraint. 
The model analyzed in this article considers the situation in which the vendor and the buyer establish a long-term contract and strategic partnership to jointly determine the best strategy. The lead time and setup cost are assumed to be controllable through an additional crashing cost and an investment, respectively. It is assumed that shortages are allowed and partially backlogged on the buyer’s side, and that the protection interval (i.e., review period plus lead time) demand distribution is unknown but has given finite first and second moments. The objective is to apply the minmax distribution free approach to simultaneously optimize the review period, the lead time, the setup cost, the safety factor, and the number of deliveries in order to minimize the joint total expected annual cost. The service level constraint guarantees that the service level requirement can be satisfied in the worst case. By constructing a Lagrange function, the analysis of the solution procedure is conducted, and a solution algorithm is then developed. Moreover, a numerical example and sensitivity analysis are given to illustrate the proposed model and to provide some observations and managerial implications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70027011','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70027011"><span>Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Griffis, V.W.; Stedinger, Jery R.; Cohn, T.A.</p> <p>2004-01-01</p> <p>The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] does as well as maximum likelihood estimation at estimating log‐Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. 
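The minmax distribution-free approach in the inventory abstract above is commonly built on a worst-case bound on expected shortage that uses only the first two moments of demand. A hedged sketch of that classical Scarf-type bound (the function name is illustrative, not from the paper):

```python
import math

def worst_case_expected_shortage(mu, sigma, r):
    """Upper bound on E[(X - r)^+] over all demand distributions with
    mean mu and standard deviation sigma, the bound typically used in
    minmax distribution-free inventory models. At r = mu it equals
    sigma / 2, and it decreases as the reorder point r grows."""
    return 0.5 * (math.sqrt(sigma ** 2 + (r - mu) ** 2) - (r - mu))
```

Minimizing cost against this bound protects against the worst distribution consistent with the known first and second moments, which is exactly the "worst case" the service level constraint refers to.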
Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004WRR....40.7503G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004WRR....40.7503G"><span>Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Griffis, V. W.; Stedinger, J. R.; Cohn, T. A.</p> <p>2004-07-01</p> <p>The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] does as well as maximum likelihood estimation at estimating log-Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. 
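For the log-Pearson type 3 estimators discussed above, a flood quantile is typically computed from the mean, standard deviation and skew of the log flows via a frequency factor. A minimal sketch using the Wilson-Hilferty approximation (illustrative only: Bulletin 17B tabulates the exact factors, and EMA refines the moments themselves when historical or censored data are present):

```python
import math
from statistics import NormalDist

def lp3_quantile(log_mean, log_std, skew, p):
    """Non-exceedance quantile of a log-Pearson type 3 distribution via
    the Wilson-Hilferty frequency-factor approximation.
    Inputs are the mean, standard deviation and skew of log10(Q)."""
    z = NormalDist().inv_cdf(p)          # standard normal quantile
    if abs(skew) < 1e-8:                 # zero skew: lognormal limit, K = z
        k = z
    else:
        g = skew
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g * g / 36.0) ** 3 - 1.0)
    return 10.0 ** (log_mean + k * log_std)
```

The regional-skew extensions discussed in the abstract amount to replacing the at-site `skew` argument with a weighted combination of at-site and regional skew before the frequency factor is evaluated.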
Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3888725','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3888725"><span>Electricity Usage Scheduling in Smart Building Environments Using Smart Devices</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Lee, Eunji; Bahn, Hyokyung</p> <p>2013-01-01</p> <p>With the recent advances in smart grid technologies as well as the increasing dissemination of smart meters, the electricity usage of every moment can be detected in modern smart building environments. 
Thus, the utility company can set a different electricity price for each time slot, taking peak times into account. This paper presents a new electricity usage scheduling algorithm for smart buildings that adopts real-time pricing of electricity. The proposed algorithm detects the change of electricity prices by making use of a smart device and changes the power mode of each electric device dynamically. Specifically, we formulate the electricity usage scheduling problem as a real-time task scheduling problem and show that it is a complex search problem that has an exponential time complexity. An efficient heuristic based on genetic algorithms is performed on a smart device to cut down the huge searching space and find a reasonable schedule within a feasible time budget. Experimental results with various building conditions show that the proposed algorithm reduces the electricity charge of a smart building by 25.6% on average and up to 33.4%. PMID:24453860</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24453860','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24453860"><span>Electricity usage scheduling in smart building environments using smart devices.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lee, Eunji; Bahn, Hyokyung</p> <p>2013-01-01</p> <p>With the recent advances in smart grid technologies as well as the increasing dissemination of smart meters, the electricity usage of every moment can be detected in modern smart building environments. Thus, the utility company can set a different electricity price for each time slot, taking peak times into account. This paper presents a new electricity usage scheduling algorithm for smart buildings that adopts real-time pricing of electricity. 
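The genetic-algorithm heuristic described in the scheduling abstract above can be illustrated with a toy version: chromosomes assign each appliance to a time slot, fitness is total cost under time-varying prices, and elitist selection with one-point crossover and mutation searches the otherwise exponential space. All names and parameters here are illustrative assumptions, not the authors' implementation:

```python
import random

def schedule_cost(schedule, prices, loads):
    """Total charge when appliance i (drawing loads[i] kWh) runs in slot schedule[i]."""
    return sum(prices[slot] * loads[i] for i, slot in enumerate(schedule))

def ga_schedule(prices, loads, pop_size=30, generations=60, mutation_rate=0.2, seed=0):
    """Toy elitist genetic algorithm over appliance-to-slot assignments."""
    rng = random.Random(seed)
    n_slots, n_apps = len(prices), len(loads)
    pop = [[rng.randrange(n_slots) for _ in range(n_apps)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: schedule_cost(s, prices, loads))
        survivors = pop[: pop_size // 2]          # elitism: keep the cheapest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)       # pick two parents
            cut = rng.randrange(1, n_apps) if n_apps > 1 else 0
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < mutation_rate:      # random reassignment mutation
                child[rng.randrange(n_apps)] = rng.randrange(n_slots)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: schedule_cost(s, prices, loads))
```

A real scheduler would also encode deadlines and power-mode constraints in the fitness function; the point here is only the shape of the search.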
The proposed algorithm detects the change of electricity prices by making use of a smart device and changes the power mode of each electric device dynamically. Specifically, we formulate the electricity usage scheduling problem as a real-time task scheduling problem and show that it is a complex search problem that has an exponential time complexity. An efficient heuristic based on genetic algorithms is performed on a smart device to cut down the huge searching space and find a reasonable schedule within a feasible time budget. Experimental results with various building conditions show that the proposed algorithm reduces the electricity charge of a smart building by 25.6% on average and up to 33.4%.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA601727','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA601727"><span>Convergence of the Quasi-static Antenna Design Algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2013-04-01</p> <p>conductor is the same as an equipotential surface . A line of constant charge on the z-axis, with an image, will generate the ACD antenna design...satisfies this boundary condition. The multipole moments have negative potentials, which can cause the equipotential surface to terminate on the disk or...feed wire. 
This requires an additional step in the solution process; the equipotential surface is sampled to verify that the charge is enclosed by the</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24754471','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24754471"><span>Statistical iterative reconstruction for streak artefact reduction when using multidetector CT to image the dento-alveolar structures.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dong, J; Hayakawa, Y; Kober, C</p> <p>2014-01-01</p> <p>When metallic prosthetic appliances and dental fillings exist in the oral cavity, the appearance of metal-induced streak artefacts is unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, images with weak artefacts were first reconstructed using projection data of an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were continuously processed in sequence by successive iterative restoration where the projection data was generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region of interest setting was designated. Finally, the general purpose graphic processing unit machine was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset-expectation maximization and small region of interest reduced the processing duration without apparent detriments. 
A general-purpose graphic processing unit realized high performance. A statistical reconstruction method was applied for the streak artefact reduction. The alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, small region of interest and general-purpose graphic processing unit, achieved fast artefact correction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19900048752&hterms=Propulsion+chemistry&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DPropulsion%2Bchemistry','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19900048752&hterms=Propulsion+chemistry&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DPropulsion%2Bchemistry"><span>Two-dimensional atmospheric transport and chemistry model - Numerical experiments with a new advection algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.</p> <p>1990-01-01</p> <p>Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. 
By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26872088','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26872088"><span>Global optimization of small bimetallic Pd-Co binary nanoalloy clusters: a genetic algorithm approach at the DFT level.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Aslan, Mikail; Davis, Jack B A; Johnston, Roy L</p> <p>2016-03-07</p> <p>The global optimisation of small bimetallic PdCo binary nanoalloys is systematically investigated using the Birmingham Cluster Genetic Algorithm (BCGA). The effects of size and composition on the structures, stability, magnetic and electronic properties including the binding energies, second finite difference energies and mixing energies of Pd-Co binary nanoalloys are discussed. A detailed analysis of Pd-Co structural motifs and segregation effects is also presented. The maximal mixing energy corresponds to Pd atom compositions for which the number of mixed Pd-Co bonds is maximised. Global minimum clusters are distinguished from transition states by vibrational frequency analysis. 
HOMO-LUMO gap, electric dipole moment and vibrational frequency analyses are made to enable correlation with future experiments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005OExpr..13..336J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005OExpr..13..336J"><span>Correction factor for ablation algorithms used in corneal refractive surgery with gaussian-profile beams</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco</p> <p>2005-01-01</p> <p>We provide a correction factor to be added to ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when considering this factor. Data on 141 eyes operated on with LASIK (laser in situ keratomileusis) using a Gaussian profile show that the discrepancy between experimental and expected data on corneal power is significantly lower when using the correction factor. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19960016734','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19960016734"><span>Development of Fast Algorithms Using Recursion, Nesting and Iterations for Computational Electromagnetics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chew, W. 
C.; Song, J. M.; Lu, C. C.; Weedon, W. H.</p> <p>1995-01-01</p> <p>In the first phase of our work, we have concentrated on laying the foundation to develop fast algorithms, including the use of recursive structure like the recursive aggregate interaction matrix algorithm (RAIMA), the nested equivalence principle algorithm (NEPAL), the ray-propagation fast multipole algorithm (RPFMA), and the multi-level fast multipole algorithm (MLFMA). We have also investigated the use of curvilinear patches to build a basic method of moments code where these acceleration techniques can be used later. In the second phase, which is mainly reported on here, we have concentrated on implementing three-dimensional NEPAL on a massively parallel machine, the Connection Machine CM-5, and have been able to obtain some 3D scattering results. In order to understand the parallelization of codes on the Connection Machine, we have also studied the parallelization of 3D finite-difference time-domain (FDTD) code with PML material absorbing boundary condition (ABC). We found that simple algorithms like the FDTD with material ABC can be parallelized very well allowing us to solve within a minute a problem of over a million nodes. In addition, we have studied the use of the fast multipole method and the ray-propagation fast multipole algorithm to expedite matrix-vector multiplication in a conjugate-gradient solution to integral equations of scattering. 
We find that these methods are faster than LU decomposition for one incident angle, but are slower than LU decomposition when many incident angles are needed as in the monostatic RCS calculations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhDT........87H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhDT........87H"><span>Simulation Research Framework with Embedded Intelligent Algorithms for Analysis of Multi-Target, Multi-Sensor, High-Cluttered Environments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hanlon, Nicholas P.</p> <p></p> <p>The National Air Space (NAS) can be easily described as a complex aviation system-of-systems that seamlessly works in harmony to provide safe transit for all aircraft within its domain. The number of aircraft within the NAS is growing and according to the FAA, "[o]n any given day, more than 85,000 flights are in the skies in the United States...This translates into roughly 5,000 planes in the skies above the United States at any given moment. More than 15,000 federal air traffic controllers in airport traffic control towers, terminal radar approach control facilities and air route traffic control centers guide pilots through the system". The FAA is currently rolling out the Next Generation Air Transportation System (NextGen) to handle projected growth while leveraging satellite-based navigation for improved tracking. A key component to instantiating NextGen lies in the equipage of Automatic Dependent Surveillance-Broadcast (ADS-B), a performance based surveillance technology that uses GPS navigation for more precise positioning than radars, providing increased situational awareness to air traffic controllers. 
Furthermore, the FAA is integrating UAS into the NAS, further congesting the airways and increasing the information load on air traffic controllers. The expected increase in aircraft density due to NextGen implementation and UAS integration will require innovative algorithms to cope with the increased data flow and to support air traffic controllers in their decision-making. This research presents a few innovative algorithms to support increased aircraft density and UAS integration into the NAS. First, it is imperative that individual tracks are correlated prior to fusing to ensure an accurate picture of the environment. However, current approaches do not scale well as the number of targets and sensors increases. This work presents a fuzzy clustering design to hierarchically break the problem down into smaller subspaces prior to correlation. This approach provides nearly identical performance metrics while executing orders of magnitude faster. Second, a fuzzy inference system is presented that relieves air traffic controllers of information overload by utilizing flight plan data and radar/GPS correlation values to highlight aircraft that deviate from their intended routes. Third, a genetic algorithm optimizes sensor placement that is robust and capable of handling unexpected routes in the environment. Fourth, a fuzzy CUSUM algorithm more accurately detects and corrects aircraft mode changes. 
Finally, all the work is packaged in a holistic simulation research framework that provides evaluation and analysis of various multi-sensor, multi-target scenarios.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1399726','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1399726"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Marathe, Aniruddha P.; Harris, Rachel A.; Lowenthal, David K.</p> <p></p> <p>The use of clouds to execute high-performance computing (HPC) applications has greatly increased recently. 
Clouds provide several potential advantages over traditional supercomputers and in-house clusters. The most popular cloud is currently Amazon EC2, which provides fixed-cost and variable-cost, auction-based options. The auction market trades lower cost for potential interruptions that necessitate checkpointing; if the market price exceeds the bid price, a node is taken away from the user without warning. We explore techniques to maximize performance per dollar given a time constraint within which an application must complete. Specifically, we design and implement multiple techniques to reduce expected cost by exploiting redundancy in the EC2 auction market. We then design an adaptive algorithm that selects a scheduling algorithm and determines the bid price. We show that our adaptive algorithm executes programs up to seven times cheaper than using the on-demand market and up to 44 percent cheaper than the best non-redundant, auction-market algorithm. We extend our adaptive algorithm to incorporate application scalability characteristics for further cost savings. 
In conclusion, we show that the adaptive algorithm informed with scalability characteristics of applications achieves up to 56 percent cost savings compared to the expected cost for the base adaptive algorithm run at a fixed, user-defined scale.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4655131','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4655131"><span>CEINMS: a toolbox to investigate the influence of different neural control solutions on the prediction of muscle excitation and joint moments during dynamic motor tasks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Pizzolato, Claudio; Lloyd, David G.; Sartori, Massimo; Ceseracciu, Elena; Besier, Thor F.; Fregly, Benjamin J.; Reggiani, Monica</p> <p>2015-01-01</p> <p>Personalized neuromusculoskeletal (NMS) models can represent the neurological, physiological, and anatomical characteristics of an individual and can be used to estimate the forces generated inside the human body. Currently, publicly available software to calculate muscle forces is restricted to static and dynamic optimisation methods, or limited to isometric tasks only. We have created and made freely available for the research community the Calibrated EMG-Informed NMS Modelling Toolbox (CEINMS), an OpenSim plug-in that enables investigators to predict different neural control solutions for the same musculoskeletal geometry and measured movements. CEINMS comprises EMG-driven and EMG-informed algorithms that have been previously published and tested. It operates on dynamic skeletal models possessing any number of degrees of freedom and musculotendon units and can be calibrated to the individual to predict measured joint moments and EMG patterns. 
In this paper we describe the components of CEINMS and its integration with OpenSim. We then analyse how EMG-driven, EMG-assisted, and static optimisation neural control solutions affect the estimated joint moments, muscle forces, and muscle excitations, including muscle co-contraction. PMID:26522621</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15794134','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15794134"><span>Robust feature detection and local classification for surfaces based on moment analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Clarenz, Ulrich; Rumpf, Martin; Telea, Alexandru</p> <p>2004-01-01</p> <p>The stable local classification of discrete surfaces with respect to features such as edges and corners or concave and convex regions, respectively, is quite difficult yet indispensable for many surface processing applications. Usually, the feature detection is done via a local curvature analysis. When dealing with large triangular and irregular grids, e.g., those generated via a marching cubes algorithm, the detectors are tedious to treat and a robust classification is hard to achieve. Here, a local classification method on surfaces is presented which avoids the evaluation of discretized curvature quantities. Moreover, it provides an indicator for smoothness of a given discrete surface and comes together with a built-in multiscale. The proposed classification tool is based on local zero and first moments on the discrete surface. The corresponding integral quantities are stable to compute and they give less noisy results compared to discrete curvature quantities. The stencil width for the integration of the moments turns out to be the scale parameter. 
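The zero- and first-moment idea in the surface-classification abstract above can be caricatured in a few lines: on a flat patch the local centroid coincides with the query point, while near an edge or corner it is displaced, so a normalized centroid offset can serve as a feature indicator. A toy sketch under those assumptions (not the authors' actual operator):

```python
def local_moments(points):
    """Zeroth moment (sample count) and first moment (centroid) of a
    local surface patch given as (x, y, z) samples."""
    n = len(points)
    centroid = tuple(sum(p[k] for p in points) / n for k in range(3))
    return n, centroid

def feature_indicator(center, neighbors):
    """Offset of the patch centroid from the query point, normalized by
    the stencil radius: ~0 on flat symmetric patches, large near edges
    and corners where the neighborhood is one-sided."""
    _, c = local_moments(neighbors)
    offset = sum((a - b) ** 2 for a, b in zip(c, center)) ** 0.5
    radius = max(sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5
                 for p in neighbors)
    return offset / radius
```

Because these are integral quantities, they average out mesh noise that would corrupt a pointwise discrete-curvature estimate, and the stencil radius plays the role of the scale parameter.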
Prospective surface processing applications are segmentation on surfaces, surface comparison and matching, and surface modeling. Here, a method for feature-preserving fairing of surfaces is discussed to underline the applicability of the presented approach.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28203764','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28203764"><span>Two-Dimensional Model for Reactive-Sorption Columns of Cylindrical Geometry: Analytical Solutions and Moment Analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Khan, Farman U; Qamar, Shamsul</p> <p>2017-05-01</p> <p>A set of analytical solutions is presented for a model describing the transport of a solute in a fixed-bed reactor of cylindrical geometry subjected to the first (Dirichlet) and third (Danckwerts) type inlet boundary conditions. Linear sorption kinetic process and first-order decay are considered. Cylindrical geometry allows the use of large columns to investigate dispersion, adsorption/desorption and reaction kinetic mechanisms. The finite Hankel and Laplace transform techniques are adopted to solve the model equations. For further analysis, statistical temporal moments are derived from the Laplace-transformed solutions. The developed analytical solutions are compared with the numerical solutions of high-resolution finite volume scheme. Different case studies are presented and discussed for a series of numerical values corresponding to a wide range of mass transfer and reaction kinetics. A good agreement was observed in the analytical and numerical concentration profiles and moments. 
The developed solutions are efficient tools for analyzing numerical algorithms, performing sensitivity analysis, and simultaneously determining the longitudinal and transverse dispersion coefficients from a laboratory-scale radial column experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20090032022&hterms=tb&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dtb','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20090032022&hterms=tb&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dtb"><span>Saturn's Magnetosphere and Properties of Upstream Flow at Titan: Preliminary Results</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Sittler, E. C., Jr.; Hartle, R. E.; Cooper, J. F.; Lipatov, A.; Bertucci, C.; Coates, A. J.; Arridge, C.; Szego, K.; Shappirio, M.; Simpson, D. G.</p> <p>2009-01-01</p> <p>Using Cassini Plasma Spectrometer (CAPS) Ion Mass Spectrometer (IMS) measurements, we present the ion fluid properties and ion composition of the upstream flow for Titan's interaction with Saturn's magnetosphere. 
A 3D ion moments algorithm is used that is essentially model independent; its only requirements are that the ion flow be within the CAPS IMS 2(pi) steradian field-of-view (FOV) and that the ion velocity distribution function (VDF) be gyrotropic. These results cover the period from the TA flyby (2004 day 300) to the T22 flyby (2006 day 363). Cassini's in situ measurements of Saturn's magnetic field show it is stretched out into a magnetodisc configuration for Saturn Local Times (SLT) centered about midnight local time. Under those circumstances the field is confined near the equatorial plane, with Titan either above or below the magnetospheric current sheet. Similar to Jupiter's outer magnetosphere, where a magnetodisc configuration applies, one expects the heavy ions within Saturn's outer magnetosphere to be confined within a few degrees of the current sheet, while at higher magnetic latitudes protons should dominate. We show that when Cassini is between dusk, midnight and dawn local time and the spacecraft is not within the current sheet, light ions (H+, H2+) tend to dominate the ion composition of the upstream flow. If true, one may expect the interaction between Saturn's magnetosphere, locally devoid of heavy ions, and Titan's upper atmosphere and exosphere to be significantly different from that for Voyager 1, TA and TB, when heavy ions were present in the upstream flow. We also present observational evidence for the interaction of Saturn's magnetosphere with Titan's extended H and H2 corona, which can extend approx.
1 Rs from Titan.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=programs+AND+developed&pg=2&id=EJ1085648','ERIC'); return false;" href="https://eric.ed.gov/?q=programs+AND+developed&pg=2&id=EJ1085648"><span>Meeting the Challenge of Systemic Change in Geography Education: Lucy Sprague Mitchell's Young Geographers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Downs, Roger M.</p> <p>2016-01-01</p> <p>The history of K-12 geography education has been characterized by recurrent high hopes and dashed expectations. There have, however, been moments when the trajectory of geography education might have changed to offer students the opportunity to develop a thorough working knowledge of geography. Lucy Sprague Mitchell's geography program developed…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=bomb&pg=5&id=EJ838819','ERIC'); return false;" href="https://eric.ed.gov/?q=bomb&pg=5&id=EJ838819"><span>Debt Bomb Is Ticking Loudly on Campuses</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Blumenstyk, Goldie</p> <p>2009-01-01</p> <p>The end of the fiscal year usually isn't a momentous occasion for colleges. But this June 30 could be a day of reckoning many never expected. Colleges borrowed billions of dollars over the past decade to improve facilities and fulfill their ambitions. Now the consequences may be about to blow up in their finances. 
The author reports on how…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=buddhism&pg=7&id=EJ623799','ERIC'); return false;" href="https://eric.ed.gov/?q=buddhism&pg=7&id=EJ623799"><span>The Blessings of Authenticity: An Interview with Myla and Jon Kabat-Zinn.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Miles, Charlie; Prystowski, Richard; Kabat-Zinn, Myla; Kabat-Zinn, Jon</p> <p>2001-01-01</p> <p>The authors of a book on parenting maintain their book is about the state of being while raising children, which they feel is a spiritual experience. Allowing children their sovereignty; learning from pain and chaos; not projecting expectations onto children; and the importance of paying attention to the present moment, a concept borrowed from…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20010072166','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20010072166"><span>Any Two Learning Algorithms Are (Almost) Exactly Identical</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Wolpert, David H.</p> <p>2000-01-01</p> <p>This paper shows that if one is provided with a loss function, it can be used in a natural way to specify a distance measure quantifying the similarity of any two supervised learning algorithms, even non-parametric algorithms. Intuitively, this measure gives the fraction of targets and training sets for which the expected performance of the two algorithms differs significantly. 
Bounds on the value of this distance are calculated for the case of binary outputs and 0-1 loss, indicating that any two learning algorithms are almost exactly identical for such scenarios. As an example, for any two algorithms A and B, even for small input spaces and training sets, for less than 2e(-50) of all targets will the difference between A's and B's generalization performance exceed 1%. In particular, this is true if B is bagging applied to A, or boosting applied to A. These bounds can be viewed alternatively as telling us, for example, that the simple English phrase 'I expect that algorithm A will generalize from the training set with an accuracy of at least 75% on the rest of the target' conveys 20,000 bytes of information concerning the target. The paper ends by discussing some of the subtleties of extending the distance measure to give a full (non-parametric) differential geometry of the manifold of learning algorithms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2000PhDT.......171B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2000PhDT.......171B"><span>Smart helicopter rotor with active blade tips</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bernhard, Andreas Paul Friedrich</p> <p>2000-10-01</p> <p>The smart active blade tip (SABT) rotor is an on-blade rotor vibration reduction system, incorporating active blade tips that can be independently pitched with respect to the main blade. The active blade tip rotor development included an experimental test program culminating in a Mach scale hover test, and a parallel development of a coupled, elastic actuator and rotor blade analysis for preliminary design studies and hover performance prediction. The experimental testing focussed on a small scale rotor on a bearingless Bell-412 hub.
The fabricated Mach-scale active-tip rotor has a diameter of 1.524 m and a blade chord of 76.2 mm, and incorporates a 10% span active tip. The nominal operating speed is 2000 rpm, giving a tip Mach number of 0.47. The blade tips are driven by a novel piezo-induced bending-torsion coupled actuator beam, located spanwise in the hollow mid-cell of the main rotor blade. In hover at 2000 rpm, at 2 deg collective, and for an actuation of 125 Vrms, the measured blade tip deflection at the first four rotor harmonics is between +/-1.7 and +/-2.8 deg, increasing to +/-5.3 deg at 5/rev with resonant amplification. The corresponding oscillatory amplitude of the rotor thrust coefficient is between 0.7 · 10(-3) and 1.3 · 10(-3) at the first four rotor harmonics, increasing to 2.1 · 10(-3) at 5/rev. In general, the experimental blade tip frequency response and corresponding rotor thrust response are well captured by the analysis. The flexbeam root flap bending moment is predicted in trend, but is significantly over-estimated. The blade tips did not deflect as expected at high collective settings, because of the blade tip shaft locking up in the bearing. This is caused by the high flap bending moment on the blade tip shaft. Redesign of the blade tip shaft assembly and bearing support is identified as the primary design improvement for future research. The active blade tip rotor was also used as a testbed for the evaluation of an adaptive neural-network based control algorithm. Effective background vibration reduction of an intentional 1/rev hover imbalance was demonstrated. The control algorithm also showed the capability to generate desired multi-frequency control loads on the hub, based on artificial signal injection into the vibration measurement.
The research program demonstrates the technical feasibility of the active blade tip concept for vibration reduction and warrants further investigation in terms of closed-loop forward-flight tests in the wind tunnel and full-scale design studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/14465','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/14465"><span>Prediction Of The Expected Safety Performance Of Rural Two-Lane Highways</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>2000-12-01</p> <p>This report presents an algorithm for predicting the safety performance of a rural two-lane highway. The accident prediction algorithm consists of base models and accident modification factors for both roadway segments and at-grade intersections on r...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20578769','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20578769"><span>Ferroelectric hydration shells around proteins: electrostatics of the protein-water interface.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>LeBard, David N; Matyushov, Dmitry V</p> <p>2010-07-22</p> <p>Numerical simulations of hydrated proteins show that protein hydration shells are polarized into a ferroelectric layer with large values of the average dipole moment magnitude and the dipole moment variance. The emergence of the new polarized mesophase dramatically alters the statistics of electrostatic fluctuations at the protein-water interface.
The linear response relation between the average electrostatic potential and its variance breaks down, with the breadth of the electrostatic fluctuations far exceeding the expectations of the linear response theories. The dynamics of these non-Gaussian electrostatic fluctuations are dominated by a slow (approximately 1 ns) component that freezes in at the temperature of the dynamical transition of proteins. The ferroelectric shell propagates 3-5 water diameters into the bulk.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19730049951&hterms=Root-mean-square+fluctuation&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DRoot-mean-square%2Bfluctuation','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19730049951&hterms=Root-mean-square+fluctuation&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DRoot-mean-square%2Bfluctuation"><span>Description of small-scale fluctuations in the diffuse X-ray background.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Cavaliere, A.; Friedland, A.; Gursky, H.; Spada, G.</p> <p>1973-01-01</p> <p>An analytical study of the fluctuations on a small angular scale expected in the diffuse X-ray background in the presence of unresolved sources is presented. The source population is described by a function N(S), giving the number of sources per unit solid angle and unit apparent flux S. The distribution of observed flux, s, in each angular resolution element of a complete sky survey is represented by a function Q(s). The analytical relation between the successive, higher-order moments of N(S) and Q(s) is described.
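A special case of this moment relation can be checked by simulation: when unresolved sources are Poisson distributed on the sky, Campbell's theorem gives that the k-th cumulant of the observed flux per beam equals the k-th flux moment of the counts, e.g. Var(s) = mu·E[S^2] per resolution element. A seeded Monte Carlo sketch with a toy lognormal flux law (all numerical values are illustrative assumptions, not values from the paper):

```python
import math
import random

random.seed(42)

mu = 5.0           # mean number of unresolved sources per beam (assumed)
n_beams = 100_000  # resolution elements surveyed (assumed)

def source_flux():
    # Toy flux law: lognormal with sigma = 0.5, an assumption for illustration.
    return math.exp(random.gauss(0.0, 0.5))

def poisson(lam):
    # Knuth's multiplication method; adequate for small lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Observed flux s in each beam: sum over a Poisson number of source fluxes.
s_obs = [sum(source_flux() for _ in range(poisson(mu))) for _ in range(n_beams)]

mean = sum(s_obs) / n_beams
var = sum((x - mean) ** 2 for x in s_obs) / n_beams

# Campbell's theorem: k-th cumulant of Q(s) = mu * E[S^k]; check k = 1, 2.
print(mean, mu * math.exp(0.5 * 0.5 ** 2))  # both close to mu * E[S]
print(var, mu * math.exp(2 * 0.5 ** 2))     # both close to mu * E[S^2]
```

The same identity for third- and higher-order cumulants is what makes moments of Q(s) beyond the rms fluctuations informative about N(S).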
The goal of reconstructing the source population from the study of the moments of Q(s) of order higher than the second (i.e., the rms fluctuations) is discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhyB..536..314I','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhyB..536..314I"><span>Magnetic properties of rare-earth sulfide YbAgS2</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Iizuka, Ryosuke; Numakura, Ryosuke; Michimura, Shinji; Katano, Susumu; Kosaka, Masashi</p> <p>2018-05-01</p> <p>We have succeeded in synthesizing single-phase polycrystalline samples of YbAgS2 belonging to the tetragonal system with space group I41md. YbAgS2 shows an antiferromagnetic transition at TN = 6.6 K. The effective magnetic moment is in good agreement with the theoretical value for the Yb3+ free ion. A broad anomaly is observed just above TN in the temperature dependence of the magnetic susceptibility. The entropy released at TN is only about half of the Rln2 expected for a Kramers doublet ground state.
We consider that these phenomena are due to the existence of short-range magnetic correlations rather than the partial screening of the Yb moments by conduction electrons via the Kondo effect.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/14683227','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/14683227"><span>Can cosmic shear shed light on low cosmic microwave background multipoles?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kesden, Michael; Kamionkowski, Marc; Cooray, Asantha</p> <p>2003-11-28</p> <p>The lowest multipole moments of the cosmic microwave background (CMB) are smaller than expected for a scale-invariant power spectrum. One possible explanation is a cutoff in the primordial power spectrum below a comoving scale of k(c) approximately equal to 5.0 x 10(-4) Mpc(-1). Such a cutoff would increase significantly the cross correlation between the large-angle CMB and cosmic-shear patterns. The cross correlation may be detectable at >2sigma which, combined with the low CMB moments, may tilt the balance between a 2sigma result and a firm detection of a large-scale power-spectrum cutoff. 
The cutoff also increases the large-angle cross correlation between the CMB and the low-redshift tracers of the mass distribution.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/951110','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/951110"><span>Element-specific study of the temperature dependent magnetization of Co-Mn-Sb thin films</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Schmalhorst, J.; Ebke, D.; Meinert, M.</p> <p></p> <p>Magnetron sputtered thin Co-Mn-Sb films were investigated with respect to their element-specific magnetic properties. Stoichiometric Co{sub 1}Mn{sub 1}Sb{sub 1} crystallized in the C1{sub b} structure has been predicted to be half-metallic and is therefore of interest for spintronics applications. It should show a characteristic antiferromagnetic coupling of the Mn and Co magnetic moments and a transition temperature T{sub C} of about 480K. Although the observed transition temperature of our 20nm thick Co{sub 32.4}Mn{sub 33.7}Sb{sub 33.8}, Co{sub 37.7}Mn{sub 34.1}Sb{sub 28.2} and Co{sub 43.2}Mn{sub 32.6}Sb{sub 24.2} films is in quite good agreement with the expected value, we found a ferromagnetic coupling of the Mn and Co magnetic moments, which indicates that the films do not crystallize in the C1{sub b} structure and are probably not fully spin-polarized.
The ratio of the Co and Mn moments does not change up to the transition temperature and the temperature dependence of the magnetic moments can be well described by the mean field theory.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70024119','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70024119"><span>Waveform inversion of oscillatory signatures in long-period events beneath volcanoes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Kumagai, H.; Chouet, B.A.; Nakano, M.</p> <p>2002-01-01</p> <p>The source mechanism of long-period (LP) events is examined using synthetic waveforms generated by the acoustic resonance of a fluid-filled crack. We perform a series of numerical tests in which the oscillatory signatures of synthetic LP waveforms are used to determine the source time functions of the six moment tensor components from waveform inversions assuming a point source. The results indicate that the moment tensor representation is valid for the odd modes of crack resonance with wavelengths 2L/n, 2W/n, n = 3, 5, 7, ..., where L and W are the crack length and width, respectively. For the even modes with wavelengths 2L/n, 2W/n, n = 2, 4, 6,..., a generalized source representation using higher-order tensors is required, although the efficiency of seismic waves radiated by the even modes is expected to be small. We apply the moment tensor inversion to the oscillatory signatures of an LP event observed at Kusatsu-Shirane Volcano, central Japan. Our results point to the resonance of a subhorizontal crack located a few hundred meters beneath the summit crater lakes.
The present approach may be useful to quantify the source location, geometry, and force system of LP events, and opens the way for moment tensor inversions of tremor.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/20864057-qqqqq-components-hidden-flavor-contributions-baryon-magnetic-moments','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/20864057-qqqqq-components-hidden-flavor-contributions-baryon-magnetic-moments"><span>The qqqqq components and hidden flavor contributions to the baryon magnetic moments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>An, C. S.; Li, Q. B.; Riska, D. O.</p> <p>2006-11-15</p> <p>The contributions from the qqqqq components to the magnetic moments of the octet as well as the {delta}{sup ++} and {omega}{sup -} decuplet baryons are calculated for the configurations that are expected to have the lowest energy if the hyperfine interaction depends on both spin and flavor. The contributions from the uu,dd, and the ss components are given separately. It is shown that addition of qqqqq admixtures to the ground state baryons can improve the overall description of the magnetic moments of the baryon octet and decuplet in the quark model without SU(3) flavor symmetry breaking, beyond that of the different constituent masses of the strange and light-flavor quarks. The explicit flavor (and spin) wave functions for all the possible configurations of the qqqqq components with light and strange qq pairs are given for the baryon octet and decuplet.
Admixtures of {approx}10% of the qqqqq configuration where the flavor-spin symmetry is [4]{sub FS}[22]{sub F}[22]{sub S}, which is likely to have the lowest energy, in particular reduce the deviation from the empirical values of the magnetic moments of the {sigma}{sup -} and the {xi}{sup 0} compared with the static qqq quark model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012AGUFM.H41F1245Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012AGUFM.H41F1245Z"><span>An Exploration of the Importance of Flood Heterogeneity for Regionalization in Arizona using the Expected Moments Algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zamora-Reyes, D.; Hirschboeck, K. K.; Paretti, N. V.</p> <p>2012-12-01</p> <p>Bulletin 17B (B17B) has prevailed for 30 years as the standard manual for determining flood frequency in the United States. Recently proposed updates to B17B include revisiting the issue of flood heterogeneity, and improving flood estimates by using the Expected Moments Algorithm (EMA), which can better address low outliers and accommodate information on historical peaks. Incorporating information on mixed populations, such as flood-causing mechanisms, into flood estimates for regions that have noticeable flood heterogeneity can be statistically challenging when systematic flood records are short. The problem is magnified when the population sample size is reduced by decomposing the record, especially if multiple flood mechanisms are involved. In B17B, the guidelines for dealing with mixed populations focus primarily on how to rule out any need to perform a mixed-population analysis. However, in some regions mixed flood populations are critically important determinants of regional flood frequency variations and should be explored from this perspective.
Arizona is an area with a heterogeneous mixture of flood processes due to warm-season convective thunderstorms, cool-season synoptic-scale storms, and tropical cyclone-enhanced convective activity occurring in the late summer or early fall. USGS station data throughout Arizona were compiled into a database, and each flood peak (annual and partial duration series) was classified according to its meteorological cause. Using these data, we have explored the role of flood heterogeneity in Arizona flood estimates through composite flood frequency analysis based on mixed flood populations using EMA. First, for selected stations, the three flood-causing populations were separated out from the systematic annual flood series record and analyzed individually. Second, to create composite probability curves, the individual curves for each of the three populations were generated and combined using Crippen's (1978) composite probability equations for sites that have two or more independent flood populations. Finally, the individual probability curves generated for each of the three flood-causing populations were compared with both the site's composite probability curve and the standard B17B curve to explore the influence of heterogeneity, using the 100-year and 200-year flood estimates as a basis of comparison. Results showed that sites located in southern Arizona and along the abrupt elevation transition zone of the Mogollon Rim exhibit a better fit to the systematic data using their composite probability curves than the curves derived from standard B17B analysis. Synoptic storm floods and tropical cyclone-enhanced floods had the greatest influence on 100-year and 200-year flood estimates. This was especially true in southern Arizona, even though summer convective floods are much more frequent and therefore dominate the composite curve.
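The composite-curve step reduces to a one-line combination rule: for independent flood populations, the composite annual exceedance probability at a given discharge is one minus the product of the individual non-exceedance probabilities. A minimal sketch of that rule (the function name and the three example AEP values are illustrative, not values from the study):

```python
from math import prod

def composite_exceedance(pops_aep):
    """Combine annual exceedance probabilities (AEPs) of independent
    flood populations evaluated at the same discharge threshold:
    P = 1 - prod(1 - P_i)."""
    return 1.0 - prod(1.0 - p for p in pops_aep)

# Hypothetical example: at some discharge the three populations give
# AEPs of 0.008 (convective), 0.010 (synoptic), 0.002 (tropical).
p = composite_exceedance([0.008, 0.010, 0.002])
# The composite flood is necessarily more frequent at that discharge
# than any single contributing population alone.
```

The combination assumes the populations are independent at the chosen threshold, which is the premise of the composite-probability approach described above.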
Using the EMA approach also influenced our results because all possible low outliers were censored by the built-in Multiple Grubbs-Beck Test, providing a better fit to the systematic data in the upper probabilities. In conclusion, flood heterogeneity can play an important role in regional flood frequency variations in Arizona, and understanding its influence is important when making projections about future flood variations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.936a2034K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.936a2034K"><span>The Wang Landau parallel algorithm for the simple grids. Optimizing OpenMPI parallel implementation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kussainov, A. S.</p> <p>2017-12-01</p> <p>The Wang Landau Monte Carlo algorithm to calculate the density of states for different simple spin lattices was implemented. The energy space was split between the individual threads and balanced according to the expected runtime for the individual processes. A custom spin clustering mechanism, necessary for overcoming the critical slowdown in certain energy subspaces, was devised. Stable reconstruction of the density of states was of primary importance.
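The Wang Landau iteration itself is compact enough to sketch serially on a tiny system, here a 4-spin Ising ring whose exact degeneracies are g(-4) = 2, g(0) = 12, g(+4) = 2. This illustrates only the flat-histogram update, not the energy-space splitting across MPI processes described in the record, and it uses a fixed step count per stage in place of a flatness check:

```python
import math
import random

random.seed(1)

N = 4  # spins on a ring; energies lie in {-4, 0, +4}

def energy(spins):
    return -sum(spins[i] * spins[(i + 1) % N] for i in range(N))

levels = [-4, 0, 4]
ln_g = {lv: 0.0 for lv in levels}  # running estimate of ln g(E)
spins = [1] * N
E = energy(spins)
ln_f = 1.0  # modification factor, reduced until negligibly small

while ln_f > 1e-6:
    hist = {lv: 0 for lv in levels}  # visit counts; a flatness check
    for _ in range(20000):           # would normally inspect this
        i = random.randrange(N)
        spins[i] *= -1
        E_new = energy(spins)
        # Accept with min(1, g(E)/g(E_new)) to flatten the energy histogram.
        if random.random() < math.exp(ln_g[E] - ln_g[E_new]):
            E = E_new
        else:
            spins[i] *= -1           # reject: undo the flip
        ln_g[E] += ln_f              # penalize the visited level
        hist[E] += 1
    ln_f /= 2.0                      # standard Wang-Landau schedule

ratio = math.exp(ln_g[0] - ln_g[-4])  # estimates g(0)/g(-4), exactly 6
```

With this schedule the estimated ratio lands close to the exact value 6; the record's contribution is the OpenMPI distribution and load balancing of the energy windows, which this serial sketch omits.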
Some data post-processing techniques were involved to produce the expected smooth density of states.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28534799','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28534799"><span>Expectation Maximization Algorithm for Box-Cox Transformation Cure Rate Model and Assessment of Model Misspecification Under Weibull Lifetimes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pal, Suvra; Balakrishnan, Narayanaswamy</p> <p>2018-05-01</p> <p>In this paper, we develop likelihood inference based on the expectation maximization algorithm for
the Box-Cox transformation cure rate model, assuming that the lifetimes follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of the cure rate. Finally, we analyze well-known melanoma data with the model and the inferential method developed here.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19810013454','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19810013454"><span>Evaluation of orbits with incomplete knowledge of the mathematical expectancy and the matrix of covariation of errors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bakhshiyan, B. T.; Nazirov, R. R.; Elyasberg, P. E.</p> <p>1980-01-01</p> <p>The problem of selecting the optimal filtration algorithm and the optimal composition of the measurements is examined, assuming that the precise values of the mathematical expectation and the covariance matrix of the errors are unknown.
It is demonstrated that the optimal filtration algorithm may be used to refine some parameters (for example, the parameters of the gravitational field) after a preliminary determination of the orbital elements by a simpler processing method (for example, the method of least squares).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19380276','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19380276"><span>A pheromone-rate-based analysis on the convergence time of ACO algorithm.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Huang, Han; Wu, Chun-Guo; Hao, Zhi-Feng</p> <p>2009-08-01</p> <p>Ant colony optimization (ACO) has widely been applied to solve combinatorial optimization problems in recent years. There are few studies, however, on its convergence time, which reflects how many iterations ACO algorithms spend converging to the optimal solution. Based on the absorbing Markov chain model, we analyze the ACO convergence time in this paper. First, we present a general result for the estimation of convergence time to reveal the relationship between convergence time and pheromone rate. This general result is then extended to a two-step analysis of the convergence time, which includes the following: 1) the iteration time that the pheromone rate spends on reaching the objective value and 2) the convergence time that is calculated with the objective pheromone rate in expectation. Furthermore, four brief ACO algorithms are investigated by using the proposed theoretical results as case studies.
Finally, the case-study conclusion that the pheromone rate and its deviation determine the expected convergence time is numerically verified with the experimental results of four one-ant ACO algorithms and four ten-ant ACO algorithms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12472248','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12472248"><span>Wrist kinetics after luno-triquetral dissociation: the changes in moment arms of the flexor carpi ulnaris tendon.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Tang, Jin Bo; Xie, Ren Gou; Yu, Xiao Wei; Chen, Feng</p> <p>2002-11-01</p> <p>Wrist biomechanics after luno-triquetral (LT) dissociation is important for understanding the clinical sequelae of the disease and for determining its treatment options. The LT interosseous ligament plays an important role in stabilizing the joint, and damage to the ligament would be expected to significantly increase the moment arms of the tendon of the flexor carpi ulnaris (FCU), the principal ulnar wrist flexor. We investigated the changes in moment arms of the FCU tendon after various amounts of sectioning of the ligaments proven to be associated with LT dissociation. In six fresh frozen cadaveric upper extremities, excursions of the FCU tendon were recorded simultaneously with wrist joint angulation during wrist flexion-extension and radioulnar deviation. Tendon excursions were measured in intact wrists, in wrists with sectioning of the dorsal portion of the LT interosseous ligament, in wrists with sectioning of the entire LT interosseous ligament, and finally in wrists with further sectioning of the dorsal radiotriquetral and intercarpal ligaments.
Moment arms of the tendon were calculated from tendon excursions and joint motion angulations and expressed as percentage changes from those in the intact wrist. During wrist flexion-extension, moment arms of the FCU tendon after sectioning of the entire LT interosseous ligament and after sectioning of the two capsular ligaments were 112 +/- 7% and 114 +/- 8%, respectively; these values were significantly greater than those in the intact wrist. During radioulnar deviation, the moment arms were 114 +/- 11% after sectioning of the dorsal portion of the LT interosseous ligament, 134 +/- 15% after sectioning of the entire ligament, and 153 +/- 18% after sectioning of the capsular ligaments, again significantly greater than in the normal wrist. The increase in moment arms of the FCU tendon after loss of integrity of the LT interosseous ligament and dorsal capsular ligaments may contribute to the clinical sequelae of LT dissociation and to the difficulty in treating this disorder.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1211424-unexpected-magnetic-domain-behavior-ltp-mnbi','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1211424-unexpected-magnetic-domain-behavior-ltp-mnbi"><span>Unexpected Magnetic Domain Behavior in LTP-MnBi</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Nguyen, PK; Jin, S; Berkowitz, AE</p> <p>2013-07-01</p> <p>Low-temperature-phase MnBi (LTP-MnBi) has attracted much interest as a potential rare-earth-free permanent magnet material because of its high uniaxial magnetocrystalline anisotropy at room temperature, K ≈ 10^7 ergs/cc, and the unusual increase of anisotropy with increasing temperature, with an accompanying increase of the coercive force (H_C) with temperature. 
However, due to the complex Mn-Bi phase diagram, bulk samples of LTP-MnBi with the optimum saturation moment, ~75-76 emu/g, have been achieved only with zone-refined single crystals. We have prepared polycrystalline samples of LTP-MnBi by induction melting and annealing at 300 degrees C. The moment in 70 kOe is 73.5 emu/g, but H_C is only 50 Oe. This is quite surprising: the high saturation moment indicates the dominating presence of LTP-MnBi. Therefore, an H_C of some significant fraction of 2K/M_S ≈ 30 kOe would seem reasonable in this polycrystalline sample. By examining "Bitter" patterns, we show that the sample is composed of ~50-100 μm crystallites. The randomly oriented crystallites exhibit the variety of magnetic domain structures and orientations expected from the hexagonal-structured MnBi with its strong uniaxial anisotropy. Clearly, the reversal of magnetization in the sample proceeds by the low-field nucleation of reversed magnetization in each crystallite, rather than by a wall-pinning mechanism. When the annealed sample was milled into fine particles, H_C increased by several orders of magnitude, as expected.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JEPT...90.1445V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JEPT...90.1445V"><span>Method of Determining the Aerodynamic Characteristics of a Flying Vehicle from the Surface Pressure</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Volkov, V. F.; Dyad'kin, A. A.; Zapryagaev, V. I.; Kiselev, N. 
P.</p> <p>2017-11-01</p> <p>The paper describes the procedure for determining the aerodynamic characteristics (forces and moments acting on a model of a flying vehicle) from the results of pressure measurements on the surface of a model of a re-entry vehicle with operating retrofire brake rockets in the regime of hovering over a landing surface. The algorithm for constructing the interpolation polynomial over interpolation nodes in the radial and azimuthal directions, under the assumption of symmetry of the pressure distribution over the surface, is presented. The aerodynamic forces and moments at different tilts of the vehicle are obtained. It is shown that the aerodynamic force components acting on the vehicle in the regime of landing and caused by the action of the vertical velocity deceleration nozzle jets are negligibly small in comparison with the engine thrust.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvD..96i6018V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvD..96i6018V"><span>New method of computing the contributions of graphs without lepton loops to the electron anomalous magnetic moment in QED</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Volkov, Sergey</p> <p>2017-11-01</p> <p>This paper presents a new method of numerical computation of the mass-independent QED contributions to the electron anomalous magnetic moment which arise from Feynman graphs without closed electron loops. The method is based on a forestlike subtraction formula that removes all ultraviolet and infrared divergences in each Feynman graph before integration in Feynman-parametric space. 
The integration is performed by an importance sampling Monte-Carlo algorithm with a probability density function constructed individually for each Feynman graph. The method is fully automated at any order of the perturbation series. The results of applying the method to 2-loop, 3-loop, 4-loop Feynman graphs, and to some individual 5-loop graphs are presented, as well as a comparison of this method with others with respect to Monte Carlo convergence speed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005JPhA...3810107D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005JPhA...3810107D"><span>Efficient algorithms for construction of recurrence relations for the expansion and connection coefficients in series of Al-Salam Carlitz I polynomials</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Doha, E. H.; Ahmed, H. M.</p> <p>2005-12-01</p> <p>Two formulae expressing explicitly the derivatives and moments of Al-Salam-Carlitz I polynomials of any degree and for any order in terms of Al-Salam-Carlitz I polynomials themselves are proved. Two other formulae for the expansion coefficients of general-order derivatives D_q^p f(x), and for the moments x^ℓ D_q^p f(x), of an arbitrary function f(x) in terms of its original expansion coefficients are also obtained. Application of these formulae for solving q-difference equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. 
An algebraic symbolic approach (using Mathematica) for building and solving recursively for the connection coefficients between Al-Salam-Carlitz I polynomials and any system of basic hypergeometric orthogonal polynomials, belonging to the q-Hahn class, is described.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19930050221&hterms=arm+vibration&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Darm%2Bvibration','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19930050221&hterms=arm+vibration&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Darm%2Bvibration"><span>Slewing maneuvers and vibration control of space structures by feedforward/feedback moment-gyro controls</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Yang, Li-Farn; Mikulas, Martin M., Jr.; Park, K. C.; Su, Renjeng</p> <p>1993-01-01</p> <p>This paper presents a moment-gyro control approach to the maneuver and vibration suppression of a flexible truss arm undergoing a constant slewing motion. The overall slewing motion is triggered by a feedforward input, and a companion feedback controller is employed to augment the feedforward input and subsequently to control vibrations. The feedforward input for the given motion requirement is determined from the combined CMG (Control Momentum Gyro) devices and the desired rigid-body motion. The rigid-body dynamic model has enabled us to identify the attendant CMG momentum saturation constraints. The task of vibration control is carried out in two stages: first, the search for a suitable CMG placement along the beam span for various slewing maneuvers, and subsequently, the development of Liapunov-based control algorithms for CMG spin-stabilization. 
Both analytical and numerical results are presented to show the effectiveness of the present approach.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28863706','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28863706"><span>Multi-rate cubature Kalman filter based data fusion method with residual compensation to adapt to sampling rate discrepancy in attitude measurement system.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Guo, Xiaoting; Sun, Changku; Wang, Peng</p> <p>2017-08-01</p> <p>This paper investigates the multi-rate inertial and vision data fusion problem in nonlinear attitude measurement systems, where the sampling rate of the inertial sensor is much faster than that of the vision sensor. To fully exploit the high-frequency inertial data and obtain favorable fusion results, a multi-rate CKF (Cubature Kalman Filter) algorithm with estimated residual compensation is proposed to cope with the sampling rate discrepancy. Between samples of the slow observation data, the observation noise can be regarded as infinite, the Kalman gain approaches zero, and the residual is unknown, so the filter's estimated state cannot be compensated. To obtain compensation at these moments, the state-error and residual formulas are modified relative to those used at moments when observation data are available. A self-propagation equation for the state error is established to propagate this quantity from moments with observations to moments without. In addition, a multiplicative adjustment factor is introduced as the Kalman gain, acting on the residual. Then the filter estimated state can be compensated even when there are no visual observation data. The proposed method is tested and verified in a practical setup. 
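A one-dimensional linear analogue can illustrate the inter-sample compensation idea. The paper works with a cubature Kalman filter on attitude states; the random-walk model, the `alpha` factor, and all tuning values below are illustrative assumptions only, with `measurements` read as the slow sensor's value only every `ratio` steps.

```python
def multirate_filter(measurements, ratio=5, q=1e-3, r=1e-2, alpha=0.3):
    """Fast predict every step; slow observation update every `ratio` steps."""
    x, p = 0.0, 1.0          # state estimate and its variance
    residual = 0.0           # disagreement left over from the last update
    estimates = []
    for k, z in enumerate(measurements):
        p += q                                  # predict (random-walk model)
        if k % ratio == 0:                      # slow observation available
            gain = p / (p + r)
            x += gain * (z - x)                 # standard update
            p *= 1 - gain
            residual = z - x                    # remaining disagreement
        else:                                   # inter-sample compensation:
            x += alpha * residual               # spend a scaled residual
            residual *= 1 - alpha               # instead of no correction
        estimates.append(x)
    return estimates
```

With a constant true value, the estimate keeps moving toward the last observed disagreement between vision updates rather than freezing at the last corrected value.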
Compared with multi-rate CKF without residual compensation and single-rate CKF, a significant improvement is obtained on attitude measurement by using the proposed multi-rate CKF with inter-sampling residual compensation. The experimental results, with superior precision and reliability, show the effectiveness of the proposed method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA564064','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA564064"><span>Performance Assessment of Multi-Array Processing with Ground Truth for Infrasonic, Seismic and Seismo-Acoustic Events</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2012-07-03</p> <p>of white noise vectors with square-summable coefficients and components with finite fourth-order moments (Shumway et al., 1999). Here, the infrasonic...center in a star-like configuration for reducing the background noise from wind activity along the boundary layer. Sensor data is recorded by 24-bit...the PMCC Algorithm In Figure 19, under the assumption that the source (red star) is far from the arrays, PMCC starts coherence processing using</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ1124722.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ1124722.pdf"><span>High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. 
ETS RR-16-34</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>von Davier, Matthias</p> <p>2016-01-01</p> <p>This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ1053279.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ1053279.pdf"><span>The Methods and Goals of Teaching Sorting Algorithms in Public Education</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Bernát, Péter</p> <p>2014-01-01</p> <p>The topic of sorting algorithms is a pleasant subject of informatics education. This is not only because the notion of sorting is well known from everyday life, but also because, as an algorithmic task, it is easy to define and demonstrate, whether we expect naive or practical solutions. 
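The "naive or practical" contrast is easy to make concrete. A minimal example of the naive side is insertion sort, a common classroom choice (the paper's actual selection of methods is not reproduced here):

```python
# Naive, classroom-friendly sorting: build the result by inserting each
# element into its place in an already-sorted prefix (O(n^2) worst case).
def insertion_sort(items):
    result = []
    for item in items:
        i = len(result)
        while i > 0 and result[i - 1] > item:
            i -= 1
        result.insert(i, item)
    return result
```

The practical counterpart in Python would simply be the built-in `sorted`, which makes the naive/practical comparison easy to demonstrate on the same input.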
In my paper I will present some of the possible methods…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19980046641','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19980046641"><span>A Novel Approach for Adaptive Signal Processing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chen, Ya-Chin; Juang, Jer-Nan</p> <p>1998-01-01</p> <p>Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second-order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and considerable implementation work has been carried out in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. 
The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29726810','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29726810"><span>Algorithmic psychometrics and the scalable subject.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Stark, Luke</p> <p>2018-04-01</p> <p>Recent public controversies, ranging from the 2014 Facebook 'emotional contagion' study to psychographic data profiling by Cambridge Analytica in the 2016 American presidential election, Brexit referendum and elsewhere, signal watershed moments in which the intersecting trajectories of psychology and computer science have become matters of public concern. The entangled history of these two fields grounds the application of psychological techniques to digital technologies, and an investment in applying calculability to human subjectivity. Today, a quantifiable psychological subject position has been translated, via 'big data' sets and algorithmic analysis, into a model subject amenable to classification through digital media platforms. I term this position the 'scalable subject', arguing it has been shaped and made legible by algorithmic psychometrics, a broad set of affordances in digital platforms shaped by psychology and the behavioral sciences. 
In describing the contours of this 'scalable subject', this paper highlights the urgent need for renewed attention from STS scholars on the psy sciences, and on a computational politics attentive to psychology, emotional expression, and sociality via digital media.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1182509-multi-layer-lanczos-iteration-approach-calculations-vibrational-energies-dipole-transition-intensities-polyatomic-molecules','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1182509-multi-layer-lanczos-iteration-approach-calculations-vibrational-energies-dipole-transition-intensities-polyatomic-molecules"><span>Multi-layer Lanczos iteration approach to calculations of vibrational energies and dipole transition intensities for polyatomic molecules</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Yu, Hua-Gen</p> <p>2015-01-28</p> <p>We report a rigorous full dimensional quantum dynamics algorithm, the multi-layer Lanczos method, for computing vibrational energies and dipole transition intensities of polyatomic molecules without any dynamics approximation. The multi-layer Lanczos method is developed by using a few advanced techniques including the guided spectral transform Lanczos method, multi-layer Lanczos iteration approach, recursive residue generation method, and dipole-wavefunction contraction. The quantum molecular Hamiltonian at the total angular momentum J = 0 is represented in a set of orthogonal polyspherical coordinates so that the large amplitude motions of vibrations are naturally described. In particular, the algorithm is general and problem-independent. 
An application is illustrated by calculating the infrared vibrational dipole transition spectrum of CH₄ based on the ab initio T8 potential energy surface of Schwenke and Partridge and the low-order truncated ab initio dipole moment surfaces of Yurchenko and co-workers. A comparison with experiments is made. The algorithm is also applicable to Raman polarizability active spectra.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1338767','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1338767"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron</p> <p></p> <p>Recent efforts at Los Alamos National Laboratory to develop a moment-based, scale-bridging [or high-order (HO)–low-order (LO)] algorithm for solving large varieties of the transport (kinetic) systems have shown promising results. A part of our ongoing effort is incorporating this methodology into the framework of the Eulerian Applications Project to achieve algorithmic acceleration of radiation-hydrodynamics simulations in production software. By starting from the thermal radiative transfer equations with a simple material-motion correction, we derive a discretely consistent energy balance equation (LO equation). We demonstrate that the corresponding LO system for the Monte Carlo HO solver is closely related to the original LO system without material-motion corrections. We test the implementation on a radiative shock problem and show consistency between the energy densities and temperatures in the HO and LO solutions as well as agreement with the semianalytic solution. We also test the approach on a more challenging two-dimensional problem and demonstrate accuracy enhancements and algorithmic speedups. 
This paper extends a recent conference paper by including multigroup effects.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.S23A2753H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.S23A2753H"><span>An Envelope Based Feedback Control System for Earthquake Early Warning: Reality Check Algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Heaton, T. H.; Karakus, G.; Beck, J. L.</p> <p>2016-12-01</p> <p>Earthquake early warning systems are, in general, designed to be open-loop control systems in such a way that the output, i.e., the warning messages, only depends on the input, i.e., recorded ground motions, up to the moment when the message is issued in real-time. We propose an algorithm, called the Reality Check Algorithm (RCA), which would assess the accuracy of issued warning messages and then feed the outcome of the assessment back into the system. Then, the system would modify its messages if necessary. That is, we are proposing to convert earthquake early warning systems into feedback control systems by integrating them with RCA. RCA works by continuously monitoring and comparing the observed ground motions' envelopes to the predicted envelopes of Virtual Seismologist (Cua 2005). 
The accuracy of the system's magnitude and location (both spatial and temporal) estimates is assessed separately by probabilistic classification models, which are trained by a Sparse Bayesian Learning technique using an Automatic Relevance Determination prior.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19900020578','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19900020578"><span>A spatial operator algebra for manipulator modeling and control</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rodriguez, G.; Kreutz, Kenneth; Jain, Abhinandan</p> <p>1989-01-01</p> <p>A recently developed spatial operator algebra, useful for modeling, control, and trajectory design of manipulators, is discussed. The elements of this algebra are linear operators whose domain and range spaces consist of forces, moments, velocities, and accelerations. The effect of these operators is equivalent to a spatial recursion along the span of a manipulator. Inversion of operators can be efficiently obtained via techniques of recursive filtering and smoothing. The operator algebra provides a high-level framework for describing the dynamic and kinematic behavior of a manipulator and control and trajectory design algorithms. The interpretation of expressions within the algebraic framework leads to enhanced conceptual and physical understanding of manipulator dynamics and kinematics. Furthermore, implementable recursive algorithms can be immediately derived from the abstract operator expressions by inspection. Thus, the transition from an abstract problem formulation and solution to the detailed mechanization of specific algorithms is greatly simplified. 
The analytical formulation of the operator algebra, as well as its implementation in the Ada programming language, are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10605E..11L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10605E..11L"><span>A novel rotational invariants target recognition method for rotating motion blurred images</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lan, Jinhui; Gong, Meiling; Dong, Mingwei; Zeng, Yiliang; Zhang, Yuzhen</p> <p>2017-11-01</p> <p>The image formed by the sensor is blurred by the rotational motion of the carrier, which greatly reduces the target recognition rate. Although the traditional approach of restoring the image first and then identifying the target can improve the recognition rate, it is time-consuming. In order to solve this problem, a model for extracting rotational blur invariants was constructed that recognizes the target directly. The model includes three metric layers. The object-description capability of the metric algorithms in the three layers, which comprise a gray-value statistical algorithm, an improved round projection transformation algorithm, and rotation-convolution moment invariants, ranges from low to high, and the metric layer with the lowest description ability serves as the input, which gradually eliminates non-target pixels from the degraded image. 
Experimental results show that the proposed model can improve the correct target recognition rate for blurred images while achieving a good balance between computational complexity and regional description capability.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25583871','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25583871"><span>Wind turbine blade shear web disbond detection using rotor blade operational sensing and data analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Myrent, Noah; Adams, Douglas E; Griffith, D Todd</p> <p>2015-02-28</p> <p>A wind turbine blade's 
structural dynamic response is simulated and analysed with the goal of characterizing the presence and severity of a shear web disbond. Computer models of a 5 MW offshore utility-scale wind turbine were created to develop effective algorithms for detecting such damage. Through data analysis and with the use of blade measurements, a shear web disbond was quantified according to its length. An aerodynamic sensitivity study was conducted to ensure robustness of the detection algorithms. In all analyses, the blade's flap-wise acceleration and root-pitching moment were the clearest indicators of the presence and severity of a shear web disbond. A combination of blade and non-blade measurements was formulated into a final algorithm for the detection and quantification of the disbond. The probability of detection was 100% for the optimized wind speed ranges in laminar, 30% horizontal shear and 60% horizontal shear conditions. © 2015 The Author(s) Published by the Royal Society. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20010000270&hterms=statistics&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dstatistics','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20010000270&hterms=statistics&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dstatistics"><span>Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Williams Colin P.</p> <p>1999-01-01</p> <p>Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long term behavior of such processes is only tractable for very simple types of stochastic processes such as Markovian processes. 
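In the absence of analytic results, the classical route is direct simulation. A sketch of estimating moments of an N-step random walk by Monte Carlo, the O(N)-per-path baseline that the quantum algorithm is compared against (the walk model and sample counts here are illustrative):

```python
import random

# Classical baseline: estimate the order-`order` moment of the endpoint of
# an N-step symmetric +/-1 random walk by direct simulation. Each sample
# path costs O(N) steps, which is the cost the quantum algorithm improves on.
def empirical_moment(n_steps, order, n_paths=2000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        position = 0
        for _ in range(n_steps):
            position += 1 if rng.random() < 0.5 else -1
        total += position ** order
    return total / n_paths
```

For this walk the first moment is 0 and the second moment equals the number of steps, which the estimates approach as `n_paths` grows.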
However, in real-world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(√N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012SPIE.8345E..07N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012SPIE.8345E..07N"><span>Damage diagnosis algorithm using a sequential change point detection method with an unknown distribution for damage</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.</p> <p>2012-04-01</p> <p>This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. 
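The flavor of such a sequential test can be sketched with a scalar CUSUM-style statistic whose post-change mean is estimated on the fly. The paper estimates a full post-damage feature distribution; this simplified scalar version, its `min_shift` floor, and its threshold are illustrative assumptions only.

```python
# CUSUM-style sequential test for an upward mean shift in N(0, sigma^2)
# samples, with the post-change mean estimated online (floored at
# `min_shift` so the statistic is well defined before the change occurs).
def detect_change(samples, pre_mean=0.0, sigma=1.0, min_shift=0.5, threshold=8.0):
    stat = 0.0
    running_sum, count = 0.0, 0
    for t, x in enumerate(samples):
        running_sum += x
        count += 1
        # crude running estimate of the post-change mean shift
        shift = max(running_sum / count - pre_mean, min_shift)
        # log-likelihood-ratio increment for "shifted mean" vs "no shift"
        increment = shift * (x - pre_mean - shift / 2.0) / sigma ** 2
        stat = max(0.0, stat + increment)
        if stat > threshold:
            return t          # index at which the alarm is raised
    return None               # no change declared
```

On a sequence that jumps from 0 to 2, the statistic stays at zero before the change and then climbs steadily, raising an alarm a handful of samples after the shift, which is the delay-versus-false-alarm trade-off the threshold controls.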
The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=duodenal+AND+ulcer&id=EJ564267','ERIC'); return false;" href="https://eric.ed.gov/?q=duodenal+AND+ulcer&id=EJ564267"><span>Feature Selection and Effective Classifiers.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Deogun, Jitender S.; Choubey, Suresh K.; Raghavan, Vijay V.; Sever, Hayri</p> <p>1998-01-01</p> <p>Develops and analyzes four algorithms for feature selection in the context of rough set methodology. Experimental results confirm the expected relationship between the time complexity of these algorithms and the classification accuracy of the resulting upper classifiers. 
When compared, results of upper classifiers perform better than lower…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=Hegemonic+AND+Masculinity&pg=7&id=ED519729','ERIC'); return false;" href="https://eric.ed.gov/?q=Hegemonic+AND+Masculinity&pg=7&id=ED519729"><span>"They'll Expect More Bad Things from Us.": Latino/a Youth Constructing Identities in a Racialized High School in New Mexico</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Lechuga, Chalane Elizabeth</p> <p>2010-01-01</p> <p>This research explores how Latino/a high school students in New Mexico constitute their racial identities in this particular historical moment, the post-Civil Rights colorblind era. I explore what their chosen nomenclatures and employed discourses suggest about the relationship between their racial identities and academic achievement. The research…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011SPIE.7981E..4NN','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011SPIE.7981E..4NN"><span>Application of a sparse representation method using K-SVD to data compression of experimental ambient vibration data for SHM</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Noh, Hae Young; Kiremidjian, Anne S.</p> <p>2011-04-01</p> <p>This paper introduces a data compression method using the K-SVD algorithm and its application to experimental ambient vibration data for structural health monitoring purposes. Because many damage diagnosis algorithms that use system identification require vibration measurements of multiple locations, it is necessary to transmit long threads of data. 
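One way to cut that transmission volume, in the spirit of the sparse-representation scheme this abstract goes on to describe, is to encode each signal segment as a few dictionary coefficients via orthogonal matching pursuit. The sketch below uses a random, fixed dictionary for illustration; a real K-SVD system would additionally learn the dictionary from data:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: repeatedly pick the dictionary
    atom most correlated with the current residual, then refit all picked
    atoms by least squares."""
    support, coeffs = [], np.zeros(D.shape[1])
    residual = y.astype(float)
    for _ in range(n_nonzero):
        atom = int(np.argmax(np.abs(D.T @ residual)))
        if atom not in support:
            support.append(atom)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 96))        # mildly over-complete dictionary
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
x_true = np.zeros(96)
x_true[[5, 40]] = [2.0, -1.5]            # a 2-sparse segment in dictionary coordinates
y = D @ x_true                           # length-64 signal segment
x_hat = omp(D, y, n_nonzero=2)
```

Only the few (index, coefficient) pairs per segment, plus the shared dictionary, would then need to be transmitted.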
In wireless sensor networks for structural health monitoring, however, data transmission is often a major source of battery consumption. Therefore, reducing the amount of data to transmit can significantly lengthen the battery life and reduce maintenance cost. The K-SVD algorithm was originally developed in information theory for sparse signal representation. This algorithm creates an optimal over-complete set of bases, referred to as a dictionary, using singular value decomposition (SVD) and represents the data as sparse linear combinations of these bases using the orthogonal matching pursuit (OMP) algorithm. Since ambient vibration data are stationary, we can segment them and represent each segment sparsely. Then only the dictionary and the sparse vectors of the coefficients need to be transmitted wirelessly for restoration of the original data. We applied this method to ambient vibration data measured from a four-story steel moment resisting frame. The results show that the method can compress the data efficiently and restore the data with very little error.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19880013502','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19880013502"><span>Expanded envelope concepts for aircraft control-element failure detection and identification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Weiss, Jerold L.; Hsu, John Y.</p> <p>1988-01-01</p> <p>The purpose of this effort was to develop and demonstrate concepts for expanding the envelope of failure detection and isolation (FDI) algorithms for aircraft-path failures. An algorithm which uses analytic-redundancy in the form of aerodynamic force and moment balance equations was used. 
Because aircraft-path FDI uses analytical models, there is a tradeoff between accuracy and the ability to detect and isolate failures. For single flight condition operation, design and analysis methods are developed to deal with this robustness problem. When the departure from the single flight condition is significant, algorithm adaptation is necessary. Adaptation requirements for the residual generation portion of the FDI algorithm are interpreted as the need for accurate, large-motion aero-models, over a broad range of velocity and altitude conditions. For the decision-making part of the algorithm, adaptation may require modifications to filtering operations, thresholds, and projection vectors that define the various hypothesis tests performed in the decision mechanism. Methods of obtaining and evaluating adequate residual generation and decision-making designs have been developed. The application of the residual generation ideas to a high-performance fighter is demonstrated by developing adaptive residuals for the AFTI-F-16 and simulating their behavior under a variety of maneuvers using the results of a NASA F-16 simulation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20110009983','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20110009983"><span>Icing Encounter Duration Sensitivity Study</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Addy, Harold E., Jr.; Lee, Sam</p> <p>2011-01-01</p> <p>This paper describes a study performed to investigate how aerodynamic performance degradation progresses with time throughout an exposure to icing conditions. It is one of the first documented studies of the effects of ice contamination on aerodynamic performance at various points in time throughout an icing encounter. 
Two-dimensional NACA-23012 airfoils of 1.5 and 6 ft chord were subjected to icing conditions in the NASA Icing Research Tunnel for varying lengths of time. At the end of each run, lift, drag, and pitching moment measurements were made. Measurements with the 1.5 ft chord model showed that maximum lift and pitching moment degraded more rapidly early in the exposure and degraded more slowly as time progressed. Drag for the 1.5 ft chord model degraded more linearly with time, although drag for very short exposure durations was slightly higher than expected. Only drag measurements were made with the 6 ft chord airfoil. Here, drag for the long exposures was higher than expected. A novel comparison of drag measurements against an icing scaling parameter (accumulation parameter times collection efficiency) was used to compare the data from the two different-sized models. The comparisons provided a means of assessing the level of fidelity needed for accurate icing simulation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1033186-model-based-clustering-regression-time-series-data-via-apecm-aecm-algorithm-sung-even-faster-beat','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1033186-model-based-clustering-regression-time-series-data-via-apecm-aecm-algorithm-sung-even-faster-beat"><span>Model-Based Clustering of Regression Time Series Data via APECM -- An AECM Algorithm Sung to an Even Faster Beat</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Chen, Wei-Chen; Maitra, Ranjan</p> <p>2011-01-01</p> <p>We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p.
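The EM machinery that the AECM/APECM variants accelerate can be illustrated on a deliberately simple case, a two-component one-dimensional Gaussian mixture with unit variances (our toy example, not the paper's autoregressive regression mixture):

```python
import math
import random

def em_gmm(data, iters=50):
    """EM for a 2-component 1-D Gaussian mixture with unit variances.
    E-step: component responsibilities; M-step: weighted means and mixing weight."""
    mu = [min(data), max(data)]   # crude initialisation
    w = 0.5                        # weight of component 0
    for _ in range(iters):
        # E-step: posterior probability that each point came from component 1
        resp = []
        for x in data:
            p0 = w * math.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = (1 - w) * math.exp(-0.5 * (x - mu[1]) ** 2)
            resp.append(p1 / (p0 + p1))
        # M-step: re-estimate means and mixing weight from responsibilities
        r1 = sum(resp)
        r0 = len(data) - r1
        mu[0] = sum((1 - r) * x for r, x in zip(resp, data)) / r0
        mu[1] = sum(r * x for r, x in zip(resp, data)) / r1
        w = r0 / len(data)
    return sorted(mu), w

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(200)] + [rng.gauss(5.0, 1.0) for _ in range(200)]
(mu_a, mu_b), w = em_gmm(data)
```

AECM/APECM reorganize exactly this E/M alternation so that expensive conditional maximizations touch less data per sweep.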
Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results on our simulation experiments show improved performance in both the number of iterations and the computation time. The methodology is applied to the problem of clustering mutual fund data on the basis of their average annual per cent returns and in the presence of economic indicators.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24068888','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24068888"><span>Knee point search using cascading top-k sorting with minimized time complexity.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Zheng; Tseng, Shian-Shyong</p> <p>2013-01-01</p> <p>Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using the cascading top-k sorting when a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps.
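A top-k step of this kind can be sketched as a quicksort-style partial selection, which returns the k largest elements without fully sorting the input (illustrative; the paper's algorithm additionally optimizes the choice of k per step):

```python
import random

def top_k(items, k, rng=random.Random(0)):
    """Return the k largest elements in descending order using a quicksort
    variation: partition around a pivot and recurse into only the side that
    can still contain top-k elements (expected linear time)."""
    if k <= 0 or not items:
        return []
    pivot = rng.choice(items)
    larger = [x for x in items if x > pivot]
    equal = [x for x in items if x == pivot]
    if k <= len(larger):
        return top_k(larger, k, rng)
    if k <= len(larger) + len(equal):
        return sorted(larger, reverse=True) + equal[: k - len(larger)]
    smaller = [x for x in items if x < pivot]
    return (sorted(larger, reverse=True) + equal
            + top_k(smaller, k - len(larger) - len(equal), rng))

vals = list(range(100))
random.Random(3).shuffle(vals)
top5 = top_k(vals, 5)
```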
In each step, an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70023511','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70023511"><span>Comments on "Failures in detecting volcanic ash from a satellite-based technique"</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Prata, F.; Bluth, G.; Rose, B.; Schneider, D.; Tupper, A.</p> <p>2001-01-01</p> <p>The recent paper by Simpson et al. [Remote Sens. Environ. 72 (2000) 191.] on failures to detect volcanic ash using the 'reverse' absorption technique provides a timely reminder of the danger that volcanic ash presents to aviation and the urgent need for some form of effective remote detection. The paper unfortunately suffers from a fundamental flaw in its methodology and numerous errors of fact and interpretation. For the moment, the 'reverse' absorption technique provides the best means for discriminating volcanic ash clouds from meteorological clouds. The purpose of our comment is not to defend any particular algorithm; rather, we point out some problems with Simpson et al.'s analysis and re-state the conditions under which the 'reverse' absorption algorithm is likely to succeed. © 2001 Elsevier Science Inc.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28841314','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28841314"><span>Optimal Alignment of Structures for Finite and Periodic Systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Griffiths, Matthew; Niblett, Samuel P.; Wales, David J</p> <p>2017-10-10</p> <p>Finding the optimal alignment between two structures is important for identifying the minimum root-mean-square distance (RMSD) between them and as a starting point for calculating pathways. Most current algorithms for aligning structures are stochastic, scale exponentially with the size of the structure, and their performance can be unreliable. We present two complementary methods for aligning structures corresponding to isolated clusters of atoms and to condensed matter described by a periodic cubic supercell. The first method (Go-PERMDIST), a branch and bound algorithm, locates the global minimum RMSD deterministically in polynomial time. The run time increases for larger RMSDs. The second method (FASTOVERLAP) is a heuristic algorithm that aligns structures by finding the global maximum kernel correlation between them using fast Fourier transforms (FFTs) and fast SO(3) transforms (SOFTs). For periodic systems, FASTOVERLAP scales with the square of the number of identical atoms in the system, reliably finds the best alignment between structures that are not too distant, and shows significantly better performance than existing algorithms. The expected run time for Go-PERMDIST is longer than FASTOVERLAP for periodic systems. For finite clusters, the FASTOVERLAP algorithm is competitive with existing algorithms.
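For a fixed atom correspondence, the minimum-RMSD rigid alignment underlying all of these methods has a closed-form solution, the Kabsch construction via an SVD; the hard parts the paper actually addresses (permutations and periodicity) are omitted in this sketch:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Minimum RMSD between conformations P and Q (n x 3 arrays, same atom
    order) over all rigid rotations and translations (Kabsch algorithm)."""
    P = P - P.mean(axis=0)                   # remove translations
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                              # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid improper rotations (reflections)
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation mapping P onto Q
    diff = P @ R.T - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

rng = np.random.default_rng(1)
P = rng.standard_normal((12, 3))             # a random 12-atom "cluster"
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(A) < 0:
    A[:, 0] *= -1.0                          # make A a proper rotation
Q = P @ A.T + np.array([1.0, -2.0, 0.5])     # rotated and translated copy
r = kabsch_rmsd(P, Q)                        # should be ~0 for an exact copy
```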
The expected run time for Go-PERMDIST to find the global RMSD between two structures deterministically is generally longer than for existing stochastic algorithms. However, with an earlier exit condition, Go-PERMDIST exhibits similar or better performance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1399726-exploiting-redundancy-application-scalability-cost-effective-time-constrained-execution-hpc-applications-amazon-ec2','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1399726-exploiting-redundancy-application-scalability-cost-effective-time-constrained-execution-hpc-applications-amazon-ec2"><span>Exploiting Redundancy and Application Scalability for Cost-Effective, Time-Constrained Execution of HPC Applications on Amazon EC2</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Marathe, Aniruddha P.; Harris, Rachel A.; Lowenthal, David K.; ...</p> <p>2015-12-17</p> <p>The use of clouds to execute high-performance computing (HPC) applications has greatly increased recently. Clouds provide several potential advantages over traditional supercomputers and in-house clusters. The most popular cloud is currently Amazon EC2, which provides fixed-cost and variable-cost, auction-based options. The auction market trades lower cost for potential interruptions that necessitate checkpointing; if the market price exceeds the bid price, a node is taken away from the user without warning. We explore techniques to maximize performance per dollar given a time constraint within which an application must complete. Specifically, we design and implement multiple techniques to reduce expected cost by exploiting redundancy in the EC2 auction market. We then design an adaptive algorithm that selects a scheduling algorithm and determines the bid price.
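The flavor of the bid-price decision can be conveyed with a toy expected-cost model; all numbers and the cost function below are hypothetical illustrations, not the authors' model:

```python
def expected_cost(bid, prices, runtime_h, ckpt_overhead_h=0.2):
    """Toy model: each hour the market price is drawn from the empirical
    sample `prices`. Hours with price <= bid run and cost that price; hours
    above the bid interrupt the job, adding checkpoint-recovery overhead."""
    usable = [p for p in prices if p <= bid]
    if not usable:
        return float("inf")                      # job never runs at this bid
    p_run = len(usable) / len(prices)            # fraction of hours we keep the node
    mean_price = sum(usable) / len(usable)       # average price actually paid
    # expected wall-clock hours, inflated by interruptions and their overhead
    hours = runtime_h / p_run + ckpt_overhead_h * (1.0 / p_run - 1.0) * runtime_h
    return hours * mean_price

def best_bid(prices, runtime_h):
    """Pick the candidate bid (an observed price level) minimizing expected cost."""
    return min(sorted(set(prices)), key=lambda b: expected_cost(b, prices, runtime_h))

prices = [0.10, 0.12, 0.12, 0.15, 0.40, 0.90]    # hypothetical hourly spot prices
bid = best_bid(prices, runtime_h=10.0)
```

The model captures the trade-off in the abstract: bidding too low invites constant interruption, bidding too high pays peak prices.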
We show that our adaptive algorithm executes programs up to seven times cheaper than using the on-demand market and up to 44 percent cheaper than the best non-redundant, auction-market algorithm. We extend our adaptive algorithm to incorporate application scalability characteristics for further cost savings. In conclusion, we show that the adaptive algorithm informed with scalability characteristics of applications achieves up to 56 percent cost savings compared to the expected cost for the base adaptive algorithm run at a fixed, user-defined scale.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JMoSp.300...16F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JMoSp.300...16F"><span>Electron electric dipole moment and hyperfine interaction constants for ThO</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fleig, Timo; Nayak, Malaya K.</p> <p>2014-06-01</p> <p>A recently implemented relativistic four-component configuration interaction approach to study P- and T-odd interaction constants in atoms and molecules is employed to determine the electron electric dipole moment effective electric field in the Ω=1 first excited state of the ThO molecule. We obtain a value of Eeff = 75.2 GV/cm with an estimated error bar of 3%, 10% smaller than a previously reported result (Skripnikov et al., 2013). Using the same wavefunction model we obtain an excitation energy of Tv(Ω=1) = 5410 cm⁻¹, in accord with the experimental value within 2%. In addition, we report the implementation of the magnetic hyperfine interaction constant A|| as an expectation value, resulting in A|| = -1339 MHz for the Ω=1 state in ThO.
The smaller effective electric field increases the previously determined upper bound (Baron et al., 2014) on the electron electric dipole moment to |de| < 9.7×10⁻²⁹ e cm and thus mildly relaxes constraints on possible extensions of the Standard Model of particle physics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004CoTPh..42..798Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004CoTPh..42..798Y"><span>Studies on Electronic Structure and Magnetic Properties of an Organic Magnet with Metallic Mn2+ and Cu2+ Ions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yao, Jian-Guo; Peng, Guang-Xiong</p> <p>2004-11-01</p> <p>The electronic structure and the magnetic properties of the non-pure organic ferromagnetic compound MnCu(pbaOH)(H2O)3 with pbaOH = 2-hydroxy-1,3-propylenebis(oxamato) are studied by using the density-functional theory with local-spin-density approximation. The density of states, total energy, and the spin magnetic moment are calculated. The calculations reveal that the compound MnCu(pbaOH)(H2O)3 has a stable metal-ferromagnetic ground state, and the spin magnetic moment per molecule is 2.208 μB, arising mainly from the Mn and Cu ions. An antiferromagnetic order is expected and the antiferromagnetic exchange interaction of d-electrons of Cu and Mn passes through the antiferromagnetic interaction between the adjacent C, O, and N atoms along the path linking the atoms Cu and Mn. The project was supported by National Natural Science Foundation of China under Grant No. 10375074 and Hubei Automotive Industries Institute Foundation under Grant No.
QY2002-16</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4732103','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4732103"><span>Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Liang, Qiaokang; Zhang, Dan; Coppola, Gianmarc; Mao, Jianxu; Sun, Wei; Wang, Yaonan; Ge, Yunjian</p> <p>2016-01-01</p> <p>Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available in the market for cutting force measurement in machine processes. In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along x-, y-, and z-axis) as well as the cutting moments Mx, My and Mz (i.e., moments about x-, y-, and z-axis) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated, which is applied to a 5-axis parallel kinematic machining center. Calibration experimental results demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the Finite Element Analysis (FEA) and calibration experimental studies validate the high performance of the proposed sensor system that is expected to be adopted into machining processes. 
PMID:26751451</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvD..96i4512W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvD..96i4512W"><span>First lattice QCD study of the gluonic structure of light nuclei</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Winter, Frank; Detmold, William; Gambhir, Arjun S.; Orginos, Kostas; Savage, Martin J.; Shanahan, Phiala E.; Wagman, Michael L.; Nplqcd Collaboration</p> <p>2017-11-01</p> <p>The role of gluons in the structure of the nucleon and light nuclei is investigated using lattice quantum chromodynamics (QCD) calculations. The first moment of the unpolarized gluon distribution is studied in nuclei up to atomic number A = 3 at quark masses corresponding to pion masses of mπ ∼ 450 and 806 MeV. Nuclear modification of this quantity defines a gluonic analogue of the EMC effect and is constrained to be less than ∼10% in these nuclei. This is consistent with expectations from phenomenological quark distributions and the momentum sum rule. In the deuteron, the combination of gluon distributions corresponding to the b1 structure function is found to have a small first moment compared with the corresponding momentum fraction. The first moment of the gluon transversity structure function is also investigated in the spin-1 deuteron, where a nonzero signal is observed at mπ ∼ 806 MeV.
This is the first indication of gluon contributions to nuclear structure that cannot be associated with an individual nucleon.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28548672','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28548672"><span>Complex magnetic orders in small cobalt-benzene molecules.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>González, J W; Alonso-Lanza, T; Delgado, F; Aguilera-Granja, F; Ayuela, A</p> <p>2017-06-07</p> <p>Organometallic clusters based on transition metal atoms are interesting because of their possible applications in spintronics and quantum information processing. In addition to the enhanced magnetism at the nanoscale, the organic ligands may provide a natural shield against unwanted magnetic interactions with the matrices required for applications. Here we show that the organic ligands may lead to non-collinear magnetic order as well as the expected quenching of the magnetic moments. We use different density functional theory (DFT) methods to study the experimentally relevant three cobalt atoms surrounded by benzene rings (Co3Bz3). We found that the benzene rings induce a ground state with non-collinear magnetization, with the magnetic moments localized on the cobalt centers and lying on the plane formed by the three cobalt atoms. We further analyze the magnetism of such a cluster using an anisotropic Heisenberg model where the involved parameters are obtained by a comparison with the DFT results. These results may also explain the recent observation of the null magnetic moment of Co3Bz3+.
Moreover, we propose an additional experimental verification based on electron paramagnetic resonance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22095343-search-variation-fundamental-constants-violations-fundamental-symmetries-using-isotope-comparisons','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22095343-search-variation-fundamental-constants-violations-fundamental-symmetries-using-isotope-comparisons"><span>Search for variation of fundamental constants and violations of fundamental symmetries using isotope comparisons</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Berengut, J. C.; Flambaum, V. V.; Kava, E. M.</p> <p>2011-10-15</p> <p>Atomic microwave clocks based on hyperfine transitions, such as the caesium standard, tick with a frequency that is proportional to the magnetic moment of the nucleus. This magnetic moment varies strongly between isotopes of the same atom, while all atomic electron parameters remain the same. Therefore the comparison of two microwave clocks based on different isotopes of the same atom can be used to constrain variation of fundamental constants. In this paper, we calculate the neutron and proton contributions to the nuclear magnetic moments, as well as their sensitivity to any potential quark-mass variation, in a number of isotopes of experimental interest including ²⁰¹,¹⁹⁹Hg and ⁸⁷,⁸⁵Rb, where experiments are underway. We also include a brief treatment of the dependence of the hyperfine transitions on variation in nuclear radius, which in turn is proportional to any change in quark mass.
Our calculations of expectation values of proton and neutron spin in nuclei are also needed to interpret measurements of violations of fundamental symmetries.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1411418-first-lattice-qcd-study-gluonic-structure-light-nuclei','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1411418-first-lattice-qcd-study-gluonic-structure-light-nuclei"><span>First lattice QCD study of the gluonic structure of light nuclei</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Winter, Frank; Detmold, William; Gambhir, Arjun S.; ...</p> <p>2017-11-28</p> <p>The role of gluons in the structure of the nucleon and light nuclei is investigated using lattice quantum chromodynamics (QCD) calculations. The first moment of the unpolarised gluon distribution is studied in nuclei up to atomic number A = 3 at quark masses corresponding to pion masses of mπ ∼ 450 and 806 MeV. Nuclear modification of this quantity defines a gluonic analogue of the EMC effect and is constrained to be less than ∼10% in these nuclei. This is consistent with expectations from phenomenological quark distributions and the momentum sum rule. In the deuteron, the combination of gluon distributions corresponding to the b1 structure function is found to have a small first moment compared with the corresponding momentum fraction. The first moment of the gluon transversity structure function is also investigated in the spin-1 deuteron, where a non-zero signal is observed at mπ ∼ 806 MeV.
In conclusion, this is the first indication of gluon contributions to nuclear structure that cannot be associated with an individual nucleon.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20050028495','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20050028495"><span>Calculation of Wing Bending Moments and Tail Loads Resulting from the Jettison of Wing Tips During a Symmetrical Pull-Up</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Boshar, John</p> <p>1947-01-01</p> <p>A preliminary analytical investigation was made to determine the feasibility of the basic idea of
controlled failure points as safety valves for the primary airplane structure. The present analysis considers the possibilities of the breakable wing tip which, in failing as a weak link, would relieve the bending moments on the wing structure. The analysis was carried out by computing the time histories of the wing and stabilizer angle of attack in a 10g pull-up for an XF8F airplane with tips fixed and comparing the results with those for the same maneuver (the same elevator motion) but with the tips jettisoned at 8g. The calculations indicate that the increased stability accompanying the loss of the wing tips reduces the bending moment an additional amount above that which would be expected from the initial loss in lift and the inboard shift in load. The vortex shed when the tips are lost may induce a transient load requiring that the tail be made stronger than otherwise.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26751451','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26751451"><span>Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liang, Qiaokang; Zhang, Dan; Coppola, Gianmarc; Mao, Jianxu; Sun, Wei; Wang, Yaonan; Ge, Yunjian</p> <p>2016-01-07</p> <p>Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available in the market for cutting force measurement in machine processes.
In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along the x-, y-, and z-axes) as well as the cutting moments Mx, My and Mz (i.e., moments about the x-, y-, and z-axes) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated and applied to a 5-axis parallel kinematic machining center. Calibration experiments demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the finite element analysis (FEA) and the calibration experiments validate the high performance of the proposed sensor system, which is expected to be adopted in machining processes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.geerassociation.org/component/geer_reports/?view=geerreports&id=75&layout=default','USGSPUBS'); return false;" href="http://www.geerassociation.org/component/geer_reports/?view=geerreports&id=75&layout=default"><span>Geotechnical aspects of the 2016 MW 6.2, MW 6.0, and MW 7.0 Kumamoto earthquakes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Kayen, Robert E.; Dashti, Shideh; Kokusho, T.; Hazarika, H.; Franke, Kevin; Oettle, N.
K.; Wham, Brad; Ramirez Calderon, Jenny; Briggs, Dallin; Guillies, Samantha; Cheng, Katherine; Tanoue, Yutaka; Takematsu, Katsuji; Matsumoto, Daisuke; Morinaga, Takayuki; Furuichi, Hideo; Kitano, Yuuta; Tajiri, Masanori; Chaudhary, Babloo; Nishimura, Kengo; Chu, Chu</p> <p>2017-01-01</p> <p>The 2016 Kumamoto earthquakes are a series of events that began with a moment magnitude 6.2 foreshock on the Hinagu Fault on April 14, 2016, followed by a moment magnitude 6.0 foreshock on the same fault on April 15, 2016, and a larger moment magnitude 7.0 mainshock on the Futagawa Fault on April 16, 2016, beneath Kumamoto City, Kumamoto Prefecture, on Kyushu, Japan. These events are the strongest earthquakes recorded in Kyushu during the modern instrumental era. The earthquakes caused substantial damage to infrastructure, buildings, the cultural heritage of Kumamoto Castle, roads and highways, slopes, and river embankments due to earthquake-induced landsliding and debris flows. Surface fault rupture produced offset and damage to roads, buildings, river levees, and an agricultural dam.
Surprisingly, given the extremely intense earthquake motions, liquefaction occurred only in a few districts of Kumamoto City and in the port areas, indicating that the volcanic soils were less susceptible to liquefaction than expected given the intensity of shaking, a significant finding from this event.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24191069','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24191069"><span>Model-based clustering for RNA-seq data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Si, Yaqing; Liu, Peng; Li, Pinghua; Brutnell, Thomas P</p> <p>2014-01-15</p> <p>RNA-seq technology has been widely adopted as an attractive alternative to microarray-based methods to study global gene expression. However, robust statistical tools to analyze these complex datasets are still lacking. By grouping genes with similar expression profiles across treatments, cluster analysis provides insight into gene functions and networks, and hence is an important technique for RNA-seq data analysis. In this manuscript, we derive clustering algorithms based on appropriate probability models for RNA-seq data. An expectation-maximization (EM) algorithm and two stochastic variants of it are described. In addition, a likelihood-based initialization strategy is proposed to improve the clustering algorithms. Moreover, we present a model-based hybrid-hierarchical clustering method to generate a tree structure that allows visualization of relationships among clusters as well as flexibility in choosing the number of clusters.
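The EM iteration behind such model-based clustering can be sketched with a generic Poisson mixture over count profiles; the NumPy code below is an illustrative reconstruction under that simplified model, not the actual MBCluster.Seq implementation (which also supports negative-binomial models and normalization).

```python
import numpy as np

def poisson_mixture_em(counts, k, n_iter=100, seed=0):
    """Cluster rows of a (genes x samples) count matrix with a k-component
    Poisson mixture via EM. Illustrative sketch, not MBCluster.Seq itself."""
    rng = np.random.default_rng(seed)
    n, d = counts.shape
    # Initialize cluster means from randomly chosen rows (floor avoids log 0).
    lam = counts[rng.choice(n, k, replace=False)].astype(float) + 0.5
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: log posterior responsibilities (Poisson log-lik, log y! dropped).
        logp = (counts[:, None, :] * np.log(lam[None]) - lam[None]).sum(axis=2)
        logp += np.log(pi)[None]
        logp -= logp.max(axis=1, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and per-cluster Poisson means (smoothed).
        pi = resp.mean(axis=0)
        lam = (resp.T @ counts + 0.5) / (resp.sum(axis=0)[:, None] + 1.0)
    return resp.argmax(axis=1)
```

The stochastic variants mentioned in the abstract would replace the soft E-step with a sampled hard assignment.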
Results from both simulation studies and analysis of a maize RNA-seq dataset show that our proposed methods provide better clustering results than alternative methods such as the K-means algorithm and hierarchical clustering methods that are not based on probability models. An R package, MBCluster.Seq, has been developed to implement our proposed algorithms. This R package provides fast computation and is publicly available at http://www.r-project.org</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28339644','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28339644"><span>Development and validation of a structured query language implementation of the Elixhauser comorbidity index.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Epstein, Richard H; Dexter, Franklin</p> <p>2017-07-01</p> <p>Comorbidity adjustment is often performed during outcomes and health care resource utilization research. Our goal was to develop an efficient algorithm in structured query language (SQL) to determine the Elixhauser comorbidity index. We wrote an SQL algorithm to calculate the Elixhauser comorbidities from Diagnosis Related Group and International Classification of Diseases (ICD) codes. Validation was performed by comparison to the expected comorbidities from combinations of these codes and to the 2013 Nationwide Readmissions Database (NRD). The SQL algorithm matched the expected comorbidities perfectly for all combinations of ICD-9, ICD-10, and Diagnosis Related Group codes. Of 13 585 859 evaluable NRD records, the algorithm matched 100% of the listed comorbidities. Processing time was ∼0.05 ms/record. The SQL Elixhauser code was efficient and computationally identical to the SAS algorithm used for the NRD.
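The set-based flavor of such an SQL implementation can be illustrated with Python's built-in sqlite3 module. The two categories and ICD-9 prefixes below are a tiny hypothetical subset for illustration only, not the validated code tables from the published algorithm.

```python
import sqlite3

# Hypothetical, heavily simplified ICD-9 prefix map for two categories;
# the published algorithm covers ~30 Elixhauser categories and ICD-10 too.
PREFIXES = {"CHF": ["428"], "DIAB": ["2500", "2501"]}

def flag_comorbidities(diagnoses):
    """diagnoses: iterable of (record_id, icd_code) pairs.
    Returns {category: sorted ids of records with >= 1 matching code}."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE dx (record_id INTEGER, icd TEXT)")
    con.executemany("INSERT INTO dx VALUES (?, ?)", list(diagnoses))
    flags = {}
    for cat, prefs in PREFIXES.items():
        # Set-based flagging: one query per category, no per-record looping.
        where = " OR ".join(f"icd LIKE '{p}%'" for p in prefs)
        rows = con.execute(
            f"SELECT DISTINCT record_id FROM dx WHERE {where}").fetchall()
        flags[cat] = sorted(r[0] for r in rows)
    return flags
```

Pushing the prefix matching into the database engine, rather than iterating over records in application code, is what makes this style of implementation fast on millions of rows.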
This algorithm may be useful where large datasets must be preprocessed in a relational database environment and comorbidities determined before statistical analysis. A validated SQL procedure to calculate Elixhauser comorbidities and the van Walraven index from ICD-9 or ICD-10 discharge diagnosis codes has been published. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1374984-octet-baryon-magnetic-moments-from-lattice-qcd-approaching-experiment-from-three-flavor-symmetric-point','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1374984-octet-baryon-magnetic-moments-from-lattice-qcd-approaching-experiment-from-three-flavor-symmetric-point"><span>Octet baryon magnetic moments from lattice QCD: Approaching experiment from a three-flavor symmetric point</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Parreño, Assumpta; Savage, Martin J.; Tiburzi, Brian C.; ...</p> <p>2017-06-23</p> <p>We used lattice QCD calculations with background magnetic fields to determine the magnetic moments of the octet baryons. Computations are performed at the physical value of the strange quark mass, and two values of the light quark mass: one corresponding to the SU(3) flavor-symmetric point, where the pion mass is mπ ≈ 800 MeV, and the other corresponding to a pion mass mπ ≈ 450 MeV. The moments are found to exhibit only mild pion-mass dependence when expressed in terms of appropriately chosen magneton units, the natural baryon magneton.
This suggests that simple extrapolations can be used to determine magnetic moments at the physical point, and extrapolated results are found to agree with experiment within uncertainties. A curious pattern is revealed among the anomalous baryon magnetic moments which is linked to the constituent quark model; however, careful scrutiny exposes additional features. Relations expected to hold in the large-Nc limit of QCD are studied; in one case, the quark model prediction is significantly closer to the extracted values than the large-Nc prediction. The magnetically coupled Λ-Σ0 system is treated in detail at the SU(3)F point, with the lattice QCD results comparing favorably with predictions based on SU(3)F symmetry. Our analysis enables the first extraction of the isovector transition magnetic polarizability. The possibility that large magnetic fields stabilize strange matter is explored, but such a scenario is found to be unlikely.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18600837','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18600837"><span>Implementation of an adaptive controller for the startup and steady-state running of a biomethanation process operated in the CSTR mode.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Renard, P; Van Breusegem, V; Nguyen, M T; Naveau, H; Nyns, E J</p> <p>1991-10-20</p> <p>An adaptive control algorithm has been implemented on a biomethanation process to maintain propionate concentration, a stable variable, at a given low value by steering the dilution rate. It was thereby expected to ensure the stability of the process during startup and steady-state running with acceptable performance.
The methane pilot reactor was operated in the completely mixed, once-through mode and computer-controlled for 161 days. The results provided real-life validation of the adaptive control algorithm and documented the expected stability and acceptable performance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19890035956&hterms=bending+moment&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dbending%2Bmoment','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19890035956&hterms=bending+moment&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dbending%2Bmoment"><span>Estimation of blade airloads from rotor blade bending moments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bousman, William G.</p> <p>1987-01-01</p> <p>This paper presents a method for the estimation of blade airloads based on measurements of flap bending moments. In this procedure, the blade's rotating modes in vacuum are calculated, and the airloads are expressed as an algebraic sum involving the mode shapes, modal amplitudes, mass distribution, and frequency properties. The method was validated, using ten modes and twenty measurement stations, by comparing the calculated airload distribution with the original wind tunnel measurements. Good agreement between the predicted and the measured airloads was found up to 0.90 R, but the agreement degraded towards the blade tip.
The method is shown to be quite robust to the type of experimental problems that could be expected to occur in the testing of full-scale and model-scale rotors.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1221934-comparative-study-magnetic-properties-nanoparticles-high-frequency-heat-dissipation-conventional-magnetometry','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1221934-comparative-study-magnetic-properties-nanoparticles-high-frequency-heat-dissipation-conventional-magnetometry"><span>Comparative Study of Magnetic Properties of Nanoparticles by High-Frequency Heat Dissipation and Conventional Magnetometry</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Malik, V.; Goodwill, J.; Mallapragada, S.; ...</p> <p>2014-11-13</p> <p>The rate of heating of a water-based colloid of uniformly sized 15 nm magnetic nanoparticles by a high-amplitude, high-frequency AC magnetic field generated by a resonant LC circuit (nanoTherics Magnetherm) was measured. The results are analyzed in terms of the specific energy absorption rate (SAR). By fitting the field-amplitude and frequency dependences of the SAR to linear response theory, the magnetic moment per particle was extracted. The value of the magnetic moment was independently evaluated from DC magnetization measurements (Quantum Design MPMS) of a frozen colloid by fitting the field-dependent magnetization to a Langevin function. The two methods produced similar results, which are compared to the theoretical expectation for this particle size.
Additionally, analysis of the SAR curves yielded the effective relaxation time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JMMM..448..274P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JMMM..448..274P"><span>Structure and magnetization of Co4N thin film</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pandey, Nidhi; Gupta, Mukul; Gupta, Rachana; Rajput, Parasmani; Stahn, Jochen</p> <p>2018-02-01</p> <p>In this work, we studied the local structure and the magnetization of Co4N thin films deposited by a reactive DC magnetron sputtering process. The interstitial incorporation of N atoms in an fcc Co lattice is expected to expand the structure. This expansion yields interesting magnetic properties, e.g., a larger magnetic moment than that of Co and a very high spin polarization ratio in Co4N. By optimizing the growth conditions, we prepared a Co4N film with a lattice parameter close to its theoretically predicted value. The N concentration was measured using secondary ion mass spectroscopy.
Detailed magnetization measurements using a bulk magnetization method and polarized neutron reflectivity confirm that the magnetic moment of Co in Co4N is higher than that of pure Co.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1409054-infrared-laser-stark-spectroscopy-hydroxymethoxycarbene-nanodroplets','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1409054-infrared-laser-stark-spectroscopy-hydroxymethoxycarbene-nanodroplets"><span>Infrared laser Stark spectroscopy of hydroxymethoxycarbene in 4He nanodroplets</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Broderick, Bernadette M.; Moradi, Christopher P.; Douberly, Gary E.</p> <p>2015-09-07</p> <p>Hydroxymethoxycarbene, CH3OCOH, was produced via pyrolysis of monomethyl oxalate and subsequently isolated in 4He nanodroplets. Infrared laser spectroscopy reveals two rotationally resolved a,b-hybrid bands in the OH-stretch region, which are assigned to trans,trans- and cis,trans-rotamers. Stark spectroscopy of the trans,trans-OH stretch band provides the a-axis inertial component of the dipole moment, namely μa = 0.62(7) D.
Here, the computed equilibrium dipole moment agrees well with the expectation value determined from experiment, consistent with a semi-rigid CH3OCOH backbone; a potential energy scan at the B3LYP/cc-pVTZ level of theory reveals substantial conformer interconversion barriers of ≈17 kcal/mol.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA058142','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA058142"><span>Correlation of Experimental and Theoretical Steady-State Spinning Motion for a Current Fighter Airplane Using Rotation-Balance Aerodynamic Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1978-07-01</p> <p>were input into the computer program. The program was numerically integrated with time by using a fourth-order Runge-Kutta integration algorithm with...equations of motion are numerically integrated to provide time histories of the aircraft spinning motion. A.2 EQUATIONS DEFINING THE FORCE AND MOMENT...by Cy or Cn. 50 AEDC-TR-77-126 A.4 EQUATIONS FOR TRANSFERRING AERODYNAMIC DATA INPUTS TO THE PROPER HORIZONTAL CENTER OF GRAVITY</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA616257','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA616257"><span>Advancements of In-Flight Mass Moment of Inertia and Structural Deflection Algorithms for Satellite Attitude Simulators</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2015-03-26</p> <p>pendulum [15] to estimate the MOI.
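The pendulum relation underlying this moment-of-inertia estimate can be sketched as follows; this is the textbook small-angle compound-pendulum formula I = m g d T^2 / (4 pi^2), with d the pivot-to-center-of-mass distance, not the report's full algorithm (which also smooths the estimated oscillation frequency).

```python
import math

def moi_from_period(mass_kg, pivot_to_cm_m, period_s, g=9.81):
    """Mass moment of inertia about the pivot of a compound pendulum,
    inferred from its small-angle oscillation period T = 2*pi*sqrt(I/(m*g*d)).
    Illustrative sketch of the generic relation, not the report's method."""
    return mass_kg * g * pivot_to_cm_m * period_s**2 / (4 * math.pi**2)
```

Estimating the period from many oscillations, rather than comparing noisy rate sensor data against Euler's equations instant by instant, is what gives this approach its robustness.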
The benefit of this methodology is that instead of a direct comparison to Euler's equations when using an on-board ACS...the equations of pendulum motion are evaluated to estimate the resistance to angular acceleration. Instead of attempting to compare noisy...sensor data instantaneously when using on-board ACS data, the pendulum oscillation frequency is estimated, which can be globally smoothed for highly</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10063E..0WP','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10063E..0WP"><span>Optical diagnosis of cervical cancer by higher order spectra and boosting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Barman, Ritwik; Pratiher, Souvik; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.</p> <p>2017-03-01</p> <p>In this contribution, we report the application of higher order statistical moments using decision tree and ensemble-based learning methodology for the development of diagnostic algorithms for optical diagnosis of cancer. The classification results were compared to those obtained with independent feature extractors such as linear discriminant analysis (LDA).
This methodology, using higher-order statistics with boosting as the classifier, achieves higher specificity and sensitivity while being much faster than other time-frequency-domain-based methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19950020734','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19950020734"><span>Development of numerical methods for overset grids with applications for the integrated Space Shuttle vehicle</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chan, William M.</p> <p>1995-01-01</p> <p>Algorithms and computer code developments were performed for the overset grid approach to solving computational fluid dynamics problems. The techniques developed are applicable to compressible Navier-Stokes flow for general complex configurations. The computer codes developed were tested on different complex configurations, with the Space Shuttle launch vehicle configuration as the primary test bed. General, efficient, and user-friendly codes were produced for grid generation, flow solution, and force and moment computation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADP014212','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADP014212"><span>Application of Two-Dimensional AWE Algorithm in Training Multi-Dimensional Neural Network Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2003-07-01</p> <p>hybrid scheme. The general neural network method (Table 3.1). The training process of the software "Neuralmodeler" is shown in Fig. 3.2...engineering.
Artificial neural networks (ANNs) have emerged as a powerful technique for modeling. Training a neural network model is the key of general neural...coefficients am, the derivatives of matrix I have to be generated. A closed form...method of moments (MoM). The variables in the model are frequency</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AIPC.1650..453R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AIPC.1650..453R"><span>MUSIC imaging method for electromagnetic inspection of composite multi-layers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rodeghiero, Giacomo; Ding, Ping-Ping; Zhong, Yu; Lambert, Marc; Lesselier, Dominique</p> <p>2015-03-01</p> <p>A first-order asymptotic formulation of the electric field scattered by a small inclusion (with respect to the wavelength in the dielectric regime or to the skin depth in the conductive regime) embedded in a composite material is given. It is validated by comparison with results obtained using a Method of Moments (MoM). A non-iterative MUltiple SIgnal Classification (MUSIC) imaging method is utilized in the same configuration to locate the position of small defects.
The effectiveness of the imaging algorithm is illustrated through some numerical examples.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/AD1011055','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/AD1011055"><span>Efficient Estimation of Mutual Information for Strongly Dependent Variables</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2015-05-11</p> <p>the two possibilities: for a fixed dimension d and nearest-neighbor parameter k, we find a constant αk,d, such that if V̄(i)/V(i) < αk,d, then...also compare the results to several baseline estimators: KSG (Kraskov et al., 2004), generalized nearest neighbor graph (GNN) (Pál et al., 2010...Amaury Lendasse, and Francesco Corona. A boundary corrected expansion of the moments of nearest neighbor distributions. Random Struct. Algorithms</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9787E..1JA','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9787E..1JA"><span>A utility/cost analysis of breast cancer risk prediction algorithms</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Abbey, Craig K.; Wu, Yirong; Burnside, Elizabeth S.; Wunderlich, Adam; Samuelson, Frank W.; Boone, John M.</p> <p>2016-03-01</p> <p>Breast cancer risk prediction algorithms are used to identify subpopulations that are at increased risk for developing breast cancer. They can be based on many different sources of data, such as demographics, relatives with cancer, gene expression, and various phenotypic features such as breast density.
Women who are identified as high risk may undergo a more extensive (and expensive) screening process that includes MRI or ultrasound imaging in addition to the standard full-field digital mammography (FFDM) exam. Given that there are many ways that risk prediction may be accomplished, it is of interest to evaluate them in terms of expected cost, which includes the costs of diagnostic outcomes. In this work we perform an expected-cost analysis of risk prediction algorithms based on a published model that includes the costs associated with diagnostic outcomes (true-positive, false-positive, etc.). We assume the existence of a standard screening method and an enhanced screening method with higher scan cost, higher sensitivity, and lower specificity. We then assess the expected cost of using a risk prediction algorithm to determine who gets the enhanced screening method, under the strong assumption that risk and diagnostic performance are independent. We find that if risk prediction leads to a high enough positive predictive value, it will be cost-effective regardless of the size of the subpopulation.
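The routing logic just described can be sketched as a short expected-cost calculation. The structure follows the setup above (scan cost plus costs of diagnostic outcomes, with risk and diagnostic performance independent), but every number and parameter name below is illustrative, not taken from the published model.

```python
def expected_cost(p, h, f, std, enh, c_fn, c_fp):
    """Expected per-person cost of risk-stratified screening.

    p: disease prevalence; h, f: hit-rate and false-alarm rate of the risk
    predictor (fraction of diseased / non-diseased routed to enhanced
    screening); std, enh: (scan_cost, sensitivity, specificity) tuples for
    the two screening methods; c_fn, c_fp: costs attached to missed cancers
    and false-positive workups. All values are hypothetical placeholders."""
    def arm(scan, sens, spec, diseased):
        # Scan cost plus the expected cost of the diagnostic error mode.
        miss = (1.0 - sens) * c_fn if diseased else (1.0 - spec) * c_fp
        return scan + miss
    cost_d = h * arm(*enh, True) + (1 - h) * arm(*std, True)
    cost_n = f * arm(*enh, False) + (1 - f) * arm(*std, False)
    return p * cost_d + (1 - p) * cost_n
```

Because the expression is linear in h and f, fixing a total cost traces out a line in the (f, h) plane, which is the iso-cost-contour observation made in this abstract.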
Furthermore, in terms of the hit-rate and false-alarm rate of the risk prediction algorithm, iso-cost contours are lines whose slope is determined by properties of the available diagnostic systems for screening.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1994JChPh.101..734N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1994JChPh.101..734N"><span>A structure adapted multipole method for electrostatic interactions in protein dynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Niedermeier, Christoph; Tavan, Paul</p> <p>1994-07-01</p> <p>We present an algorithm for rapid approximate evaluation of electrostatic interactions in molecular dynamics simulations of proteins. Traditional algorithms require computational work of the order O(N2) for a system of N particles. Truncation methods, which try to avoid that effort, entail intolerably large errors in forces, energies, and other observables. Hierarchical multipole expansion algorithms, which can account for the electrostatics to numerical accuracy, scale with O(N log N), or even with O(N) if augmented with a sophisticated scheme for summing up forces. To further reduce the computational effort, we propose an algorithm that also uses a hierarchical multipole scheme but considers only the first two multipole moments (i.e., charges and dipoles). Our strategy is based on the consideration that full numerical accuracy may not be necessary to reproduce protein dynamics with sufficient correctness. As opposed to previous methods, our scheme for hierarchical decomposition is adjusted to the structural and dynamical features of the particular protein considered, rather than chosen rigidly as a cubic grid.
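Keeping only the first two multipole moments can be illustrated for a single cluster of charges: the sketch below compares a monopole-plus-dipole far-field approximation of the electrostatic potential with the direct O(N) sum (Gaussian-style units with unit prefactor; this is the basic expansion the method truncates, not the paper's full hierarchical algorithm).

```python
import numpy as np

def far_potential(charges, positions, r, center=None):
    """Potential at point r from a charge cluster, keeping only the cluster's
    total charge (monopole) and dipole moment about its centroid."""
    pos = np.asarray(positions, float)
    q = np.asarray(charges, float)
    c = pos.mean(axis=0) if center is None else np.asarray(center, float)
    d = r - c
    dist = np.linalg.norm(d)
    Q = q.sum()                                 # monopole moment
    p = ((pos - c) * q[:, None]).sum(axis=0)    # dipole moment about c
    return Q / dist + p @ d / dist**3

def exact_potential(charges, positions, r):
    """Direct pairwise sum over all N charges, for comparison."""
    pos = np.asarray(positions, float)
    return sum(qi / np.linalg.norm(r - ri) for qi, ri in zip(charges, pos))
```

The neglected quadrupole and higher terms fall off as (a/r)^2 for a cluster of radius a, which is why two moments per cluster can suffice at the accuracy level protein dynamics requires.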
As compared to truncation methods we manage to reduce errors in the computation of electrostatic forces by a factor of 10 with only marginal additional effort.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container --> <footer> <nav> <ul class="links"> <li><a href="/sitemap.html">Site Map</a></li> <li><a href="/website-policies.html">Website Policies</a></li> <li><a href="https://www.energy.gov/vulnerability-disclosure-policy" target="_blank">Vulnerability Disclosure Program</a></li> <li><a href="/contact.html">Contact Us</a></li> </ul> </nav> </footer> <script type="text/javascript"><!-- // var lastDiv = ""; function showDiv(divName) { // hide last div if (lastDiv) { document.getElementById(lastDiv).className = "hiddenDiv"; } //if value of the box is not nothing and an object with that name exists, then change the class if (divName && document.getElementById(divName)) { document.getElementById(divName).className = "visibleDiv"; lastDiv = divName; } } //--> </script> <script> /** * Function that tracks a click on an outbound link in Google Analytics. * This function takes a valid URL string as an argument, and uses that URL string * as the event label.
*/ var trackOutboundLink = function(url,collectionCode) { try { h = window.open(url); setTimeout(function() { ga('send', 'event', 'topic-page-click-through', collectionCode, url); }, 1000); } catch(err){} }; </script> <script> showDiv('page_1') </script> </body> </html>