Contextual Multi-armed Bandits under Feature Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yun, Seyoung; Nam, Jun Hyun; Mo, Sangwoo
We study contextual multi-armed bandit problems under linear realizability on rewards and uncertainty (or noise) on features. For the case of identical noise on features across actions, we propose an algorithm, coined NLinRel, having an O(T^{7/8}(log(dT) + K√d)) regret bound for T rounds, K actions, and d-dimensional feature vectors. Next, for the case of non-identical noise, we observe that popular linear hypotheses, including NLinRel, cannot achieve such a sub-linear regret. Instead, under the assumption of Gaussian feature vectors, we prove that a greedy algorithm has an O(T^{2/3}√(log d)) regret bound with respect to the optimal linear hypothesis. Utilizing our theoretical understanding of the Gaussian case, we also design a practical variant of NLinRel, coined Universal-NLinRel, for arbitrary feature distributions. It first runs NLinRel to find the 'true' coefficient vector using feature uncertainties and then adjusts it to minimize its regret using the statistical feature information. We validate the performance of Universal-NLinRel on both synthetic and real-world datasets.
Appraisal of jump distributions in ensemble-based sampling algorithms
NASA Astrophysics Data System (ADS)
Dejanic, Sanda; Scheidegger, Andreas; Rieckermann, Jörg; Albert, Carlo
2017-04-01
Sampling Bayesian posteriors of model parameters is often required for making model-based probabilistic predictions. For complex environmental models, standard Markov chain Monte Carlo (MCMC) methods are often infeasible because they require too many sequential model runs. Therefore, we focused on ensemble methods that use many Markov chains in parallel, since they can be run on modern cluster architectures. Little is known about how to choose the best-performing sampler for a given application. A poor choice can lead to an inappropriate representation of posterior knowledge. We assessed two different jump moves, the stretch and the differential evolution move, underlying, respectively, the software packages EMCEE and DREAM, which are popular in different scientific communities. For the assessment, we used analytical posteriors with features as they often occur in real posteriors, namely high dimensionality, strong non-linear correlations or multimodality. For posteriors with non-linear features, standard convergence diagnostics based on sample means can be insufficient. Therefore, we resorted to an entropy-based convergence measure. We assessed the samplers by means of their convergence speed, robustness and effective sample sizes. For posteriors with strongly non-linear features, we found that the stretch move outperforms the differential evolution move with respect to all three aspects.
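To make the comparison concrete, here is a minimal Python sketch of one serial sweep of the stretch move discussed above (the Goodman-Weare form that EMCEE builds on); `a` is the usual stretch scale and the target is supplied as a log-density. This is an illustration, not the EMCEE implementation.

```python
import numpy as np

def stretch_move(walkers, log_prob, a=2.0, rng=None):
    """One serial sweep of the Goodman-Weare stretch move over an ensemble.

    Proposal for walker k with partner j: y = x_j + z * (x_k - x_j),
    where z has density proportional to 1/sqrt(z) on [1/a, a];
    accept with probability min(1, z**(d-1) * p(y) / p(x_k)).
    """
    rng = rng or np.random.default_rng()
    n, d = walkers.shape
    out = walkers.copy()
    for k in range(n):
        j = int(rng.integers(n - 1))
        if j >= k:                      # partner drawn uniformly, excluding k
            j += 1
        z = (1.0 + (a - 1.0) * rng.random()) ** 2 / a   # inverse-CDF draw of z
        y = out[j] + z * (out[k] - out[j])
        if np.log(rng.random()) < (d - 1) * np.log(z) + log_prob(y) - log_prob(out[k]):
            out[k] = y
    return out
```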
Interactions between hyporheic flow produced by stream meanders, bars, and dunes
Stonedahl, Susa H.; Harvey, Judson W.; Packman, Aaron I.
2013-01-01
Stream channel morphology from grain-scale roughness to large meanders drives hyporheic exchange flow. In practice, it is difficult to model hyporheic flow over the wide spectrum of topographic features typically found in rivers. As a result, many studies only characterize isolated exchange processes at a single spatial scale. In this work, we simulated hyporheic flows induced by a range of geomorphic features including meanders, bars and dunes in sand bed streams. Twenty cases were examined with 5 degrees of river meandering. Each meandering river model was run initially without any small topographic features. Models were run again after superimposing only bars and then only dunes, and then run a final time after including all scales of topographic features. This allowed us to investigate the relative importance and interactions between flows induced by different scales of topography. We found that dunes typically contributed more to hyporheic exchange than bars and meanders. Furthermore, our simulations show that the volume of water exchanged and the distributions of hyporheic residence times resulting from various scales of topographic features are close to, but not linearly additive. These findings can potentially be used to develop scaling laws for hyporheic flow that can be widely applied in streams and rivers.
1975-05-01
... code numbers. Assignment of specific geometric properties and physical dimensions to the linear features can be done by a ...
Investigating Mars: Pavonis Mons
2017-11-06
This image shows part of the eastern flank of Pavonis Mons. Surface lava flows run downhill from the upper left of the image towards the bottom right. Perpendicular to that trend are several linear features. These are faults that encircle the volcano and also run along the linear trend through the three Tharsis volcanoes. This image shows a collapsed lava tube where a flow followed the trend of a graben and then "turned" to flow downhill. Graben are linear features, so lava flows in them are linear. Where the lava flow is running along the surface of the volcano it has sinuosity just like a river. The mode of formation of a lava tube starts with a surface lava flow. The sides and top of the flow cool faster than the center, eventually forming a solid, non-flowing cover over the still-flowing lava. The surface flow may have followed the deeper fault block graben (a lower surface than the surroundings). Once the flow stops, the empty space remains lower than the surroundings, and collapse of the top of the tube starts in small pits which coalesce into linear features. Pavonis Mons is one of the three aligned Tharsis volcanoes. The four Tharsis volcanoes are Ascraeus Mons, Pavonis Mons, Arsia Mons, and Olympus Mons. All four are shield-type volcanoes. Shield volcanoes are formed by lava flows originating near or at the summit, building up layer upon layer of lava. The Hawaiian islands on Earth are shield volcanoes. The three aligned volcanoes are located along a topographic rise in the Tharsis region. Along this trend there are increased tectonic features and additional lava flows. Pavonis Mons is the smallest of the four volcanoes, rising 14 km above the mean Mars surface level with a width of 375 km. It has a complex summit caldera, with the smaller caldera deeper than the larger caldera. Like most shield volcanoes the surface has a low profile. In the case of Pavonis Mons the average slope is only 4 degrees. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times. It holds the record for the longest working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 32751 Latitude: 0.338236 Longitude: 248.74 Instrument: VIS Captured: 2009-05-03 01:57 https://photojournal.jpl.nasa.gov/catalog/PIA22022
Thermospheric dynamics - A system theory approach
NASA Technical Reports Server (NTRS)
Codrescu, M.; Forbes, J. M.; Roble, R. G.
1990-01-01
A system theory approach to thermospheric modeling is developed, based upon a linearization method which is capable of preserving nonlinear features of a dynamical system. The method is tested using a large, nonlinear, time-varying system, namely the thermospheric general circulation model (TGCM) of the National Center for Atmospheric Research. In the linearized version an equivalent system, defined for one of the desired TGCM output variables, is characterized by a set of response functions that is constructed from corresponding quasi-steady state and unit sample response functions. The linearized version of the system runs on a personal computer and produces an approximation of the desired TGCM output field height profile at a given geographic location.
Control of the NASA Langley 16-Foot Transonic Tunnel with the Self-Organizing Feature Map
NASA Technical Reports Server (NTRS)
Motter, Mark A.
1998-01-01
A predictive, multiple model control strategy is developed based on an ensemble of local linear models of the nonlinear system dynamics for a transonic wind tunnel. The local linear models are estimated directly from the weights of a Self Organizing Feature Map (SOFM). Local linear modeling of nonlinear autonomous systems with the SOFM is extended to a control framework where the modeled system is nonautonomous, driven by an exogenous input. This extension to a control framework is based on the consideration of a finite number of subregions in the control space. Multiple self organizing feature maps collectively model the global response of the wind tunnel to a finite set of representative prototype controls. These prototype controls partition the control space and incorporate experimental knowledge gained from decades of operation. Each SOFM models the combination of the tunnel with one of the representative controls, over the entire range of operation. The SOFM based linear models are used to predict the tunnel response to a larger family of control sequences which are clustered on the representative prototypes. The control sequence which corresponds to the prediction that best satisfies the requirements on the system output is applied as the external driving signal. Each SOFM provides a codebook representation of the tunnel dynamics corresponding to a prototype control. Different dynamic regimes are organized into topological neighborhoods where the adjacent entries in the codebook represent the minimization of a similarity metric which is the essence of the self organizing feature of the map. Thus, the SOFM is additionally employed to identify the local dynamical regime, and consequently implements a switching scheme that selects the best available model for the applied control. Experimental results are presented from controlling the wind tunnel with the proposed method during operational runs in which strict research requirements on the control of the Mach number were met. Comparison to similar runs under the same conditions, with the tunnel controlled by either the existing controller or an expert operator, indicates the superiority of the method.
De Beer, Maarten; Lynen, Fréderic; Chen, Kai; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat
2010-03-01
Stationary-phase optimized selectivity liquid chromatography (SOS-LC) is a tool in reversed-phase LC (RP-LC) to optimize the selectivity for a given separation by combining stationary phases in a multisegment column. The presently (commercially) available SOS-LC optimization procedure and algorithm are only applicable to isocratic analyses. Step gradient SOS-LC has been developed, but this is still not very elegant for the analysis of complex mixtures composed of components covering a broad hydrophobicity range. A linear gradient prediction algorithm has been developed allowing one to apply SOS-LC as a generic RP-LC optimization method. The algorithm allows operation in isocratic, stepwise, and linear gradient run modes. The features of SOS-LC in the linear gradient mode are demonstrated by means of a mixture of 13 steroids, whereby baseline separation is predicted and experimentally demonstrated.
PyFDAP: automated analysis of fluorescence decay after photoconversion (FDAP) experiments.
Bläßle, Alexander; Müller, Patrick
2015-03-15
We developed the graphical user interface PyFDAP for the fitting of linear and non-linear decay functions to data from fluorescence decay after photoconversion (FDAP) experiments. PyFDAP structures and analyses large FDAP datasets and features multiple fitting and plotting options. PyFDAP was written in Python and runs on Ubuntu Linux, Mac OS X and Microsoft Windows operating systems. The software, a user guide and a test FDAP dataset are freely available for download from http://people.tuebingen.mpg.de/mueller-lab. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Jones, Drew R; Wu, Zhiping; Chauhan, Dharminder; Anderson, Kenneth C; Peng, Junmin
2014-04-01
Global metabolomics relies on highly reproducible and sensitive detection of a wide range of metabolites in biological samples. Here we report the optimization of metabolome analysis by nanoflow ultraperformance liquid chromatography coupled to high-resolution orbitrap mass spectrometry. Reliable peak features were extracted from the LC-MS runs based on mandatory detection in duplicates and additional noise filtering according to blank injections. The run-to-run variation in peak area showed a median of 14%, and the false discovery rate during a mock comparison was evaluated. To maximize the number of peak features identified, we systematically characterized the effect of sample loading amount, gradient length, and MS resolution. The number of features initially rose and later reached a plateau as a function of sample amount, fitting a hyperbolic curve. Longer gradients improved unique feature detection in part by time-resolving isobaric species. Increasing the MS resolution up to 120000 also aided in the differentiation of near isobaric metabolites, but higher MS resolution reduced the data acquisition rate and conferred no benefits, as predicted from a theoretical simulation of possible metabolites. Moreover, a biphasic LC gradient allowed even distribution of peak features across the elution, yielding markedly more peak features than the linear gradient. Using this robust nUPLC-HRMS platform, we were able to consistently analyze ~6500 metabolite features in a single 60 min gradient from 2 mg of yeast, equivalent to ~50 million cells. We applied this optimized method in a case study of drug (bortezomib) resistant and drug-sensitive multiple myeloma cells. Overall, 18% of metabolite features were matched to KEGG identifiers, enabling pathway enrichment analysis. Principal component analysis and heat map data correctly clustered isogenic phenotypes, highlighting the potential for hundreds of small molecule biomarkers of cancer drug resistance.
NASA Technical Reports Server (NTRS)
Walley, J. L.; Nunes, A. C.; Clounch, J. L.; Russell, C. K.
2007-01-01
This study presents examples and considerations for differentiating linear radiographic indications produced by gas tungsten arc welds in a 0.05-in-thick sheet of Inconel 718. A series of welds with different structural features, including the enigma indications and other defect indications such as lack of fusion and penetration, were produced, radiographed, and examined metallographically. The enigma indications were produced by a large columnar grain running along the center of the weld nugget occurring when the weld speed was reduced sufficiently below nominal. Examples of respective indications, including the effect of changing the x-ray source location, are presented as an aid to differentiation. Enigma, nominal, and hot-weld specimens were tensile tested to demonstrate the harmlessness of the enigma indication. Statistical analysis showed that there is no difference between the strengths of these three weld conditions.
Abbasian Ardakani, Ali; Rajaee, Jila; Khoei, Samideh
2017-11-01
Hyperthermia and radiation have the ability to induce structural and morphological changes at both the macroscopic and microscopic level. Normal and damaged cells have different textures but may be perceived by the human eye as having the same texture. To explore the potential of texture analysis based on the run-length matrix, a total of 32 sphere images for each group and treatment regime were used in this study. Cells were subjected to treatment with different doses of 6 MeV electron radiation (0, 2, 4 and 6 Gy), hyperthermia (at 43 °C for 0, 30, 60 and 90 min) and radiation + hyperthermia (at 43 °C for 30 min with 2, 4 and 6 Gy doses), respectively. Twenty run-length matrix (RLM) features were extracted as descriptors for each selected region of interest for texture analysis. Linear discriminant analysis was employed to transform the raw data to lower-dimensional spaces and increase discriminative power. The features were classified by the first nearest neighbor classifier. RLM features showed the best performance, with sensitivity, specificity, accuracy, positive predictive value (PPV) and negative predictive value (NPV) of 100% between the 0 and 6 Gy radiation, 0 and 6 Gy radiation + hyperthermia, 0 and 90 min, and 30 and 90 min hyperthermia groups. The area under the receiver operating characteristic curve was 1 for these groups. RLM features have a high potential to characterize cell changes during different treatment regimes.
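To illustrate the run-length features this study relies on, the sketch below builds a horizontal gray-level run-length matrix and computes one of the standard descriptors (long-run emphasis). It assumes a pre-quantized integer image and is illustrative, not the authors' code.

```python
import numpy as np

def glrlm_horizontal(img, levels):
    """Gray-level run-length matrix for horizontal runs: entry [g, r-1]
    counts maximal runs of length r at (quantized) gray level g."""
    M = np.zeros((levels, img.shape[1]), dtype=int)
    for row in img:
        g, run = row[0], 1
        for v in row[1:]:
            if v == g:
                run += 1
            else:
                M[g, run - 1] += 1
                g, run = v, 1
        M[g, run - 1] += 1
    return M

def long_run_emphasis(M):
    """Sum of M[g, r] * r^2, normalized by the total number of runs;
    large values indicate coarse texture."""
    r = np.arange(1, M.shape[1] + 1)
    return (M * r ** 2).sum() / M.sum()
```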
Investigating Mars: Pavonis Mons
2017-11-03
This image shows part of the southeastern flank of Pavonis Mons. Surface lava flows run downhill from the top left of the image to the bottom right. Perpendicular to that trend are several linear features. These are faults that encircle the volcano and also run along the linear trend through the three Tharsis volcanoes. This image illustrates how subsurface lava tubes collapse into the free space of the empty tube. Just to the top of the deepest depression is a series of circular pits. The pits coalesce into a linear feature near the left side of the deepest depression. The mode of formation of a lava tube starts with a surface lava flow. The sides and top of the flow cool faster than the center, eventually forming a solid, non-flowing cover over the still-flowing lava. The surface flow may have followed the deeper fault block graben (a lower surface than the surroundings). Once the flow stops, the empty space remains lower than the surroundings, and collapse of the top of the tube starts in small pits which coalesce into linear features. Pavonis Mons is one of the three aligned Tharsis volcanoes. The four Tharsis volcanoes are Ascraeus Mons, Pavonis Mons, Arsia Mons, and Olympus Mons. All four are shield-type volcanoes. Shield volcanoes are formed by lava flows originating near or at the summit, building up layer upon layer of lava. The Hawaiian islands on Earth are shield volcanoes. The three aligned volcanoes are located along a topographic rise in the Tharsis region. Along this trend there are increased tectonic features and additional lava flows. Pavonis Mons is the smallest of the four volcanoes, rising 14 km above the mean Mars surface level with a width of 375 km. It has a complex summit caldera, with the smaller caldera deeper than the larger caldera. Like most shield volcanoes the surface has a low profile. In the case of Pavonis Mons the average slope is only 4 degrees. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times. It holds the record for the longest working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 31330 Latitude: -1.26587 Longitude: 247.705 Instrument: VIS Captured: 2009-01-05 23:32 https://photojournal.jpl.nasa.gov/catalog/PIA22021
1979-07-11
Range : 312, 000 kilometers (195,000 miles) This photo of Ganymede (Ice Giant) was taken from Voyager 2 and shows features down to about 5 to 6 kilometers across. Different types of terrain common on Ganymede's surface are visible. The boundary of the largest region of dark ancient terrain on Ganymede can be seen to the east (right), revealing some of the light linear features which may be all that remains of a large ancient impact structure similar to the large ring structure on Callisto. The broad light regions running through the image are the typical grooved structures seen within another example of what might be evidence of large scale lateral motion in Ganymede's crust. The band of grooved terrain (about 100 kilometers wide) in this region appears to be offset by 50 kilometers or more on the left hand edge by a linear feature perpendicular to it. A feature similar to this one was previously discovered by Voyager 1. These are the first clear examples of strike-slip style faulting on any planet other than Earth. Many examples of craters of all ages can be seen in this image, ranging from fresh, bright ray craters to large, subdued circular markings thought to be the 'scars' of large ancient impacts that have been flatteded by glacier-like flows.
Pedestrian detection in crowded scenes with the histogram of gradients principle
NASA Astrophysics Data System (ADS)
Sidla, O.; Rosner, M.; Lypetskyy, Y.
2006-10-01
This paper describes a close to real-time, scale-invariant implementation of a pedestrian detector system based on the Histogram of Oriented Gradients (HOG) principle. Salient HOG features are first selected from a manually created, very large database of samples with an evolutionary optimization procedure that directly trains a polynomial Support Vector Machine (SVM). Real-time operation is achieved by a cascaded 2-step classifier which first uses a very fast linear SVM (with the same features as the polynomial SVM) to reject most of the irrelevant detections and then computes the decision function with a polynomial SVM on the remaining set of candidate detections. Scale invariance is achieved by running the detector of constant size on scaled versions of the original input images and by clustering the results over all resolutions. The pedestrian detection system has been implemented in two versions: i) full-body detection, and ii) upper-body-only detection. The latter is especially suited for very busy and crowded scenarios. On a state-of-the-art PC it is able to run at a frequency of 8-20 frames/sec.
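The cascade idea described above can be sketched as follows; this is a hypothetical scikit-learn rendering in which the models and the rejection threshold `t` are illustrative placeholders, not the paper's trained detector.

```python
import numpy as np
from sklearn.svm import LinearSVC, SVC

class CascadeDetector:
    """Two-stage cascade: a fast linear SVM rejects most candidate
    windows; a polynomial SVM on the same features decides the rest.
    The rejection threshold t trades detection rate against speed.
    """
    def __init__(self, t=-0.5):
        self.fast = LinearSVC()
        self.slow = SVC(kernel="poly", degree=3)
        self.t = t

    def fit(self, X, y):
        self.fast.fit(X, y)
        self.slow.fit(X, y)
        return self

    def predict(self, X):
        X = np.asarray(X)
        keep = self.fast.decision_function(X) > self.t   # cheap reject stage
        out = np.zeros(len(X), dtype=int)
        if keep.any():
            out[keep] = self.slow.predict(X[keep])       # expensive stage
        return out
```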
Model-Based Engine Control Architecture with an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Connolly, Joseph W.
2016-01-01
This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
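A generic predict/update cycle of an extended Kalman filter, the textbook form of what the paper applies to C-MAPSS40k, might look like the following sketch; the models `f`, `h` and their Jacobians are user-supplied, and nothing here is specific to the MBEC code.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of a generic extended Kalman filter.

    f(x, u): nonlinear state-transition model; h(x): measurement model;
    F_jac, H_jac: their Jacobians evaluated at the current estimate;
    Q, R: process and measurement noise covariances.
    """
    # Predict: push the state through the nonlinear model, and the
    # covariance through the local linearization.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # Update: correct the prediction with the measurement residual.
    H = H_jac(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```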
NASA Astrophysics Data System (ADS)
Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua
2017-04-01
Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. Aimed at solving this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running states. Then, the mixed-domain feature set is fed into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract intrinsic structure information of both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated within the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are fed into a pattern recognition algorithm to achieve running state identification. The effectiveness of the proposed method is verified by a running state identification case in a gearbox, and the results confirm the improved accuracy of the running state identification.
Running Economy from a Muscle Energetics Perspective.
Fletcher, Jared R; MacIntosh, Brian R
2017-01-01
The economy of running has traditionally been quantified from the mass-specific oxygen uptake; however, because fuel substrate usage varies with exercise intensity, it is more accurate to express running economy in units of metabolic energy. Fundamentally, the understanding of the major factors that influence the energy cost of running (Erun) can be obtained with this approach. Erun is determined by the energy needed for skeletal muscle contraction. Here, we approach the study of Erun from that perspective. The amount of energy needed for skeletal muscle contraction is dependent on the force, duration, shortening, shortening velocity, and length of the muscle. These factors therefore dictate the energy cost of running. It is understood that some determinants of the energy cost of running are not trainable: environmental factors, surface characteristics, and certain anthropometric features. Other factors affecting Erun are altered by training: other anthropometric features, muscle and tendon properties, and running mechanics. Here, the key features that dictate the energy cost during distance running are reviewed in the context of skeletal muscle energetics.
Fast Exact Search in Hamming Space With Multi-Index Hashing.
Norouzi, Mohammad; Punjani, Ali; Fleet, David J
2014-06-01
There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as this was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
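The pigeonhole argument behind multi-index hashing can be made concrete with a short sketch. The Python below is an illustrative rendering (it assumes the code length divides evenly into the m substrings), not the authors' implementation.

```python
from collections import defaultdict
from itertools import combinations

def split_code(code, m):
    """Split a binary code (tuple of 0/1) into m disjoint substrings;
    assumes len(code) is divisible by m."""
    step = len(code) // m
    return [code[i * step:(i + 1) * step] for i in range(m)]

def hamming_ball(bits, radius):
    """All bit tuples within the given Hamming radius of `bits`."""
    for d in range(radius + 1):
        for flips in combinations(range(len(bits)), d):
            out = list(bits)
            for i in flips:
                out[i] ^= 1
            yield tuple(out)

class MultiIndexHash:
    """Exact r-neighbor search: if two codes are within Hamming distance
    r, they agree to within r // m bits on at least one of m disjoint
    substrings (pigeonhole), so probing m small tables finds them all.
    """
    def __init__(self, codes, m):
        self.codes, self.m = codes, m
        self.tables = [defaultdict(list) for _ in range(m)]
        for idx, c in enumerate(codes):
            for table, sub in zip(self.tables, split_code(c, m)):
                table[sub].append(idx)

    def query(self, q, r):
        cand = set()
        for table, sub in zip(self.tables, split_code(q, self.m)):
            for probe in hamming_ball(sub, r // self.m):
                cand.update(table.get(probe, ()))
        # Verify each candidate against the full code.
        return [i for i in cand
                if sum(a != b for a, b in zip(q, self.codes[i])) <= r]
```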
Simple estimation of linear 1+1 D tsunami run-up
NASA Astrophysics Data System (ADS)
Fuentes, M.; Campos, J. A.; Riquelme, S.
2016-12-01
An analytical expression is derived for the linear run-up of any given initial wave generated over a sloping bathymetry. Due to the simplicity of the linear formulation, complex transformations are unnecessary, because the shoreline motion is obtained directly in terms of the initial wave. This analytical result not only supports the invariance of the maximum run-up between linear and non-linear theories, but also provides the time evolution of the shoreline motion and velocity. The results exhibit good agreement with the non-linear theory. The present formulation also allows computing the shoreline motion numerically from a customised initial waveform, including non-smooth functions. This is useful for numerical tests, laboratory experiments or realistic cases in which the initial disturbance might be retrieved from seismic data rather than using a theoretical model. It is also shown that the real case studied is consistent with the field observations.
Photographer : JPL Range : 312, 000 kilometers (195,000 miles) This photo of Ganymede (Ice Giant)
NASA Technical Reports Server (NTRS)
1979-01-01
Photographer: JPL Range: 312,000 kilometers (195,000 miles) This photo of Ganymede (Ice Giant) was taken from Voyager 2 and shows features down to about 5 to 6 kilometers across. Different types of terrain common on Ganymede's surface are visible. The boundary of the largest region of dark ancient terrain on Ganymede can be seen to the east (right), revealing some of the light linear features which may be all that remains of a large ancient impact structure similar to the large ring structure on Callisto. The broad light regions running through the image are the typical grooved structures seen within another example of what might be evidence of large scale lateral motion in Ganymede's crust. The band of grooved terrain (about 100 kilometers wide) in this region appears to be offset by 50 kilometers or more on the left hand edge by a linear feature perpendicular to it. A feature similar to this one was previously discovered by Voyager 1. These are the first clear examples of strike-slip style faulting on any planet other than Earth. Many examples of craters of all ages can be seen in this image, ranging from fresh, bright ray craters to large, subdued circular markings thought to be the 'scars' of large ancient impacts that have been flattened by glacier-like flows.
NASA Astrophysics Data System (ADS)
Österberg, Anders; Ivansen, Lars; Beyerl, Angela; Newman, Tom; Bowhill, Amanda; Sahouria, Emile; Schulze, Steffen
2007-10-01
Optical proximity correction (OPC) is widely used in wafer lithography to produce a printed image that best matches the design intent while optimizing CD control. OPC software applies corrections to the mask pattern data, but in general it does not compensate for the mask writer and mask process characteristics. The Sigma7500-II deep-UV laser mask writer projects the image of a programmable spatial light modulator (SLM) using partially coherent optics similar to wafer steppers, and the optical proximity effects of the mask writer are in principle correctable with established OPC methods. To enhance mask patterning, an embedded OPC function, LinearityEqualize™, has been developed for the Sigma7500-II that is transparent to the user and does not degrade mask throughput. It employs a Calibre™ rule-based OPC engine from Mentor Graphics, selected for the computational speed necessary for mask run-time execution. A multinode cluster computer applies optimized table-based CD corrections to polygonized pattern data that is then fractured into an internal writer format for subsequent data processing. This embedded proximity correction flattens the linearity behavior for all linewidths and pitches, which aims to improve the CD uniformity on production photomasks. Printing results show that the CD linearity is reduced to below 5 nm for linewidths down to 200 nm, both for clear and dark and for isolated and dense features, and that sub-resolution assist features (SRAF) are reliably printed down to 120 nm. This reduction of proximity effects for main mask features and the extension of the practical resolution for SRAFs expands the application space of DUV laser mask writing.
Application of machine vision to pup loaf bread evaluation
NASA Astrophysics Data System (ADS)
Zayas, Inna Y.; Chung, O. K.
1996-12-01
Intrinsic end-use quality of hard winter wheat breeding lines is routinely evaluated at the USDA, ARS, USGMRL, Hard Winter Wheat Quality Laboratory. The experimental baking test of pup loaves is the ultimate test for evaluating hard wheat quality. Computer vision was applied to develop an objective methodology for bread quality evaluation for the 1994 and 1995 crop wheat breeding line samples. Computer-extracted features for bread crumb grain were studied, using subimages (32 by 32 pixels) and features computed for the slices with different threshold settings. A subsampling grid was located with respect to the axis of symmetry of a slice to provide identical topological subimage information. Different ranking techniques were applied to the databases. Statistical analysis was run on the database with digital image and breadmaking features. Several ranking algorithms and data visualization techniques were employed to create a sensitive scale for porosity patterns of bread crumb. There were significant linear correlations between machine vision extracted features and breadmaking parameters. Crumb grain scores by human experts were correlated more highly with some image features than with breadmaking parameters.
NASA Technical Reports Server (NTRS)
Lowry, James D., Jr.
1999-01-01
The purpose of this archaeological research was two-fold: the location of Mayan sites and features in order to learn more about this cultural group, and the (cultural) preservation of these sites and features for the future, using Landsat Thematic Mapper (TM) images. Because the rainy season, traditionally at least, lasts about six months (about June to December), the time of year the image is acquired plays an important role in spectral reflectance. Images from 1986, 1995, and 1997 were selected because it was felt they would provide the best opportunity for success in layering different bands from different years together to attempt to see features not completely visible in any one year. False-color composites were created including bands 3, 4, and 5 using a mixture of years and bands. One particular combination that yielded tremendously interesting results included band 5 from 1997, band 4 from 1995, and band 3 from 1986. A number of straight linear features (probably Mayan causeways) that Dr. Sever believes were previously undiscovered run through the bajos. At this point, early indications are that this will be a successful method for locating "new" Mayan archaeological features in the Peten.
Will women outrun men in ultra-marathon road races from 50 km to 1,000 km?
Zingg, Matthias Alexander; Karner-Rezek, Klaus; Rosemann, Thomas; Knechtle, Beat; Lepers, Romuald; Rüst, Christoph Alexander
2014-01-01
It has been assumed that women would be able to outrun men in ultra-marathon running. The present study investigated the sex differences in running speed in ultra-marathons held worldwide from 50 km to 1,000 km. Changes in running speeds and the sex differences in running speeds in the annual fastest finishers in 50 km, 100 km, 200 km and 1,000 km events held worldwide from 1969-2012 were analysed using linear, non-linear and multi-level regression analyses. For the annual fastest and the annual ten fastest finishers, running speeds increased non-linearly in 50 km and 100 km, but not in 200 km and 1,000 km where running speeds remained unchanged for the annual fastest. The sex differences decreased non-linearly in 50 km and 100 km, but not in 200 and 1,000 km where the sex difference remained unchanged for the annual fastest. For the fastest women and men ever, the sex difference in running speed was lowest in 100 km (5.0%) and highest in 50 km (15.4%). For the ten fastest women and men ever, the sex difference was lowest in 100 km (10.0 ± 3.0%) and highest in 200 km (27.3 ± 5.7%). For both the fastest (r² = 0.003, p = 0.82) and the ten fastest finishers ever (r² = 0.34, p = 0.41) in 50 km, 100 km, 200 km and 1,000 km, we found no correlation between the sex difference in performance and running speed. To summarize, the sex differences in running speeds decreased non-linearly in 50 km and 100 km but remained unchanged in 200 km and 1,000 km, and the sex differences in running speeds showed no change with increasing length of the race distance. These findings suggest that it is very unlikely that women will ever outrun men in ultra-marathons held from 50 km to 100 km.
ALPS - A LINEAR PROGRAM SOLVER
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
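As a small illustration of the kind of problem ALPS solves, the classic two-variable resource-allocation LP below is solved with SciPy; ALPS itself is APL2-based, so this is only an analogous modern example.

```python
# Maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0.
# scipy minimizes, so the objective is negated.
from scipy.optimize import linprog

res = linprog(c=[-3, -5],
              A_ub=[[1, 0], [0, 2], [3, 2]],
              b_ub=[4, 12, 18],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimum at x = 2, y = 6, objective 36
```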
NASA Astrophysics Data System (ADS)
Rosenberg, D. E.; Alafifi, A.
2016-12-01
Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized the near-optimal region as the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iterate is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iterate generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
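A minimal sketch of the hit-and-run step described above, using slice-sampling-style shrinkage along the random line. The feasibility test, bracket width, and starting point are illustrative assumptions, and the null-space handling of linear equality constraints is omitted.

```python
import numpy as np

def hit_and_run(x0, inside, n_samples, bracket=1e3, rng=None):
    """Hit-and-Run over a bounded region given by a feasibility test.

    From a feasible point, pick a uniform random direction, then use
    slice-sampling-style shrinkage along the line through the current
    point: propose t uniformly on [lo, hi] and shrink the bracket
    toward 0 on rejection. Assumes x0 is feasible and the region fits
    inside the initial bracket.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        u = rng.normal(size=x.size)
        u /= np.linalg.norm(u)            # random direction on the sphere
        lo, hi = -bracket, bracket
        while True:
            t = rng.uniform(lo, hi)
            if inside(x + t * u):         # "hit": accept and move
                x = x + t * u
                samples.append(x.copy())
                break
            if t < 0:                     # "run" failed: shrink the bracket
                lo = t
            else:
                hi = t
    return np.array(samples)
```

For a near-optimal region of a linear program, `inside` could test the original constraints plus the tolerance constraint on the objective, e.g. `lambda x: all(A @ x <= b) and c @ x <= (1 + tol) * opt` (names here are hypothetical).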
On the Rapid Computation of Various Polylogarithmic Constants
NASA Technical Reports Server (NTRS)
Bailey, David H.; Borwein, Peter; Plouffe, Simon
1996-01-01
We give algorithms for the computation of the d-th digit of certain transcendental numbers in various bases. These algorithms can be easily implemented (multiple precision arithmetic is not needed), require virtually no memory, and feature run times that scale nearly linearly with the order of the digit desired. They make it feasible to compute, for example, the billionth binary digit of log(2) or pi on a modest workstation in a few hours run time. We demonstrate this technique by computing the ten billionth hexadecimal digit of pi, the billionth hexadecimal digits of pi-squared, log(2) and log-squared(2), and the ten billionth decimal digit of log(9/10). These calculations rest on the observation that very special types of identities exist for certain numbers like pi, pi-squared, log(2) and log-squared(2). These are essentially polylogarithmic ladders in an integer base. A number of these identities that we derive in this work appear to be new, for example a critical identity for pi.
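The digit-extraction idea can be shown concretely for pi using the BBP identity. The following sketch computes hexadecimal digits at an arbitrary position with modular exponentiation so that no large intermediate numbers appear; it is in the spirit of, but not identical to, the algorithms in the paper.

```python
def pi_hex_digits(n, k=8):
    """Hex digits of pi starting at position n+1 after the hexadecimal
    point, via the Bailey-Borwein-Plouffe identity
    pi = sum_{i>=0} 16**-i * (4/(8i+1) - 2/(8i+4) - 1/(8i+5) - 1/(8i+6)).
    Modular exponentiation keeps every intermediate quantity small.
    """
    def series(j):
        s = 0.0
        for i in range(n + 1):                 # head: 16^(n-i) mod (8i+j)
            s = (s + pow(16, n - i, 8 * i + j) / (8 * i + j)) % 1.0
        i = n + 1
        while True:                            # tail: terms shrink as 16^(n-i)
            term = 16.0 ** (n - i) / (8 * i + j)
            if term < 1e-17:
                return s % 1.0
            s = (s + term) % 1.0
            i += 1

    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    out = ""
    for _ in range(k):
        frac *= 16
        d = int(frac)
        out += "0123456789abcdef"[d]
        frac -= d
    return out

print(pi_hex_digits(0))   # 243f6a88 (pi = 3.243f6a88... in hex)
```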
Newton, Erica J; Patterson, Brent R; Anderson, Morgan L; Rodgers, Arthur R; Vander Vennen, Lucas M; Fryxell, John M
2017-01-01
Woodland caribou (Rangifer tarandus caribou) in Ontario are a threatened species that have experienced a substantial retraction of their historic range. Part of their decline has been attributed to increasing densities of anthropogenic linear features such as trails, roads, railways, and hydro lines. These features have been shown to increase the search efficiency and kill rate of wolves. However, it is unclear whether selection for anthropogenic linear features is additive or compensatory to selection for natural (water) linear features which may also be used for travel. We studied the selection of water and anthropogenic linear features by 52 resident wolves (Canis lupus x lycaon) over four years across three study areas in northern Ontario that varied in degrees of forestry activity and human disturbance. We used Euclidean distance-based resource selection functions (mixed-effects logistic regression) at the seasonal range scale with random coefficients for distance to water linear features, primary/secondary roads/railways, and hydro lines, and tertiary roads to estimate the strength of selection for each linear feature and for several habitat types, while accounting for availability of each feature. Next, we investigated the trade-off between selection for anthropogenic and water linear features. Wolves selected both anthropogenic and water linear features; selection for anthropogenic features was stronger than for water during the rendezvous season. Selection for anthropogenic linear features increased with increasing density of these features on the landscape, while selection for natural linear features declined, indicating compensatory selection of anthropogenic linear features. These results have implications for woodland caribou conservation. Prey encounter rates between wolves and caribou seem to be strongly influenced by increasing linear feature densities. This behavioral mechanism, a compensatory functional response to anthropogenic linear feature density resulting in decreased use of natural travel corridors, has negative consequences for the viability of woodland caribou.
Molecular dynamics study of Al and Ni3Al sputtering by Al cluster bombardment
NASA Astrophysics Data System (ADS)
Zhurkin, Eugeni E.; Kolesnikov, Anton S.
2002-06-01
The sputtering of Al and Ni3Al (100) surfaces induced by the impact of Al ions and Al_N clusters (N = 2, 4, 6, 9, 13, 55) with energies of 100 and 500 eV/atom is studied at the atomic scale by means of classical molecular dynamics (MD). The MD code we used implements a many-body tight-binding potential splined to the ZBL potential at short distances. Special attention has been paid to modelling dense cascades: we used large computational cells with lateral periodic and damped boundary conditions. In addition, long simulation times (10-25 ps) and representative statistics (up to 1000 runs per case) were considered. The total sputtering yields, energy and time spectra of sputtered particles, as well as preferential sputtering of the compound target were analyzed, both in the linear and non-linear regimes. A significant "cluster enhancement" of the sputtering yield was found for cluster sizes N ⩾ 13. In parallel, we estimated collision cascade features depending on cluster size in order to interpret the nature of the observed non-linear effects.
Gender classification of running subjects using full-body kinematics
NASA Astrophysics Data System (ADS)
Williams, Christina M.; Flora, Jeffrey B.; Iftekharuddin, Khan M.
2016-05-01
This paper proposes novel automated gender classification of subjects engaged in running. The machine learning techniques include preprocessing with principal component analysis followed by classification with linear discriminant analysis, nonlinear support vector machines, and a decision stump with AdaBoost. The dataset consists of 49 subjects (25 males, 24 females, 2 trials each), all equipped with approximately 80 retroreflective markers. The trials capture the subject's entire body moving unrestrained through a capture volume at a self-selected running speed, thus producing highly realistic data. The classification accuracy using leave-one-out cross-validation for the 49 subjects is improved from 66.33% using linear discriminant analysis to 86.74% using the nonlinear support vector machine. Results are further improved to 87.76% by implementing a nonlinear decision stump with AdaBoost classifier. The experimental findings suggest that the linear classification approaches are inadequate for classifying gender in a large dataset with subjects running in a moderately uninhibited environment.
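The overall pipeline, PCA followed by linear and nonlinear classifiers under leave-one-out cross-validation, can be sketched as below; the data shapes, component count, and kernels are placeholders rather than the study's settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder data standing in for the marker-trajectory features.
X = np.random.rand(98, 240)
y = np.random.randint(0, 2, 98)          # 0 = female, 1 = male (illustrative)

for clf in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
    pipe = make_pipeline(PCA(n_components=20), clf)
    acc = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
    print(type(clf).__name__, round(acc, 3))
```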
Can We Speculate Running Application With Server Power Consumption Trace?
Li, Yuanlong; Hu, Han; Wen, Yonggang; Zhang, Jun
2018-05-01
In this paper, we propose to detect the running applications in a server by classifying the observed power consumption series for the purpose of data center energy consumption monitoring and analysis. The time series classification problem has been extensively studied, with various distance measurements developed; recently, deep learning-based sequence models have also been shown to be promising. In this paper, we propose a novel distance measurement and build a time series classification algorithm hybridizing the nearest neighbor and long short term memory (LSTM) neural network. More specifically, first we propose a new distance measurement termed local time warping (LTW), which utilizes a user-specified index set for local warping, and is designed to be non-commutative and free of dynamic programming. Second, we hybridize the 1-nearest neighbor (1NN)-LTW and LSTM together. In particular, we combine the prediction probability vectors of 1NN-LTW and LSTM to determine the label of the test cases. Finally, using the power consumption data from a real data center, we show that the proposed LTW can improve the classification accuracy of dynamic time warping (DTW) from about 84% to 90%. Our experimental results show that the proposed LTW is competitive on our data set compared with existing DTW variants and that its non-commutative feature is indeed beneficial. We also test a linear version of LTW and find that it performs similarly to the state-of-the-art DTW-based method while running as fast as linear-runtime lower-bound methods like LB_Keogh for our problem. With the hybrid algorithm, we achieve an accuracy of up to about 93% for the power series classification task. Our research can inspire more studies on time series distance measurement and on hybrids of deep learning models with traditional models.
NASA Technical Reports Server (NTRS)
2002-01-01
(Released 29 May 2002) The Science: Today's THEMIS release captures Mangala Fossa. Mangala Fossa is a graben, which in geologic terminology translates into a long, parallel to semi-parallel fracture or trough. Grabens are dropped or downthrown areas relative to the rocks on either side, and these features are generally longer than they are wide. There are numerous dust devil trails seen in this image. In the lower portion of this image several dust devil tracks can be seen cutting across the upper surface, then down the short stubby channel, and finally back up and over to the adjacent upper surface. Some dust avalanche streaks on slopes are also visible. The rough material in the upper third of the image contains a portion of the rim of a 90 km diameter crater located in Daedalia Planum. The smooth crater floor has a graben (up to 7 km wide) and a channel (2 km wide) incised into its surface. In the middle third and right of this image one can see ripples (possibly fossil dunes) on the crater floor material just above the graben. The floor of Mangala Fossa and the southern crater floor surface also have smaller linear ridges trending from the upper left to lower right. These linear ridges could be either erosional (yardangs) or depositional (dunes) landforms. The lower third of the scene contains a short stubby channel (near the right margin) and a lava flow front (lower left). The floor of this channel is fairly smooth with some linear crevasses located along its course. One gets the impression that the channel floor is mantled with some type of indurated material that permits cracks to form in its surface. The Story: In the Daedalia Plains on Mars, the rim of an old eroded crater rises up, a wreck of its former self (see context image at right). From the rough, choppy crater rim (top of the larger THEMIS image), the terrain descends to the almost smooth crater floor, gouged deeply by a trough, a channel, and the occasional dents of small, scattered craters. The deep trough running from southwest to northeast across the middle of this image is called 'Mangala Fossa.' Mangala Fossa is a graben, a land feature created by tectonic processes that worked to create a depression in the landscape. This graben is a little more than 4 miles wide at its maximum, but like most grabens, is much longer than it is wide. You can see from the context image that it runs across much of the width of the crater. Running southward from the graben (lower right-hand side of the larger THEMIS image) is a branching channel a little over a mile wide. The floor of this channel is fairly smooth with some linear crevasses along its course. These features suggest that the channel floor might be layered with some type of cemented material that permits cracks to form in its surface. Between the rough crater rim and the depressed graben, tiny crackles on the otherwise smooth surface appear. They might be the ripples of fossil dunes, hardened remains from a more active time. The floor of Mangala Fossa and the southern crater floor surface also feature small lines that seem to crease the surface. We know that they are ridges on the surface, but how did they form? Were higher surfaces carved away in grooves by the wind and scouring sand, forming ridges called yardangs? Or were dunes deposited on the smooth, lower terrain? No one knows for sure. Look closely for faint details as well. Do you see the subtle, scalloped pattern that laps at the lower left of the image, almost too muted to be seen?
That's the sign of an ancient lava flow that stopped just there. And the shadowy gray streaks? Some are smudges caused by dust avalanches running down the slopes of the channel. Others are the tracks of dust devils that pass across the land, lifting and carrying away brighter dust to reveal the darker surface beneath. For a good example of a dust devil track, check out the faint gray line that cuts across the upper part of the channel, just below the point where it meets the graben.
Anhøj, Jacob; Olesen, Anne Vingaard
2014-01-01
A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
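To make the two useful rules concrete, here is a minimal Python sketch of shift and crossings tests around the median. The thresholds (a log2-based run limit and the lower 5th percentile of a binomial distribution for crossings) are assumptions in the spirit of the abstract, not the authors' published constants, and the example data are invented.

```python
import numpy as np
from scipy.stats import binom

def run_chart_signals(y):
    """Test a data sequence for non-random variation around its median.

    Implements the shift rule (unusually long run on one side of the
    median) and the crossings rule (unusually few median crossings).
    Thresholds are illustrative log2-based limits."""
    y = np.asarray(y, dtype=float)
    signs = np.sign(y - np.median(y))
    signs = signs[signs != 0]                  # points on the median are ignored
    n = len(signs)

    # Shift rule: longest run of points on the same side of the median.
    runs, current = [1], 1
    for a, b in zip(signs, signs[1:]):
        current = current + 1 if a == b else 1
        runs.append(current)
    longest_run = max(runs)
    run_limit = round(np.log2(n)) + 3          # illustrative upper limit

    # Crossings rule: number of times the sequence crosses the median.
    crossings = int(np.sum(signs[:-1] != signs[1:]))
    cross_limit = int(binom.ppf(0.05, n - 1, 0.5))   # lower 5th percentile

    return {"shift_signal": longest_run > run_limit,
            "crossings_signal": crossings < cross_limit,
            "longest_run": longest_run, "crossings": crossings}

# A drifting process: few crossings should trigger a signal.
print(run_chart_signals([3, 4, 3, 5, 6, 7, 8, 9, 9, 10, 11, 12]))
```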
Toward real-time performance benchmarks for Ada
NASA Technical Reports Server (NTRS)
Clapp, Russell M.; Duchesneau, Louis; Volz, Richard A.; Mudge, Trevor N.; Schultze, Timothy
1986-01-01
The issue of real-time performance measurements for the Ada programming language through the use of benchmarks is addressed. First, the Ada notion of time is examined and a set of basic measurement techniques is developed. Then a set of Ada language features believed to be important for real-time performance is presented and specific measurement methods are discussed. In addition, other important time-related features which are not explicitly part of the language but are part of the run-time system are also identified and measurement techniques developed. The measurement techniques are applied to the language and run-time system features and the results are presented.
NASA Astrophysics Data System (ADS)
Gatos, I.; Tsantis, S.; Karamesini, M.; Skouroliakou, A.; Kagadis, G.
2015-09-01
Purpose: The design and implementation of a computer-based image analysis system employing the support vector machine (SVM) classifier for the classification of Focal Liver Lesions (FLLs) on routine non-enhanced, T2-weighted Magnetic Resonance (MR) images. Materials and Methods: The study comprised 92 patients, each of whom underwent MRI performed on a Magnetom Concerto (Siemens). Typical signs on dynamic contrast-enhanced MRI and biopsies were employed toward a three-class categorization of the 92 cases: 40 benign FLLs, 25 Hepatocellular Carcinomas (HCC) within cirrhotic liver parenchyma, and 27 liver metastases from non-cirrhotic liver. Prior to FLL classification, an automated lesion segmentation algorithm based on Markov Random Fields was employed in order to acquire each FLL Region of Interest. 42 texture features derived from the grey-level histogram, co-occurrence and run-length matrices and 12 morphological features were obtained from each lesion. Stepwise multi-linear regression analysis was utilized to avoid feature redundancy, leading to a feature subset that fed the multiclass SVM classifier designed for lesion classification. SVM system evaluation was performed by means of the leave-one-out method and ROC analysis. Results: Maximum accuracy for all three classes (90.0%) was obtained by means of the Radial Basis Function kernel and three textural features (Inverse-Difference-Moment, Sum-Variance and Long-Run-Emphasis) that describe the lesion's contrast, variability and shape complexity. Sensitivity values for the three classes were 92.5%, 81.5% and 96.2% respectively, whereas specificity values were 94.2%, 95.3% and 95.5%. The AUC value achieved for the selected subset was 0.89 with a 0.81-0.94 confidence interval. Conclusion: The proposed SVM system exhibits promising results that could be utilized as a second-opinion tool by the radiologist in order to decrease the time/cost of diagnosis and the need for patients to undergo invasive examination.
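A schematic analogue of this evaluation pipeline, illustrative only: an RBF-kernel SVM assessed with leave-one-out cross-validation via scikit-learn, run on synthetic stand-in features because the MR texture data are not public; the class structure and feature count are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic stand-ins for three selected texture features over three
# lesion classes (benign / HCC / metastasis, schematically).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(30, 3)) for m in (0, 2, 4)])
y = np.repeat([0, 1, 2], 30)

# RBF-kernel SVM evaluated with leave-one-out, mirroring the setup above.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.3f}")
```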
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreibmann, E; Iwinski Sutter, A; Whitaker, D
Objective: To investigate the prognostic significance of image gradients in predicting clinical outcomes in patients with non-small cell lung cancer treated with stereotactic body radiotherapy (SBRT); 71 patients with 83 treated lesions were analyzed. Methods: The records of patients treated with lung SBRT were retrospectively reviewed. When applicable, SBRT target volumes were modified to exclude any overlap with pleura, chest wall, or mediastinum. The ITK software package was utilized to generate quantitative measures of image intensity, inhomogeneity, shape morphology and first- and second-order CT textures. Multivariate and univariate models containing CT features were generated to assess associations with clinicopathologic factors. Results: On univariate analysis, tumor size (HR 0.54, p=0.045), sumHU (HR 0.31, p=0.044) and short run grey level emphasis STD (HR 0.22, p=0.019) were associated with regional failure-free survival; meanHU (HR 0.30, p=0.035), long run emphasis (HR 0.21, p=0.011) and long run low grey level emphasis (HR 0.14, p=0.005) were associated with distant failure-free survival (DFFS). No features were significant on multivariate modeling; however, long run low grey level emphasis had a hazard ratio of 0.12 (p=0.061) for DFFS. Adenocarcinoma and squamous cell carcinoma differed with respect to long run emphasis STD (p=0.024), short run low grey level emphasis STD (p<0.001), and long run low grey level emphasis STD (p=0.024). Multivariate modeling of texture features associated with tumor histology was used to estimate histologies of 18 lesions treated without histologic confirmation. Of these, MVA suggested the same histology as a prior metachronous lung malignancy in 3/7 patients. Conclusion: Extracting radiomics features from clinical datasets was feasible with the ITK package with minimal effort, identifying pre-treatment quantitative CT features that are prognostic for distant control after lung SBRT.
Lymphocytic esophagitis: Report of three cases and review of the literature
Jideh, Bilel; Keegan, Andrew; Weltman, Martin
2016-01-01
Lymphocytic esophagitis (LyE) is a rare condition characterised histologically by high numbers of esophageal intraepithelial lymphocytes without significant granulocyte infiltration, in addition to intercellular edema ("spongiosis"). The clinical significance and natural history of LyE are poorly defined, although dysphagia is reportedly the most common symptom. Endoscopic features range from normal-appearing esophageal mucosa to features similar to those seen in eosinophilic esophagitis, including esophageal rings, linear furrows, whitish exudates, and esophageal strictures/stenosis. Symptomatic gastroesophageal reflux disease is an inconsistent association. LyE has been associated with paediatric Crohn's disease, and recently with primary esophageal dysmotility disorder in adults. There are no studies assessing effective treatment strategies for LyE; empirical therapies have included use of proton pump inhibitors and corticosteroids. Esophageal dilatation has been used to manage esophageal strictures. LyE has been reported to run a benign course; however, there has been a case of esophageal perforation associated with LyE. Here, we describe the clinical, endoscopic and histopathological features of three patients with lymphocytic esophagitis along with a review of the current literature. PMID:28035315
NASA Astrophysics Data System (ADS)
Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa
2018-03-01
In the recent decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, in which supervised image classification techniques play a central role. Hence, using a high-resolution WorldView-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, Bagged CART, stochastic gradient boosting and neural network with feature extraction, were tested and compared with two prevalent methods: random forest and support vector machine with a linear kernel. To do so, each method was run ten times and three validation techniques were used to estimate the accuracy statistics: cross-validation, independent validation and validation with the total training data. Moreover, using ANOVA and Tukey's test, the statistical significance of differences between the classification methods was assessed. In general, the results showed that random forest, with a marginal difference compared to Bagged CART and stochastic gradient boosting, is the best performing method, whilst based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that neural network with feature extraction and linear support vector machine had better processing speed than the others.
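The comparison protocol (repeated runs, accuracy estimation, ANOVA followed by Tukey's post-hoc test) can be sketched as follows; the data, models and fold counts are stand-ins, not the study's WorldView-3 setup.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for labelled image pixels.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SGB": GradientBoostingClassifier(subsample=0.8, random_state=0),
    "SVM-linear": SVC(kernel="linear"),
}

# Repeated cross-validated accuracy, echoing the repeated runs per method.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=1)
scores = {name: cross_val_score(m, X, y, cv=cv) for name, m in models.items()}

# One-way ANOVA across methods, then Tukey's HSD post-hoc comparison.
print(f_oneway(*scores.values()))
acc = np.concatenate(list(scores.values()))
grp = np.repeat(list(scores.keys()), [len(s) for s in scores.values()])
print(pairwise_tukeyhsd(acc, grp))
```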
Sex difference in top performers from Ironman to double deca iron ultra-triathlon
Knechtle, Beat; Zingg, Matthias A; Rosemann, Thomas; Rüst, Christoph A
2014-01-01
This study investigated changes in performance and sex difference in top performers for ultra-triathlon races held between 1978 and 2013 from Ironman (3.8 km swim, 180 km cycle, and 42 km run) to double deca iron ultra-triathlon distance (76 km swim, 3,600 km cycle, and 844 km run). The fastest men ever were faster than the fastest women ever for split and overall race times, with the exception of the swimming split in the quintuple iron ultra-triathlon (19 km swim, 900 km cycle, and 210.1 km run). Correlation analyses showed an increase in sex difference with increasing length of race distance for swimming (r2=0.67, P=0.023), running (r2=0.77, P=0.009), and overall race time (r2=0.77, P=0.0087), but not for cycling (r2=0.26, P=0.23). For the annual top performers, split and overall race times decreased across years nonlinearly in female and male Ironman triathletes. For longer distances, cycling split times decreased linearly in male triple iron ultra-triathletes, and running split times decreased linearly in male double iron ultra-triathletes but increased linearly in female triple and quintuple iron ultra-triathletes. Overall race times increased nonlinearly in female triple and male quintuple iron ultra-triathletes. The sex difference decreased nonlinearly in swimming, running, and overall race time in Ironman triathletes but increased linearly in cycling and running and nonlinearly in overall race time in triple iron ultra-triathletes. These findings suggest that women reduced the sex difference nonlinearly in shorter ultra-triathlon distances (ie, Ironman), but for longer distances than the Ironman, the sex difference increased or remained unchanged across years. It seems very unlikely that female top performers will ever outrun male top performers in ultra-triathlons. The nonlinear change in speed and sex difference in the Ironman triathlon suggests that female and male Ironman triathletes have reached their limits in performance. PMID:25114605
Application of texture analysis method for mammogram density classification
NASA Astrophysics Data System (ADS)
Nithya, R.; Santhi, B.
2017-07-01
Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classify breast tissue types in digital mammogram. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve the diagnostic accuracy in mammogram density classification. Texture analysis methods are used to extract the features from the mammogram. Texture features are extracted by using histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), Entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. These extracted features are selected using Analysis of Variance (ANOVA). The features selected by ANOVA are fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density classification. This work has been carried out by using the mini-Mammographic Image Analysis Society (MIAS) database. Five classifiers are employed namely, Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than LDA, NB, KNN and SVM classifiers. The proposed methodology has achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
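Of the texture methods listed, the GLCM step combined with ANOVA-based selection can be illustrated as below, assuming scikit-image (graycomatrix/graycoprops, spelled greycomatrix in older releases) and scikit-learn; the patches and labels are synthetic stand-ins, not mini-MIAS data.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older versions
from sklearn.feature_selection import SelectKBest, f_classif

def glcm_features(img, levels=64):
    """Haralick-style GLCM features for one 8-bit image patch."""
    img = (img // (256 // levels)).astype(np.uint8)     # quantise grey levels
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Stand-in patches and density labels (0=fatty, 1=glandular, 2=dense).
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(90, 64, 64)).astype(np.uint8)
labels = np.repeat([0, 1, 2], 30)

X = np.array([glcm_features(p) for p in patches])
selector = SelectKBest(f_classif, k=3).fit(X, labels)   # ANOVA F-test selection
print("selected feature indices:", selector.get_support(indices=True))
```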
Eskofier, Bjoern M; Kraus, Martin; Worobets, Jay T; Stefanyshyn, Darren J; Nigg, Benno M
2012-01-01
The identification of differences between groups is often important in biomechanics. This paper presents group classification tasks using kinetic and kinematic data from a prospective running injury study. Groups composed of gender, of shod/barefoot running, and of runners who developed patellofemoral pain syndrome (PFPS) during the study versus asymptomatic runners were classified. The features computed from the biomechanical data were deliberately chosen to be generic. Therefore, they were suited for different biomechanical measurements and classification tasks without adaptation to the input signals. Feature ranking was applied to reveal the relevance of each feature to the classification task. Data from 80 runners were analysed for gender and shod/barefoot classification, while 12 runners were investigated in the injury classification task. Gender groups could be differentiated with 84.7%, shod/barefoot running with 98.3%, and PFPS with 100% classification rate. For the latter group, one single variable could be identified that alone allowed discrimination.
Garcia-Vicente, Ana María; Molina, David; Pérez-Beteta, Julián; Amo-Salas, Mariano; Martínez-González, Alicia; Bueno, Gloria; Tello-Galán, María Jesús; Soriano-Castrejón, Ángel
2017-12-01
To study the influence of dual time point 18F-FDG PET/CT on textural features and SUV-based variables and the relations among them. Fifty-six patients with locally advanced breast cancer (LABC) were prospectively included. All of them underwent a standard 18F-FDG PET/CT (PET-1) and a delayed acquisition (PET-2). After segmentation, SUV variables (SUVmax, SUVmean, and SUVpeak), metabolic tumor volume (MTV), and total lesion glycolysis (TLG) were obtained. Eighteen three-dimensional (3D) textural measures were computed, including run-length matrix (RLM) features, co-occurrence matrix (CM) features, and energies. Differences between all PET-derived variables obtained in PET-1 and PET-2 were studied. Significant differences were found between the SUV-based parameters and MTV obtained in the dual time point PET/CT, with higher values of SUV-based variables and lower MTV in PET-2 with respect to PET-1. In relation to the textural parameters obtained in the dual time point acquisition, significant differences were found for the short run emphasis, low gray-level run emphasis, short run high gray-level emphasis, run percentage, long run emphasis, gray-level non-uniformity, homogeneity, and dissimilarity. Textural variables showed relations with MTV and TLG. Significant differences of textural features were found in dual time point 18F-FDG PET/CT. Thus, a dynamic behavior of metabolic characteristics should be expected, with higher heterogeneity in the delayed PET acquisition compared with the standard PET. A greater heterogeneity was found in bigger tumors.
Bianconi, Francesco; Fravolini, Mario Luca; Bello-Cerezo, Raquel; Minestrini, Matteo; Scialpi, Michele; Palumbo, Barbara
2018-04-01
We retrospectively investigated the prognostic potential (correlation with overall survival) of 9 shape and 21 textural features from non-contrast-enhanced computed tomography (CT) in patients with non-small-cell lung cancer. We considered a public dataset of 203 individuals with inoperable, histologically- or cytologically-confirmed NSCLC. Three-dimensional shape and textural features from CT were computed using proprietary code and their prognostic potential was evaluated through four different statistical protocols. Volume and grey-level run-length matrix (GLRLM) run-length non-uniformity were the only two features to pass all four protocols. Both features correlated negatively with overall survival. The results also showed a strong dependence on the evaluation protocol used. Tumour volume and GLRLM run-length non-uniformity from CT were the best predictors of survival in patients with non-small-cell lung cancer. We did not find enough evidence to claim a relationship with survival for the other features.
Rowan, L.C.; Trautwein, C.M.; Purdy, T.L.
1990-01-01
This study was undertaken as part of the Conterminous U.S. Mineral Assessment Program (CUSMAP). The purpose of the study was to map linear features on Landsat Multispectral Scanner (MSS) images and a proprietary side-looking airborne radar (SLAR) image mosaic and to determine the spatial relationship between these linear features and the locations of metallic mineral occurrences. The results show a close spatial association of linear features with metallic mineral occurrences in parts of the quadrangle, but in other areas the association is less well defined. Linear features are defined as distinct linear and slightly curvilinear elements mappable on MSS and SLAR images. The features generally represent linear segments of streams, ridges, and terminations of topographic features; however, they may also represent tonal patterns that are related to variations in lithology and vegetation. Most linear features in the Butte quadrangle probably represent underlying structural elements, such as fractures (with and without displacement), dikes, and alignments of fold axes. However, in areas underlain by sedimentary rocks, some of the linear features may reflect bedding traces. This report describes the geologic setting of the Butte quadrangle, the procedures used in mapping and analyzing the linear features, and the results of the study. The relationship of these features to placer and non-metallic deposits was not analyzed in this study and is not discussed in this report.
Fast support vector data descriptions for novelty detection.
Liu, Yi-Hung; Liu, Yan-Chen; Chen, Yen-Jen
2010-08-01
Support vector data description (SVDD) has become a very attractive kernel method due to its good results in many novelty detection problems. However, the decision function of SVDD is expressed in terms of the kernel expansion, which results in a run-time complexity linear in the number of support vectors. For applications where fast real-time response is needed, how to speed up the decision function is crucial. This paper aims at reducing the testing time complexity of SVDD. A method called fast SVDD (F-SVDD) is proposed. Unlike traditional methods, which all try to compress a kernel expansion into one with fewer terms, the proposed F-SVDD directly finds the preimage of a feature vector, and then uses a simple relationship between this feature vector and the SVDD sphere center to re-express the center with a single vector. The decision function of F-SVDD contains only one kernel term, and thus the decision boundary of F-SVDD is simply spherical in the original space. Hence, the run-time complexity of the F-SVDD decision function is no longer linear in the number of support vectors, but is a constant, no matter how large the training set size is. In this paper, we also propose a novel direct preimage-finding method, which is noniterative and involves no free parameters. The unique preimage can be obtained in real time by the proposed direct method without trial and error. For demonstration, several real-world data sets and a large-scale data set, the extended MIT face data set, are used in experiments. In addition, a practical industry example regarding liquid crystal display micro-defect inspection is also used to compare the applicability of SVDD and our proposed F-SVDD when faced with mass data input. The results are very encouraging.
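The speed-up argument can be made concrete with a toy sketch: the classical decision needs one kernel evaluation per support vector, while an F-SVDD-style decision uses a single kernel term against a preimage of the center. Here the preimage is crudely approximated by the weighted mean of the support vectors, which is a simplification; the paper's direct preimage-finding method is different.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2, axis=-1))

# Toy SVDD solution: support vectors x_i with weights alpha_i (sum to 1).
rng = np.random.default_rng(0)
sv = rng.normal(size=(50, 2))
alpha = rng.dirichlet(np.ones(50))

def svdd_decision(x):
    """Classical SVDD: cost grows with the number of support vectors."""
    return rbf(x, x) - 2 * alpha @ rbf(sv, x)   # + constant term, omitted

# F-SVDD idea: replace the expansion a = sum_i alpha_i * phi(x_i) by a
# single preimage z with phi(z) close to a; z is approximated here by the
# weighted mean, a crude stand-in for the paper's preimage method.
z = alpha @ sv

def fsvdd_decision(x):
    """Single kernel term: constant run-time regardless of training size."""
    return rbf(x, x) - 2 * rbf(x, z)            # + constant term, omitted

x = np.array([0.3, -0.1])
print(svdd_decision(x), fsvdd_decision(x))
```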
2012-01-01
Background: Through the wealth of information contained within them, genome-wide association studies (GWAS) have the potential to provide researchers with a systematic means of associating genetic variants with a wide variety of disease phenotypes. Due to the limitations of approaches that have analyzed single variants one at a time, it has been proposed that the genetic basis of these disorders could be determined through detailed analysis of the genetic variants themselves and in conjunction with one another. The construction of models that account for these subsets of variants requires methodologies that generate predictions based on the total risk of a particular group of polymorphisms. However, due to the excessive number of variants, constructing these types of models has so far been computationally infeasible. Results: We have implemented an algorithm, known as greedy RLS, that we use to perform the first known wrapper-based feature selection on the genome-wide level. The running time of greedy RLS grows linearly in the number of training examples, the number of features in the original data set, and the number of selected features. This speed is achieved through computational short-cuts based on matrix calculus. Since the memory consumption in present-day computers can form an even tighter bottleneck than running time, we also developed a space-efficient variation of greedy RLS which trades running time for memory. These approaches are then compared to traditional wrapper-based feature selection implementations based on support vector machines (SVM) to reveal the relative speed-up and to assess the feasibility of the new algorithm. As a proof of concept, we apply greedy RLS to the Hypertension – UK National Blood Service WTCCC dataset and select the most predictive variants using 3-fold external cross-validation in less than 26 minutes on a high-end desktop. On this dataset, we also show that greedy RLS has a better classification performance on independent test data than a classifier trained using features selected by a statistical p-value-based filter, which is currently the most popular approach for constructing predictive models in GWAS. Conclusions: Greedy RLS is the first known implementation of a machine learning based method with the capability to conduct a wrapper-based feature selection on an entire GWAS containing several thousand examples and over 400,000 variants. In our experiments, greedy RLS selected a highly predictive subset of genetic variants in a fraction of the time spent by wrapper-based selection methods used together with SVM classifiers. The proposed algorithms are freely available as part of the RLScore software library at http://users.utu.fi/aatapa/RLScore/. PMID:22551170
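The wrapper logic that greedy RLS accelerates can be sketched naively as follows; this version refits a ridge (regularised least-squares) model for every candidate feature and therefore lacks the matrix-calculus short-cuts that give greedy RLS its linear running time. Data and parameters are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def greedy_forward_selection(X, y, k, alpha=1.0):
    """Naive wrapper-based forward selection with a regularised
    least-squares base learner; greedy RLS reaches the same selections
    via computational short-cuts instead of refitting each candidate."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best_j, best_score = None, -np.inf
        for j in remaining:
            cols = selected + [j]
            score = cross_val_score(Ridge(alpha=alpha), X[:, cols], y,
                                    cv=3, scoring="r2").mean()
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = X[:, 3] - 2 * X[:, 17] + 0.1 * rng.normal(size=200)
print(greedy_forward_selection(X, y, k=2))   # expect features 17 and 3
```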
Listing triangles in expected linear time on a class of power law graphs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordman, Daniel J.; Wilson, Alyson G.; Phillips, Cynthia Ann
Enumerating triangles (3-cycles) in graphs is a kernel operation for social network analysis. For example, many community detection methods depend upon finding common neighbors of two related entities. We consider Cohen's simple and elegant solution for listing triangles: give each node a 'bucket.' Place each edge into the bucket of its endpoint of lowest degree, breaking ties consistently. Each node then checks each pair of edges in its bucket, testing for the adjacency that would complete that triangle. Cohen presents an informal argument that his algorithm should run well on real graphs. We formalize this argument by providing an analysis for the expected running time on a class of random graphs, including power law graphs. We consider a rigorously defined method for generating a random simple graph, the erased configuration model (ECM). In the ECM each node draws a degree independently from a marginal degree distribution, endpoints pair randomly, and we erase self loops and multiedges. If the marginal degree distribution has a finite second moment, it follows immediately that Cohen's algorithm runs in expected linear time. Furthermore, it can still run in expected linear time even when the degree distribution has such a heavy tail that the second moment is not finite. We prove that Cohen's algorithm runs in expected linear time when the marginal degree distribution has finite 4/3 moment and no vertex has degree larger than √n. In fact we give the precise asymptotic value of the expected number of edge pairs per bucket. A finite 4/3 moment is required; if it is unbounded, then so is the number of pairs. The marginal degree distribution of a power law graph has bounded 4/3 moment when its exponent α is more than 7/3. Thus for this class of power law graphs, with degree at most √n, Cohen's algorithm runs in expected linear time. This is precisely the value of α for which the clustering coefficient tends to zero asymptotically, and it is in the range that is relevant for the degree distribution of the World-Wide Web.
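Cohen's bucket algorithm as described can be sketched directly; the node identifiers and the exact tie-breaking rule below are illustrative choices.

```python
from collections import defaultdict
from itertools import combinations

def list_triangles(edges):
    """Cohen's bucket algorithm: each edge goes into the bucket of its
    lower-degree endpoint (ties broken by node id); each node then tests
    every pair of edges in its bucket for the closing edge of a triangle."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    edge_set = {frozenset(e) for e in edges}
    buckets = defaultdict(list)
    for u, v in edges:
        owner = min(u, v, key=lambda n: (deg[n], n))   # consistent tie-break
        buckets[owner].append((u, v))
    triangles = set()
    for node, bucket in buckets.items():
        for (a, b), (c, d) in combinations(bucket, 2):
            others = ({a, b} | {c, d}) - {node}
            if len(others) == 2 and frozenset(others) in edge_set:
                triangles.add(frozenset({node} | others))
    return triangles

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(list_triangles(edges))   # {frozenset({0, 1, 2})}
```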
Global correlation of topographic heights and gravity anomalies
NASA Technical Reports Server (NTRS)
Roufosse, M. C.
1977-01-01
The short wavelength features were obtained by subtracting a calculated 24th-degree-and-order field from observed data written in 1 deg x 1 deg squares. The correlation between the two residual fields was examined by a program of linear regression. When run on a worldwide scale over oceans and continents separately, the program did not exhibit any correlation; this can be explained by the fact that the worldwide autocorrelation function for residual gravity anomalies falls off much faster as a function of distance than does that for residual topographic heights. The situation was different when the program was used in restricted areas, of the order of a 5 deg x 5 deg square. For 30% of the world, fair-to-good correlations were observed, mostly over continents. The slopes of the regression lines are proportional to apparent densities, which offer a large spectrum of values that are being interpreted in terms of features in the upper mantle consistent with available heat-flow, gravity, and seismic data.
Multi-Scale Peak and Trough Detection Optimised for Periodic and Quasi-Periodic Neuroscience Data.
Bishop, Steven M; Ercole, Ari
2018-01-01
The reliable detection of peaks and troughs in physiological signals is essential to many investigative techniques in medicine and computational biology. Analysis of the intracranial pressure (ICP) waveform is a particular challenge due to multi-scale features, a changing morphology over time and signal-to-noise limitations. Here we present an efficient peak and trough detection algorithm that extends the scalogram approach of Scholkmann et al., and results in greatly improved algorithm runtime performance. Our improved algorithm (modified Scholkmann) was developed and analysed in MATLAB R2015b. Synthesised waveforms (periodic, quasi-periodic and chirp sinusoids) were degraded with white Gaussian noise to achieve signal-to-noise ratios down to 5 dB and were used to compare the performance of the original Scholkmann and modified Scholkmann algorithms. The modified Scholkmann algorithm has false-positive (0%) and false-negative (0%) detection rates identical to the original Scholkmann when applied to our test suite. Actual compute time for a 200-run Monte Carlo simulation over a multicomponent noisy test signal was 40.96 ± 0.020 s (mean ± 95%CI) for the original Scholkmann and 1.81 ± 0.003 s (mean ± 95%CI) for the modified Scholkmann, demonstrating the expected improvement in runtime complexity from quadratic, O(n²), to linear, O(n). The accurate interpretation of waveform data to identify peaks and troughs is crucial in signal parameterisation, feature extraction and waveform identification tasks. Modification of a standard scalogram technique has produced a robust algorithm with linear computational complexity that is particularly suited to the challenges presented by large, noisy physiological datasets. The algorithm is optimised through a single parameter and can identify sub-waveform features with minimal additional overhead, and is easily adapted to run in real time on commodity hardware.
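The idea of a local-maxima scalogram can be illustrated with the simplified variant below; it scores each sample by the fraction of scales at which it dominates both neighbours k samples away, which is in the spirit of, but not identical to, the Scholkmann and modified-Scholkmann algorithms. Signal and parameters are invented.

```python
import numpy as np

def multiscale_peaks(x, max_scale, frac=0.8):
    """Simplified multi-scale peak detector: a sample is a local maximum
    at scale k if it exceeds both neighbours k samples away; samples that
    are maxima over a large fraction of scales are reported as peaks.
    Illustrative variant, not the published algorithm."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    hits = np.zeros(n)
    valid = np.zeros(n)
    for k in range(1, max_scale + 1):
        mid = slice(k, n - k)
        is_max = (x[mid] > x[:n - 2 * k]) & (x[mid] > x[2 * k:])
        hits[mid] += is_max
        valid[mid] += 1
    score = np.divide(hits, valid, out=np.zeros(n), where=valid > 0)
    score[valid < max_scale] = 0      # keep samples with the full scale range
    return np.where(score >= frac)[0]

t = np.linspace(0, 4 * np.pi, 400)
sig = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(multiscale_peaks(sig, max_scale=40))   # samples near the crests (~50, ~250)
```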
NASA Technical Reports Server (NTRS)
Edwards, John W.; Malone, John B.
1992-01-01
The current status of computational methods for unsteady aerodynamics and aeroelasticity is reviewed. The key features of challenging aeroelastic applications are discussed in terms of the flowfield state: low-angle high speed flows and high-angle vortex-dominated flows. The critical role played by viscous effects in determining aeroelastic stability for conditions of incipient flow separation is stressed. The need for a variety of flow modeling tools, from linear formulations to implementations of the Navier-Stokes equations, is emphasized. Estimates of computer run times for flutter calculations using several computational methods are given. Applications of these methods for unsteady aerodynamic and transonic flutter calculations for airfoils, wings, and configurations are summarized. Finally, recommendations are made concerning future research directions.
Murata, Chiharu; Ramírez, Ana Belén; Ramírez, Guadalupe; Cruz, Alonso; Morales, José Luis; Lugo-Reyes, Saul Oswaldo
2015-01-01
The features in a clinical history from a patient with suspected primary immunodeficiency (PID) direct the differential diagnosis through pattern recognition. PIDs are a heterogeneous group of more than 250 congenital diseases with increased susceptibility to infection, inflammation, autoimmunity, allergy and malignancy. Linear discriminant analysis (LDA) is a multivariate supervised classification method that sorts objects of study into groups by finding linear combinations of a number of variables. Our objective was to identify the features that best explain membership of pediatric PID patients in a defect group or disease. An analytic cross-sectional study was done with a pre-existing database of clinical and laboratory records from 168 patients with PID, followed at the National Institute of Pediatrics during 1991-2012; the database was used to build linear discriminant models that would explain membership of each patient to the different defect groups and to the most prevalent PIDs in our registry. After a preliminary run, only 30 features were included (4 demographic, 10 clinical, 10 laboratory, 6 germs), with which the training models were developed through a stepwise regression algorithm. We compared the automatic feature selection with a selection made by a human expert, and then assessed the diagnostic usefulness of the resulting models (sensitivity, specificity, prediction accuracy and kappa coefficient), with 95% confidence intervals. The models incorporated 6 to 14 features to explain membership of PID patients in the five most abundant defect groups (combined, antibody, well-defined, dysregulation and phagocytosis), and in the four most prevalent PID diseases (X-linked agammaglobulinemia, chronic granulomatous disease, common variable immunodeficiency and ataxia-telangiectasia). In practically all cases of feature selection the machine outperformed the human expert. Diagnosis prediction using the equations created had a global accuracy of 83 to 94%, with sensitivity of 60 to 100%, specificity of 83 to 95% and kappa coefficients of 0.37 to 0.76. In general, the selected features have clinical plausibility, and the practical advantage of relying only on clinical attributes, infecting germs and routine lab results (blood cell counts and serum immunoglobulins). The performance of the model as a diagnostic tool was acceptable. The study's main limitations are a limited sample size and a lack of cross-validation. This is only the first step in the construction of a machine learning system, with a wider approach that includes a larger database and different methodologies, to assist the clinical diagnosis of primary immunodeficiencies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Richen; Guo, Hanqi; Yuan, Xiaoru
Most existing approaches to visualizing vector field ensembles reveal the uncertainty of individual variables, for example, statistics or variability. However, user-defined derived features like vortices or air masses are also quite significant, since they make more sense to domain scientists. In this paper, we present a new framework to extract user-defined derived features from different simulation runs. Specifically, we use a detail-to-overview searching scheme to help extract vortices with a user-defined shape. We further compute geometric information, including the size and geo-spatial location of the extracted vortices. We also design linked views to compare them between different runs. Finally, temporal information such as the occurrence time of the feature is estimated and compared. Results show that our method is capable of extracting the features across different runs and comparing them spatially and temporally.
SU-E-QI-17: Dependence of 3D/4D PET Quantitative Image Features On Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliver, J; Budzevich, M; Zhang, G
2014-06-15
Purpose: Quantitative imaging is a fast evolving discipline in which a large number of features are extracted from images; i.e., radiomics. Some features have been shown to have diagnostic, prognostic and predictive value. However, they are sensitive to acquisition and processing factors, e.g., noise. In this study, noise was added to positron emission tomography (PET) images to determine how features were affected. Methods: Three levels of Gaussian noise were added to the PET images of 8 lung cancer patients acquired in 3D mode (static) and using respiratory tracking (4D); for the latter, images from one of 10 phases were used. A total of 62 features: 14 shape, 19 intensity (1stO), 18 GLCM textures (2ndO; from grey-level co-occurrence matrices) and 11 RLM textures (2ndO; from run-length matrices) were extracted from segmented tumors. Dimensions of the GLCM were 256×256, calculated using 3D images with a step size of 1 voxel in 13 directions. Grey levels were binned into 256 levels for RLM and features were calculated in all 13 directions. Results: Feature variation generally increased with noise. Shape features were the most stable while RLM were the most unstable. Intensity and GLCM features performed well, the latter being more robust. The most stable 1stO features were compactness, maximum and minimum length, standard deviation, root-mean-squared, I30, V10-V90, and entropy. The most stable 2ndO features were entropy, sum-average, sum-entropy, difference-average, difference-variance, difference-entropy, information-correlation-2, short-run-emphasis, long-run-emphasis, and run-percentage. In general, features computed from images from one of the phases of 4D scans were more stable than from 3D scans. Conclusion: This study shows the need to characterize image features carefully before they are used in research and medical applications. It also shows that the performance of features, and thereby feature selection, may be assessed in part by noise analysis.
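The noise-robustness test can be mimicked on synthetic data: re-extract a few first-order features at several Gaussian noise levels and use the coefficient of variation as a stability measure. The ROI, features and noise levels below are stand-ins, not the study's PET data.

```python
import numpy as np

def intensity_features(roi):
    """A few first-order intensity features of the kind compared above:
    mean, standard deviation, root-mean-squared, histogram entropy."""
    p, _ = np.histogram(roi, bins=64)
    p = p[p > 0]
    p = p / p.sum()
    entropy = -np.sum(p * np.log2(p))
    return np.array([roi.mean(), roi.std(), np.sqrt((roi ** 2).mean()), entropy])

rng = np.random.default_rng(0)
roi = rng.gamma(shape=4.0, scale=2.0, size=(24, 24, 8))   # stand-in tumour ROI

# Re-extract the features at three Gaussian noise levels and report the
# coefficient of variation of each feature as a stability measure.
for sigma in (0.5, 1.0, 2.0):
    feats = np.array([intensity_features(roi + rng.normal(0, sigma, roi.shape))
                      for _ in range(20)])
    cv = feats.std(axis=0) / np.abs(feats.mean(axis=0))
    print(sigma, np.round(cv, 4))
```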
Cottin, F; Metayer, N; Goachet, A G; Julliand, V; Slawinski, J; Billat, V; Barrey, E
2010-11-01
Arabian horses have morphological, muscular and metabolic features suited to endurance races. Their gas exchange and gait variables were therefore measured during a field exercise test. This study presents original respiratory and locomotor data recorded in endurance horses under field conditions. The respiratory gas exchange ratio (RER) of Arabian horses at the speed required to win endurance races (18 km/h for 120-160 km) is <1 and running economy (RE) is also low, in order to maintain exercise intensity using aerobic metabolism for long intervals. The purpose of this study was to measure oxygen consumption and gait variables in Arabian endurance horses running in the field in order to estimate RER and RE. Five Arabian horses trained for endurance racing were test ridden at increasing speeds in the field. Their speed was recorded and controlled by the rider using a GPS logger. Each horse was equipped with a portable respiratory gas analyser, which measured breath-by-breath respiratory variables and heart rate. The gait variables were recorded using tri-axial accelerometer data loggers and software for gait analysis. Descriptive statistics and linear regressions were used to analyse the speed-related changes in each variable with P < 0.05 taken as significant. At a canter speed corresponding to the endurance race winning speed (18 km/h), horses presented VO2 = 42 ± 9 ml/min/kg bwt, RER = 0.96 ± 0.10 and RE (= VO2/speed) = 134 ± 27 ml/km/kg bwt. Linear relationships were observed between speed and VO2, HR and gait variables. Significant correlations were observed between VO2 and gait variables. The RER of 0.96 at winning endurance speed indicates that Arabian horses mainly use aerobic metabolism based on lipid oxidation and that RER may also be related to a good coordination between running speed, respiratory and gait parameters.
Synthesis of Energetic Materials
1984-03-31
1. GPC chromatogram of a polyformal of 1; Run 45/5 ... 2. GPC chromatogram of a polyformal of 1; Run 49/4 ... linear component. The GPCs of runs 45/5 and 49/4 are shown in Figures 1 and 2, respectively. Run #49/4 was scaled up to the 10 g level without ... not yet identified product 6 ... O2NCH2CH2CH2OAc (1. MeOH/H+; 2. HNO3/H2SO4) → O2NCH2CH2CH2ONO2
Fast linear feature detection using multiple directional non-maximum suppression.
Sun, C; Vallotton, P
2009-05-01
The capacity to detect linear features is central to image analysis, computer vision and pattern recognition and has practical applications in areas such as neurite outgrowth detection, retinal vessel extraction, skin hair removal, plant root analysis and road detection. Linear feature detection often represents the starting point for image segmentation and image interpretation. In this paper, we present a new algorithm for linear feature detection using multiple directional non-maximum suppression with symmetry checking and gap linking. Given its low computational complexity, the algorithm is very fast. We show in several examples that it performs very well in terms of both sensitivity and continuity of detected linear features.
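The core of directional non-maximum suppression can be sketched for four orientations as below; the symmetry checking and gap linking of the full algorithm are omitted, and the oriented kernels and offsets are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def directional_nms(img):
    """Average the image along each of four orientations, then keep
    pixels that are strict maxima across the perpendicular direction."""
    img = np.asarray(img, dtype=float)
    # Oriented 5-pixel line kernels: horizontal, vertical, two diagonals.
    h = np.zeros((5, 5))
    h[2, :] = 1 / 5
    v = h.T
    d1 = np.eye(5) / 5
    d2 = d1[::-1]
    # Perpendicular offsets used for the non-maximum test.
    perp = {0: (1, 0), 1: (0, 1), 2: (1, -1), 3: (1, 1)}
    response = np.zeros(img.shape, dtype=bool)
    for i, kern in enumerate([h, v, d1, d2]):
        r = convolve(img, kern, mode="nearest")
        dy, dx = perp[i]
        up = np.roll(r, shift=(dy, dx), axis=(0, 1))
        down = np.roll(r, shift=(-dy, -dx), axis=(0, 1))
        response |= (r > up) & (r > down)
    return response

img = np.zeros((32, 32))
img[16, :] = 1.0                       # a horizontal line
mask = directional_nms(img)
print(mask[16].sum(), mask[15].sum())  # the line row survives suppression
```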
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, O; Mutic, S; Li, H
2016-06-15
Purpose: To describe the performance of a linear accelerator operating in a compact MRI-guided radiation therapy system. Methods: A commercial linear accelerator was placed in an MRI unit that is employed in a commercial MR-based image guided radiation therapy (IGRT) system. The linear accelerator components were placed within magnetic field-reducing hardware that provided magnetic fields of less than 40 G for the magnetron, gun driver, and port circulator, with 1 G for the linear accelerator. The system did not employ a flattening filter. The test linear accelerator was an industrial 4 MV model that was employed to test the ability to run an accelerator in the MR environment. An MR-compatible diode detector array was used to measure the beam profiles with the accelerator outside and inside the MR field and with the gradient coils on and off, to examine whether there was any effect on the delivered dose distribution. The beam profiles and time characteristics of the beam were measured. Results: The beam profiles exhibited characteristic unflattened Bremsstrahlung features with less than ±1.5% differences in the profile magnitude when the system was outside and inside the magnet and less than 1% differences with the gradient coils on and off. The central axis dose rate fluctuated by less than 1% over a 30 second period when outside and inside the MRI. Conclusion: A linac-compatible MR design has been shown to be effective in not perturbing the operation of a commercial linear accelerator. While the accelerator used in the tests was 4 MV, there is nothing fundamentally different about the operation of a 6 MV unit, implying that the design will enable operation of the proposed clinical unit. Research funding provided by ViewRay, Inc.
Validity of Treadmill-Derived Critical Speed on Predicting 5000-Meter Track-Running Performance.
Nimmerichter, Alfred; Novak, Nina; Triska, Christoph; Prinz, Bernhard; Breese, Brynmor C
2017-03-01
Nimmerichter, A, Novak, N, Triska, C, Prinz, B, and Breese, BC. Validity of treadmill-derived critical speed on predicting 5,000-meter track-running performance. J Strength Cond Res 31(3): 706-714, 2017-To evaluate 3 models of critical speed (CS) for the prediction of 5,000-m running performance, 16 trained athletes completed an incremental test on a treadmill to determine maximal aerobic speed (MAS) and 3 randomly ordered runs to exhaustion at Δ70% intensity and at 110% and 98% of MAS. Critical speed and the distance covered above CS (D') were calculated using the hyperbolic speed-time (HYP), the linear distance-time (LIN), and the linear speed inverse-time model (INV). Five thousand meter performance was determined on a 400-m running track. Individual predictions of 5,000-m running time (t = [5,000-D']/CS) and speed (s = D'/t + CS) were calculated across the 3 models in addition to multiple regression analyses. Prediction accuracy was assessed with the standard error of estimate (SEE) from linear regression analysis and the mean difference expressed in units of measurement and coefficient of variation (%). Five thousand meter running performance (speed: 4.29 ± 0.39 m·s⁻¹; time: 1,176 ± 117 seconds) was significantly better than the predictions from all 3 models (p < 0.0001). The mean difference was 65-105 seconds (5.7-9.4%) for time and -0.22 to -0.34 m·s⁻¹ (-5.0 to -7.5%) for speed. Predictions from multiple regression analyses with CS and D' as predictor variables were not significantly different from actual running performance (-1.0 to 1.1%). The SEE across all models and predictions was approximately 65 seconds or 0.20 m·s⁻¹ and is therefore considered moderate. The results of this study have shown the importance of aerobic and anaerobic energy system contributions in predicting 5,000-m running performance. Using estimates of CS and D' is valuable for predicting performance over race distances of 5,000 m.
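The linear distance-time (LIN) model and the prediction equations quoted above are straightforward to reproduce; the exhaustion-trial numbers below are invented for illustration, not the study's data.

```python
import numpy as np

# Times to exhaustion (s) and distances covered (m) from three
# constant-speed runs -- illustrative numbers only.
t = np.array([180.0, 420.0, 840.0])
d = np.array([1020.0, 2100.0, 3780.0])

# Linear distance-time model (LIN): d = CS * t + D'
CS, D_prime = np.polyfit(t, d, 1)

# Predicted 5,000-m time and speed from the equations in the abstract:
#   t = (5,000 - D') / CS          s = D'/t + CS
t_5k = (5000.0 - D_prime) / CS
s_5k = D_prime / t_5k + CS
print(f"CS = {CS:.2f} m/s, D' = {D_prime:.0f} m, "
      f"5k time = {t_5k:.0f} s, speed = {s_5k:.2f} m/s")
```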
NASA Astrophysics Data System (ADS)
Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus
2016-04-01
Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which notably has shown significant potential for the assimilation of datasets that are diverse with regard to the spatial resolution and their relationship. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant spiral - superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. Finally, a multi-grid simulating path was created to enforce large-scale variance and to allow for adapting parameters, such as, for example, the log-linear weights or the type of simulation path at various scales. The newly implemented search method for kriging reduces the computational cost from an exponential dependence with regard to the grid size in the original algorithm to a linear relationship, as each neighboring search becomes independent from the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant path techniques introduce a bias to the simulations was explored and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weight at each scale considered in our multi-grid approach allows for reproducing both the variogram and histogram, and the spatial trend of the underlying data.
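The log-linear pooling operator mentioned above has a compact form: the pooled distribution is proportional to the product of the component distributions raised to their weights. A minimal sketch with made-up probabilities:

```python
import numpy as np

def log_linear_pool(p1, p2, w1=0.5, w2=0.5):
    """Log-linear aggregation of two probability vectors over the same
    discrete variable: pooled(i) ~ p1(i)**w1 * p2(i)**w2, renormalised.
    The weights attribute relative importance to the data components."""
    pooled = p1 ** w1 * p2 ** w2
    return pooled / pooled.sum()

# Two sources of information about the same discrete facies variable.
p_geophysics = np.array([0.6, 0.3, 0.1])
p_kriging = np.array([0.2, 0.5, 0.3])
print(log_linear_pool(p_geophysics, p_kriging, w1=0.7, w2=0.3))
```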
Water and processes of degradation in the Martian landscape
NASA Technical Reports Server (NTRS)
Milton, D. J.
1973-01-01
It is shown that erosion has been active on Mars, so that many of the surface landforms are products of degradation. Unlike on Earth, erosion has not been a universal process but one areally restricted and intermittently active, so that the landscape is the product of one or two cycles of erosion together with large areas of essentially undisturbed primitive terrain; running water has been the principal agent of degradation. Many features on Mars are most easily explained by assuming running surface water at some time in the past; for a few features, running water is the only possible explanation.
Interactive Web Graphs for Economic Principles.
ERIC Educational Resources Information Center
Kaufman, Dennis A.; Kaufman, Rebecca S.
2002-01-01
Describes a Web site with animation and interactive activities containing graphs and basic economics concepts. Features changes in supply and market equilibrium, the construction of the long-run average cost curve, short-run profit maximization, long-run market equilibrium, and changes in aggregate demand and aggregate supply. States the…
NASA Astrophysics Data System (ADS)
Jordan, Gyozo; Petrik, Attila; De Vivo, Benedetto; Albanese, Stefano; Demetriades, Alecos; Sadeghi, Martiya
2017-04-01
Several studies have investigated the spatial distribution of chemical elements in topsoil (0-20 cm) within the framework of the EuroGeoSurveys Geochemistry Expert Group's 'Geochemical Mapping of Agricultural and Grazing Land Soil' project. Most of these studies used geostatistical analyses and interpolated concentration maps, and Exploratory and Compositional Data Analysis, to identify anomalous patterns. The objective of our investigation is to demonstrate the use of digital image processing techniques for reproducible spatial pattern recognition and quantitative spatial feature characterisation. A single element (Ni) concentration in agricultural topsoil is used to perform the detailed spatial analysis, and to relate the identified features to possible underlying processes. In this study, simple univariate statistical methods were implemented first, and Tukey's inner-fence criterion was used to delineate statistical outliers. Linear triangular irregular network (TIN) interpolation was used on the outlier-free Ni data points, and the result was resampled to a 10*10 km grid. Successive moving average smoothing was applied to generalise the TIN model, suppressing small-scale and at the same time enhancing significant large-scale features of the Ni concentration spatial distribution patterns in European topsoil. The TIN map smoothed with a moving average filter revealed the spatial trends and patterns without losing much detail, and it was used as the input to digital image processing, such as local maxima and minima determination, digital cross sections, gradient magnitude and gradient direction calculation, second-derivative profile curvature calculation, edge detection, local variability assessment, lineament density and directional variogram analyses. The detailed image processing analysis revealed several NE-SW, E-W and NW-SE oriented elongated features, which coincide with different spatial parameter classes and align with local maxima and minima. The NE-SW oriented linear pattern is the dominant feature to the south of the last glaciation limit. Some of these linear features are parallel to the suture zone of the Iapetus Ocean, while the others follow the Alpine and Carpathian Chains. The highest-variability zones of Ni concentration in topsoil are located in the Alps and in the Balkans, where mafic and ultramafic rocks outcrop. The predominant NE-SW oriented pattern is also captured by the strong anisotropy of the semi-variograms in this direction. A single major E-W oriented, north-facing feature runs along the southern border of the last glaciation zone. This zone also coincides with a series of local maxima in Ni concentration along the glaciofluvial deposits. The NW-SE elongated spatial features are less dominant and are located in the Pyrenees and Scandinavia. This study demonstrates the efficiency of systematic image processing analysis in identifying and characterising spatial geochemical patterns that often remain undetected by the usual visual map interpretation techniques.
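The smoothing and gradient steps of this image-processing chain can be sketched generically; the grid below is a synthetic stand-in for the interpolated Ni surface, and the filter sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Synthetic stand-in for the gridded, TIN-interpolated Ni surface.
rng = np.random.default_rng(0)
ni = uniform_filter(rng.normal(size=(120, 160)), size=9)

# Successive moving-average smoothing to suppress small-scale features.
smoothed = ni
for _ in range(3):
    smoothed = uniform_filter(smoothed, size=5)

# Gradient magnitude and direction, as used in the pattern analysis.
gy, gx = np.gradient(smoothed)
magnitude = np.hypot(gx, gy)
direction = np.degrees(np.arctan2(gy, gx))   # 0 deg = +x axis, CCW positive
print(magnitude.max(), direction.min(), direction.max())
```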
Run-up of Tsunamis in the Gulf of Mexico caused by the Chicxulub Impact Event
NASA Astrophysics Data System (ADS)
Weisz, R.; Wünnenmann, K.; Bahlburg, H.
2003-04-01
The Chicxulub impact event can be investigated on (1) local, (2) regional, and (3) global scales. Our investigations focus on the regional scale, especially the run-up of tsunami waves on the coast around the Gulf of Mexico caused by the impact. An impact produces two types of tsunami waves: (1) the rim wave and (2) the collapse wave. Both waves propagate over long distances and reach coastal areas. Depending on the tsunami wave characteristics, they have a potentially large influence on the coastal areas, and run-up distance and run-up height can be used as parameters for assessing this influence. To calculate these parameters, we use a multi-material hydrocode (SALE) to simulate the generation of the tsunami wave and a non-linear shallow water approach for the propagation, and we implemented a special open boundary for considering the run-up of tsunami waves. With the help of the one-dimensional shallow water approach, we will give run-up heights and distances for the coastal area around the Gulf of Mexico. The calculations are done along several sections from the impact site towards the coast and are a first approximation to run-up calculations for the entire coast of the Gulf of Mexico. The bathymetric data along the sections, used in the wave propagation and run-up, correspond to a linearized bathymetry of the recent Gulf of Mexico. Additionally, we will present preliminary results from our first two-dimensional experiments of propagation and run-up, which will be compared with the one-dimensional approach.
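For reference, the one-dimensional non-linear shallow water equations typically used for this kind of propagation and run-up modelling are given below in their standard conservative form; the abstract does not state the authors' exact formulation, so this is the conventional system, not a quotation.

```latex
% Mass and momentum conservation over bathymetry b(x), with water
% depth h(x,t), depth-averaged velocity u(x,t) and gravity g.
\begin{aligned}
  \partial_t h + \partial_x (h u) &= 0,\\
  \partial_t (h u) + \partial_x\!\left(h u^{2} + \tfrac{1}{2}\, g h^{2}\right)
    &= -\, g\, h\, \partial_x b .
\end{aligned}
```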
LINFLUX-AE: A Turbomachinery Aeroelastic Code Based on a 3-D Linearized Euler Solver
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, M. A.; Trudell, J. J.; Mehmed, O.; Stefko, G. L.
2004-01-01
This report describes the development and validation of LINFLUX-AE, a turbomachinery aeroelastic code based on the linearized unsteady 3-D Euler solver, LINFLUX. A helical fan with flat plate geometry is selected as the test case for numerical validation. The steady solution required by LINFLUX is obtained from the nonlinear Euler/Navier Stokes solver TURBO-AE. The report briefly describes the salient features of LINFLUX and the details of the aeroelastic extension. The aeroelastic formulation is based on a modal approach. An eigenvalue formulation is used for flutter analysis. The unsteady aerodynamic forces required for flutter are obtained by running LINFLUX for each mode, interblade phase angle and frequency of interest. The unsteady aerodynamic forces for forced response analysis are obtained from LINFLUX for the prescribed excitation, interblade phase angle, and frequency. The forced response amplitude is calculated from the modal summation of the generalized displacements. The unsteady pressures, work done per cycle, eigenvalues and forced response amplitudes obtained from LINFLUX are compared with those obtained from LINSUB, TURBO-AE, ASTROP2, and ANSYS.
Application of a linear spectral model to the study of Amazonian squall lines during GTE/ABLE 2B
NASA Technical Reports Server (NTRS)
Silva Dias, Maria A. F.; Ferreira, Rosana N.
1992-01-01
A linear nonhydrostatic spectral model is run with the basic state, or large scale, vertical profiles of temperature and wind observed prior to convective development along the northern coast of South America during the GTE/ABLE 2B. The model produces unstable modes with mesoscale wavelength and propagation speed comparable to observed Amazonian squall lines. Several tests with different vertical profiles of low-level winds lead to the conclusion that a shallow and/or weak low-level jet either does not produce a scale selection or, if it does, the selected mode is stationary, indicating the absence of a propagating disturbance. A 700-mbar jet of 13 m/s, with a 600-mbar wind speed greater or equal to 10 m/s, is enough to produce unstable modes with propagating features resembling those of observed Amazonian squall lines. However, a deep layer of moderate winds (about 10 m/s) may produce similar results even in the absence of a low-level wind maximum. The implications in terms of short-term weather forecasting are discussed.
Onodera, Andrea N; Gavião Neto, Wilson P; Roveri, Maria Isabel; Oliveira, Wagner R; Sacco, Isabel Cn
2017-01-01
Resilience of the midsole material and the upper structure of the shoe are design characteristics that can interfere with running biomechanics patterns. Artificial intelligence techniques can capture features from the entire waveform, adding a new perspective to biomechanical analysis. This study tested the influence of shoe midsole resilience and upper structure on the running kinematics and kinetics of non-professional runners by using feature selection, information gain, and artificial neural network analysis. Twenty-seven experienced male runners (63 ± 44 km/week run) ran in four shoe designs that combined two resilience-cushioning materials (low and high) and two uppers (minimalist and structured). Kinematic data were acquired by six infrared cameras at 300 Hz, and ground reaction forces were acquired by two force plates at 1,200 Hz. We conducted a machine learning analysis to identify the features, from the complete kinematic and kinetic time series and from 42 discrete variables, that best discriminated the four shoes studied. For that analysis, we built an input data matrix of dimensions 1,080 (10 trials × 4 shoes × 27 subjects) × 1,254 (3 joints × 3 planes of movement × 101 data points + 3 force vectors × 101 data points + 42 discrete calculated kinetic and kinematic features). The applied feature selection by information gain and artificial neural networks successfully differentiated the two resilience materials, using 200 (16%) biomechanical variables, with an accuracy of 84.8% by detecting alterations of running biomechanics, and the two upper structures with an accuracy of 93.9%. The discrimination of midsole resilience resulted in lower accuracy levels than did the discrimination of the shoe uppers. In both cases, the ground reaction forces were among the 25 most relevant features. The resilience of the cushioning material caused significant effects on initial heel impact, while the effects of different uppers were distributed along the stance phase of running. Biomechanical changes due to shoe midsole resilience seemed to be subject-dependent, while those due to upper structure seemed to be subject-independent.
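A minimal sketch of the kind of pipeline described (information gain followed by a neural network), using scikit-learn's mutual information estimator as a stand-in for information gain; the array shapes mirror the abstract, but the data, layer size, and fold count are placeholders, not the authors' setup.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data with the abstract's shapes: 1,080 trials x 1,254 features,
# four shoe labels. Real inputs would be the chained waveforms plus the
# discrete variables described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(1080, 1254))
y = rng.integers(0, 4, size=1080)

clf = make_pipeline(
    StandardScaler(),
    # mutual information as a stand-in for information gain; keep 200 (~16%)
    SelectKBest(mutual_info_classif, k=200),
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=500),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```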
NASA Astrophysics Data System (ADS)
Bakker, O. J.; Gibson, C.; Wilson, P.; Lohse, N.; Popov, A. A.
2015-10-01
Due to its inherent advantages, linear friction welding is a solid-state joining process of increasing importance to the aerospace, automotive, medical and power generation equipment industries. Tangential oscillations and the forge stroke during the burn-off phase of the joining process introduce substantial dynamic forces, which can be detrimental to the welding process. Since burn-off is a critical phase in the manufacturing stage, process monitoring is fundamental for quality and stability control purposes. This study aims to improve workholding stability through the analysis of fixture cassette deformations. Methods and procedures for process monitoring are developed and implemented in a fail-or-pass assessment system for fixture cassette deformations during the burn-off phase, and the de-noised signals are compared to results from previous production runs. The deformations caused by the forces acting on the fixture cassette are measured directly during the welding process. Data from the linear friction-welding machine are acquired and de-noised using empirical mode decomposition before the burn-off phase is extracted. This approach enables a direct, objective comparison of the signal features with trends from previous successful welds. The capability of the whole process-monitoring system is validated and demonstrated through the analysis of a large number of signals obtained from welding experiments.
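A rough sketch of the de-noising step, assuming the PyEMD package (pip install EMD-signal); the choice of how many fast intrinsic mode functions to discard is an illustrative assumption, not the authors' tuning.

```python
import numpy as np
from PyEMD import EMD  # assumes the PyEMD package (pip install EMD-signal)

def emd_denoise(signal, drop_first=2):
    """De-noise a fixture-deformation signal by empirical mode
    decomposition: discard the fastest intrinsic mode functions (IMFs),
    which typically carry measurement noise, and rebuild the rest."""
    imfs = EMD()(signal)            # rows: IMFs, last row ~ residue/trend
    return imfs[drop_first:].sum(axis=0)

t = np.linspace(0.0, 1.0, 2000)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
clean = emd_denoise(noisy)
```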
NASA Astrophysics Data System (ADS)
Parshin, Dmitry A.
2018-05-01
The additive process of forming a semicircular arched structure by layer-by-layer deposition of material onto its inner surface is simulated. The influence of the running mode of this process on the development of technological stress fields in the structure, which forms under gravity while its material undergoes creep and aging, is examined. Within the framework of the linear mechanics of accreted solids, a mathematical model of the process under study is proposed and numerical experiments are conducted. It is shown that the stress-strain state of additively formed heavy objects depends decisively on their formation mode. Several practically important trends and features of this dependence are studied.
Normalized Cut Algorithm for Automated Assignment of Protein Domains
NASA Technical Reports Server (NTRS)
Samanta, M. P.; Liang, S.; Zha, H.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We present a novel computational method for automatic assignment of protein domains from structural data. At the core of our algorithm lies a recently proposed clustering technique that has been very successful in image-partitioning applications. This graph-theory-based clustering method uses the notion of a normalized cut to partition an undirected graph into its strongly-connected components. A computer implementation of our method, tested on the standard comparison set of proteins from the literature, shows a high success rate (84%), better than most existing alternatives. In addition, several other features of our algorithm, such as reliance on few adjustable parameters, linear run-time with respect to the size of the protein, and reduced complexity compared to other graph-theory-based algorithms, make it an attractive tool for structural biologists.
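For flavor, a minimal sketch of normalized-cut-style domain assignment using scikit-learn's SpectralClustering (which optimizes a normalized-cut criterion) on a Cα contact graph; the 8 Å cutoff and unweighted affinity are assumptions, not the paper's construction.

```python
from scipy.spatial.distance import cdist
from sklearn.cluster import SpectralClustering

def assign_domains(ca_coords, n_domains=2, contact_cutoff=8.0):
    """Partition a protein into domains by normalized-cut-style clustering
    of its residue contact graph (pairwise Ca-Ca distances)."""
    d = cdist(ca_coords, ca_coords)
    affinity = (d < contact_cutoff).astype(float)  # unweighted contact graph
    sc = SpectralClustering(n_clusters=n_domains, affinity="precomputed")
    return sc.fit_predict(affinity)               # domain label per residue
```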
Real-time polarization imaging algorithm for camera-based polarization navigation sensors.
Lu, Hao; Zhao, Kaichun; You, Zheng; Huang, Kaoli
2017-04-10
Biologically inspired polarization navigation is a promising approach due to its autonomous nature, high precision, and robustness. Many researchers have built point-source-based and camera-based polarization navigation prototypes in recent years. Camera-based prototypes benefit from their high spatial resolution but incur a heavy computation load, since the pattern recognition step in most polarization imaging algorithms involves several nonlinear calculations. In this paper, the polarization imaging and pattern recognition algorithms are reduced to a handful of linear calculations by exploiting the orthogonality of the Stokes parameters, without loss of precision, based on the features of the solar meridian and the patterns of the polarized skylight. The algorithm comprises a pattern recognition algorithm with a Hough transform as well as orientation measurement algorithms. It was loaded and run on a digital signal processing system to test its computational complexity; the running time decreased from several thousand milliseconds to several tens of milliseconds. Simulations and experiments showed that the algorithm can measure orientation without reducing precision. It can hence satisfy the practical demands of low computational load and high precision for use in embedded systems.
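A sketch of the linear core of such a method: the Stokes parameters Q and U are linear in the measured intensities, so the per-pixel angle of polarization needs only one final arctangent. The four-analyzer-angle layout is an assumption for illustration, not necessarily this sensor's design.

```python
import numpy as np

def aop_from_intensities(i0, i45, i90, i135):
    """Linear Stokes-parameter estimate of the angle of polarization
    from four intensity images taken behind polarizers at 0/45/90/135 deg."""
    q = i0 - i90        # Stokes Q: linear in the measured intensities
    u = i45 - i135      # Stokes U: likewise linear
    return 0.5 * np.arctan2(u, q)   # angle of polarization (rad) per pixel
```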
Scalable PGAS Metadata Management on Extreme Scale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Agarwal, Khushbu; Straatsma, TP
Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
3-D in vitro estimation of temperature using the change in backscattered ultrasonic energy.
Arthur, R Martin; Basu, Debomita; Guo, Yuzheng; Trobaugh, Jason W; Moros, Eduardo G
2010-08-01
Temperature imaging with a non-invasive modality to monitor the heating of tumors during hyperthermia treatment is an attractive alternative to sparse invasive measurement. Previously, we predicted monotonic changes in the backscattered energy (CBE) of ultrasound with temperature for certain sub-wavelength scatterers. We also measured CBE values similar to our predictions in bovine liver, turkey breast muscle, and pork rib muscle in 2-D in vitro studies and in nude mice during 2-D in vivo studies. To extend these studies to three dimensions, we compensated for motion and measured CBE in turkey breast muscle. 3-D data sets were assembled from images formed by a phased-array imager with a 7.5-MHz linear probe moved in 0.6-mm steps in elevation during uniform heating from 37 to 45 °C in 0.5 °C increments. We used cross-correlation as a similarity measure in RF signals to automatically track feature displacement as a function of temperature. Feature displacement was non-rigid. Envelopes of image regions, compensated for non-rigid motion, were found with the Hilbert transform and then smoothed with a 3 × 3 running-average filter before forming the backscattered energy at each pixel. CBE in 3-D motion-compensated images was nearly linear with an average sensitivity of 0.30 dB/°C. 3-D estimation of temperature in separate tissue regions had errors with a maximum standard deviation of about 0.5 °C over 1-cm³ volumes. The success of CBE temperature estimation based on 3-D non-rigid tracking and compensation for real and apparent motion of image features could serve as the foundation for the eventual generation of 3-D temperature maps in soft tissue in a non-invasive, convenient, and low-cost way in clinical hyperthermia.
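A minimal sketch of forming CBE from two motion-compensated RF frames, following the envelope, smoothing, and energy steps named above; the function names and the log-ratio form are illustrative assumptions, not the authors' exact code.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.signal import hilbert

def cbe_map(rf_ref, rf_heated):
    """Change in backscattered energy (dB) between motion-compensated RF
    frames at a reference and an elevated temperature."""
    env_ref = np.abs(hilbert(rf_ref, axis=0))     # Hilbert-transform envelope
    env_hot = np.abs(hilbert(rf_heated, axis=0))
    # 3x3 running-average smoothing of the envelopes, then form energy
    e_ref = uniform_filter(env_ref, size=3) ** 2
    e_hot = uniform_filter(env_hot, size=3) ** 2
    return 10.0 * np.log10(e_hot / e_ref)         # dB change per pixel
```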
A model of the extent and distribution of woody linear features in rural Great Britain.
Scholefield, Paul; Morton, Dan; Rowland, Clare; Henrys, Peter; Howard, David; Norton, Lisa
2016-12-01
Hedges and lines of trees (woody linear features) are important boundaries that connect and enclose habitats, buffer the effects of land management, and enhance biodiversity in increasingly impoverished landscapes. Despite their acknowledged importance in the wider countryside, they are usually not considered in models of landscape function because of their linear nature and the difficulty of acquiring relevant data about their character, extent, and location. We present a model which uses national datasets to describe the distribution of woody linear features along boundaries in Great Britain. The method can be applied to other boundary types and in other locations around the world, across a range of spatial scales, wherever different types of linear feature can be separated using characteristics such as height or width. The satellite-derived Land Cover Map 2007 (LCM2007) provided the spatial framework for locating linear features and was used to screen out areas unsuitable for their occurrence, that is, offshore, urban, and forest areas. Similarly, Ordnance Survey Land-Form PANORAMA®, a digital terrain model, was used to screen out terrain where they do not occur. The presence of woody linear features on boundaries was modelled using attributes from a canopy height dataset obtained by subtracting a digital terrain model (DTM) from a digital surface model (DSM). The performance of the model was evaluated against existing woody linear feature data in the Countryside Survey across a range of scales. The results indicate that, despite some underestimation, this simple approach may provide valuable information on the extent and location of woody linear features in the countryside at both local and national scales.
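A toy sketch of the screening logic: canopy height is the DSM minus the DTM, and candidate woody linear features are boundary pixels whose height falls in a plausible hedge/tree range. The thresholds here are invented for illustration, not the paper's calibration.

```python
import numpy as np

def woody_feature_mask(dsm, dtm, boundary_mask,
                       min_height=1.0, max_height=20.0):
    """Flag candidate woody linear features: pixels lying on mapped
    boundaries whose canopy height (DSM minus DTM, in metres) falls in
    a plausible hedge/tree range. Thresholds are illustrative only."""
    canopy_height = dsm - dtm
    in_range = (canopy_height >= min_height) & (canopy_height <= max_height)
    return in_range & boundary_mask
```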
Assessing Footwear Effects from Principal Features of Plantar Loading during Running.
Trudeau, Matthieu B; von Tscharner, Vinzenz; Vienneau, Jordyn; Hoerzer, Stefan; Nigg, Benno M
2015-09-01
The effects of footwear on the musculoskeletal system are commonly assessed by interpreting the resultant force at the foot during the stance phase of running. However, this approach overlooks loading patterns across the entire foot. An alternative technique for assessing foot loading across different footwear conditions is possible using comprehensive analysis tools that extract distinct foot loading features, thus enhancing the functional interpretation of differences across interventions. The purpose of this article was to develop and apply a novel, comprehensive, pattern-recognition-based method for assessing the effects of different footwear interventions on plantar loading. A principal component analysis was used to extract different loading features from the stance phase of running, and a support vector machine (SVM) was used to determine whether and how these loading features differed across three shoe conditions. The results revealed distinct loading features at the foot during the stance phase of running. The loading features determined from the principal component analysis allowed successful classification of all three shoe conditions using the SVM. Several differences were found in the location and timing of the loading for each pairwise shoe comparison using the output from the SVM. The proposed analysis approach can be used to compare different loading patterns with a much greater resolution than has been reported previously. This has several important applications: for example, had the classification across shoe conditions not been significant, it would not be relevant for a user to select one shoe over another, or for a manufacturer to alter a shoe's construction, on biomechanical grounds.
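A compact sketch of a PCA-plus-SVM pipeline of the kind described, using scikit-learn; the variance threshold and linear kernel are assumptions, not the authors' settings.

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: stance-phase plantar loading waveforms (one row per trial),
# y: shoe-condition labels; both are placeholders for the study's data.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),   # keep components explaining 95% of variance
    SVC(kernel="linear"),     # classify shoe conditions from PCA scores
)
# model.fit(X_train, y_train); model.score(X_test, y_test)
```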
NASA Astrophysics Data System (ADS)
Løvholt, F.; Lynett, P.; Pedersen, G.
2013-06-01
Tsunamis induced by rock slides plunging into fjords constitute a severe threat to local coastal communities. The rock slide impact may give rise to highly non-linear waves in the near field, and because the wave lengths are relatively short, frequency dispersion comes into play. Fjord systems are rugged with steep slopes, and modeling non-linear dispersive waves in this environment with simultaneous run-up is demanding. We have run an operational Boussinesq-type TVD (total variation diminishing) model using different run-up formulations. Two different tests are considered: inundation on steep slopes and propagation in a trapezoidal channel. In addition, a set of Lagrangian models serves as reference models. Demanding test cases were applied, with solitary-wave amplitudes ranging from 0.1 to 0.5 and slopes ranging from 10 to 50°. Different run-up formulations yielded clearly different accuracy and stability, and only some provided accuracy similar to the reference models. The test cases revealed that the model was prone to instabilities for large non-linearity and fine resolution. Some of the instabilities were linked with false breaking during the first positive inundation, which was not observed for the reference models. None of the models were able to handle the bore forming during drawdown, however. The instabilities are linked to short-crested undulations on the grid scale, and appear at fine resolution during inundation. As a consequence, convergence was not always obtained. There is reason to believe that this instability may be a general problem for Boussinesq models in fjords.
Linear optics measurements and corrections using an AC dipole in RHIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, G.; Bai, M.; Yang, L.
2010-05-23
We report recent experimental results on linear optics measurements and corrections using an ac dipole. In the RHIC 2009 run, the concept of the SVD correction algorithm was tested at injection energy, both for identifying artificial gradient errors and for correcting them using the trim quadrupoles. The measured phase beatings were reduced by 30% and 40%, respectively, in two dedicated experiments. In the RHIC 2010 run, the ac dipole was used to measure β* and the chromatic β function. For the 0.65 m β* lattice, we observed a factor of 3 discrepancy between the model and the measured chromatic β function in the Yellow ring.
Altered Running Economy Directly Translates to Altered Distance-Running Performance.
Hoogkamer, Wouter; Kipp, Shalaya; Spiering, Barry A; Kram, Rodger
2016-11-01
Our goal was to quantify whether small (1%-3%) changes in running economy quantitatively affect distance-running performance. Based on the linear relationship between metabolic rate and running velocity, and on earlier observations that added shoe mass increases metabolic rate by ~1% per 100 g per shoe, we hypothesized that adding 100 and 300 g per shoe would slow 3000-m time-trial performance by 1% and 3%, respectively. Eighteen male sub-20-min 5-km runners completed treadmill testing and three 3000-m time trials wearing control shoes and identical shoes with 100 and 300 g of discreetly added mass. We measured rates of oxygen consumption and carbon dioxide production and calculated metabolic rates for the treadmill tests, and we recorded overall running time for the time trials. Adding mass to the shoes significantly increased metabolic rate at 3.5 m·s⁻¹ by 1.11% per 100 g per shoe (95% confidence interval = 0.88%-1.35%). While wearing the control shoes, participants ran the 3000-m time trial in 626.1 ± 55.6 s. Times averaged 0.65% ± 1.36% and 2.37% ± 2.09% slower for the +100-g and +300-g shoes, respectively (P < 0.001). On the basis of a linear fit of all the data, 3000-m time increased 0.78% per added 100 g per shoe (95% confidence interval = 0.52%-1.04%). Adding shoe mass predictably degrades running economy and slows 3000-m time-trial performance proportionally. Our data demonstrate that laboratory-based running economy measurements can accurately predict changes in distance-running race performance due to shoe modifications.
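The reported linear fit lends itself to a one-line predictor; a tiny sketch using the abstract's 0.78%-per-100-g figure (the function name is ours, not the authors'):

```python
def predicted_3000m_time(base_time_s, added_g_per_shoe, pct_per_100g=0.78):
    """Scale a 3000-m time by the reported ~0.78% slowdown per added
    100 g per shoe (linear fit quoted in the abstract)."""
    return base_time_s * (1.0 + pct_per_100g / 100.0 * added_g_per_shoe / 100.0)

print(predicted_3000m_time(626.1, 300))  # ~640.8 s with +300 g per shoe
```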
Osis, Sean T; Hettinga, Blayne A; Leitch, Jessica; Ferber, Reed
2014-08-22
As 3-dimensional (3D) motion-capture for clinical gait analysis continues to evolve, new methods must be developed to improve the detection of gait cycle events based on kinematic data. Recently, the application of principal component analysis (PCA) to gait data has shown promise in detecting important biomechanical features. Therefore, the purpose of this study was to define a new foot strike detection method for a continuum of striking techniques, by applying PCA to joint angle waveforms. In accordance with Newtonian mechanics, it was hypothesized that transient features in the sagittal-plane accelerations of the lower extremity would be linked with the impulsive application of force to the foot at foot strike. Kinematic and kinetic data from treadmill running were selected for 154 subjects, from a database of gait biomechanics. Ankle, knee and hip sagittal-plane angular acceleration curves were chained together to form a row input to a PCA matrix. A linear polynomial was calculated based on PCA scores, and a 10-fold cross-validation was performed to evaluate prediction accuracy against the gold-standard foot strike, as determined by a 10 N rise in the vertical ground reaction force. Results show that 89-94% of all predicted foot strikes were within 4 frames (20 ms) of the gold standard, with the largest error being 28 ms. It is concluded that this new foot strike detection method is an improvement on existing methods and can be applied regardless of whether the runner exhibits a rearfoot, midfoot, or forefoot strike pattern. Copyright © 2014 Elsevier Ltd. All rights reserved.
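A schematic of the described approach (PCA on chained joint-angle accelerations, a linear model on the scores, 10-fold cross-validation), with placeholder arrays in place of the gait database; the component count is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

# Placeholders: each row chains ankle, knee and hip sagittal-plane angular
# accelerations (3 x 101 samples); y is the gold-standard foot-strike time
# from a 10 N rise in vertical ground reaction force.
rng = np.random.default_rng(1)
X = rng.normal(size=(154, 303))
y = rng.uniform(0.0, 28.0, size=154)

model = make_pipeline(PCA(n_components=10), LinearRegression())
y_hat = cross_val_predict(model, X, y, cv=10)  # 10-fold CV predictions
print(np.abs(y_hat - y).mean())                # mean timing error
```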
Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema
2016-08-10
Many chemometric tools are invaluable and have proven effective in data mining and in the substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data, given the rapid development of advanced analytical techniques that deliver much information in a single measurement run. This especially concerns spectra, which are frequently the subject of comparative analysis in, e.g., the forensic sciences. In the present study, microtraces collected from the scenes of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry, and car paints were analysed using Raman spectroscopy. In the forensic context, analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences, using the likelihood ratio (LR) approach. However, for the proper construction of LR models for highly multivariate data such as spectra, chemometric tools must be employed for substantial data compression. Conversion from the classical feature representation to a distance representation was proposed for revealing hidden data peculiarities, and linear discriminant analysis was further applied for minimising the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of the data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem for highly multivariate and correlated data after proper extraction of the most relevant features and variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.
Running vacuum cosmological models: linear scalar perturbations
NASA Astrophysics Data System (ADS)
Perico, E. L. D.; Tamayo, D. A.
2017-08-01
In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions, typically denoted by Λ(H²) or Λ(R). Such models assume an equation of state for the vacuum given by P̄_Λ = −ρ̄_Λ, relating its background pressure P̄_Λ with its mean energy density ρ̄_Λ ≡ Λ/(8πG). This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ̄_Λ = Σ_i ρ̄_Λi. Each vacuum component Λ_i is associated and interacting with one of the i matter components at both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in those two scenarios, and identify the running vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ(H²) scenario the vacuum is coupled with every matter component, whereas the Λ(R) description only leads to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.
High-performance coupled poro-hydro-mechanical models to resolve fluid escape pipes
NASA Astrophysics Data System (ADS)
Räss, Ludovic; Makhnenko, Roman; Podladchikov, Yury
2017-04-01
Field observations and laboratory experiments exhibit inelastic deformation features arising in many coupled settings relevant to geo-applications. These irreversible deformations and their specific patterns suggest a rather ductile or brittle mechanism, such as viscous creep or micro-cracking, taking place on both geological (long) and human (short) timescales. In order to understand the underlying mechanisms responsible for these deformation features, there is a current need to accurately resolve the non-linearities inherent to strongly coupled physical processes. Among the large variety of modelling tools and software packages available nowadays in the community, very few are capable of efficiently solving coupled systems with high accuracy in both space and time while running efficiently on modern hardware. Here, we propose a robust framework to solve coupled multi-physics hydro-mechanical processes at very high spatial and temporal resolution in both two and three dimensions. Our software relies on the finite-difference method, and a pseudo-transient scheme is used to converge to the implicit solution of the system of poro-visco-elasto-plastic equations at each physical time step. The rheology, including viscosity estimates for major reservoir rock types, is inferred from novel lab experiments and confirms the ease of flow of sedimentary rocks. Our results propose a physical mechanism responsible for the generation of high-permeability pathways in fluid-saturated porous media and predict their propagation at rates observable on operational timescales. Finally, our software scales linearly on more than 5000 GPUs.
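To make the pseudo-transient idea concrete, a minimal sketch on a deliberately simple stand-in problem (steady 1-D diffusion) rather than the coupled poro-visco-elasto-plastic system; damping and step-size choices are illustrative.

```python
import numpy as np

def pseudo_transient_diffusion(n=128, lam=1.0, tol=1e-8):
    """Pseudo-transient iteration: march a pseudo-time process until it
    converges to the *implicit* steady solution of -lam*u'' = 1 on (0,1)
    with u(0) = u(1) = 0 (a stand-in for one physical time step of a
    coupled system)."""
    dx = 1.0 / (n - 1)
    dtau = dx**2 / (2.1 * lam)          # pseudo-time step (stability bound)
    u = np.zeros(n)
    while True:
        # Residual of the target implicit equation at interior nodes
        residual = lam * (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2 + 1.0
        u[1:-1] += dtau * residual      # pseudo-transient update
        if np.abs(residual).max() < tol:
            return u                    # converged steady solution

u = pseudo_transient_diffusion()        # analytic answer: x*(1-x)/2
```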
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng
An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction, combining feature selection with feature extraction, was proposed. The feature-selection stage used eight algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method, and the dimension of the feature space was further reduced to 12. Classification of Chinese liquors was performed using a back-propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than that of LDA and BP-ANN. Finally, the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, and the classification rates were 98.75% and 100%, respectively.
Linear and Non-Linear Visual Feature Learning in Rat and Humans
Bossens, Christophe; Op de Beeck, Hans P.
2016-01-01
The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201
Milenković, Jana; Dalmış, Mehmet Ufuk; Žgajnar, Janez; Platel, Bram
2017-09-01
New ultrafast view-sharing sequences have enabled breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to be performed at high spatial and temporal resolution. The aim of this study is to evaluate the diagnostic potential of textural features that quantify the spatiotemporal changes of contrast-agent uptake in computer-aided diagnosis of malignant and benign breast lesions imaged with high spatial and temporal resolution DCE-MRI. The proposed approach is based on textural analysis quantifying the spatial variation of six dynamic features of the early-phase contrast-agent uptake of a lesion's largest cross-sectional area. The textural analysis is performed by means of the second-order gray-level co-occurrence matrix, the gray-level run-length matrix and the gray-level difference matrix. This yields 35 textural features to quantify the spatial variation of each of the six dynamic features, providing a feature set of 210 features in total. The proposed feature set is evaluated based on receiver operating characteristic (ROC) curve analysis in a cross-validation scheme for random forests (RF) and two support vector machine classifiers, with linear and radial basis function (RBF) kernels. Evaluation is done on a dataset with 154 breast lesions (83 malignant and 71 benign) and compared to a previous approach based on 3D morphological features and the average and standard deviation of the same dynamic features over the entire lesion volume, as well as their average for the smaller region of the strongest uptake rate. The area under the ROC curve (AUC) obtained by the proposed approach with the RF classifier was 0.8997, which was significantly higher (P = 0.0198) than the performance achieved by the previous approach (AUC = 0.8704) on the same dataset. Similarly, the proposed approach obtained significantly higher results for both SVM classifiers, with RBF (P = 0.0096) and linear kernels (P = 0.0417), achieving AUCs of 0.8876 and 0.8548, respectively, compared to the previous approach's AUC values of 0.8562 and 0.8311. The proposed approach based on 2D textural features quantifying spatiotemporal changes of the contrast-agent uptake significantly outperforms the previous approach based on 3D morphology and dynamic analysis in differentiating malignant and benign breast lesions, showing its potential to aid clinical decision making. © 2017 American Association of Physicists in Medicine.
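A small sketch of the second-order (GLCM) part of such a texture pipeline using scikit-image; the quantization to 32 gray levels and the chosen offsets are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture(dynamic_feature_map, levels=32):
    """Second-order texture of one dynamic-uptake map (a lesion slice):
    quantize to gray levels, then derive co-occurrence statistics."""
    edges = np.linspace(dynamic_feature_map.min(),
                        dynamic_feature_map.max(), levels)
    img = (np.digitize(dynamic_feature_map, edges) - 1).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0.0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}
```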
Implications of random variation in the Stand Prognosis Model
David A. Hamilton
1991-01-01
Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...
Intramuscular Pressure Measurement During Locomotion in Humans
NASA Technical Reports Server (NTRS)
Ballard, Ricard E.
1996-01-01
To assess the usefulness of intramuscular pressure (IMP) measurement for studying muscle function during gait, IMP was recorded in the soleus and tibialis anterior muscles of ten volunteers during treadmill walking and running using transducer-tipped catheters. Soleus IMP exhibited single peaks during the late-stance phase of walking (181 +/- 69 mmHg, mean +/- S.E.) and running (269 +/- 95 mmHg). Tibialis anterior IMP showed a biphasic response, with the largest peak (90 +/- 15 mmHg during walking and 151 +/- 25 mmHg during running) occurring shortly after heel strike. IMP magnitude increased with gait speed in both muscles. Linear regression of soleus IMP against ankle joint torque obtained by a dynamometer in two subjects produced linear relationships (r = 0.97). Application of these relationships to IMP data yielded estimated peak soleus moment contributions of 0.95-1.65 N·m/kg during walking and 1.43-2.70 N·m/kg during running. IMP results from local muscle tissue deformations caused by muscle force development and thus provides a direct, practical index of muscle function during locomotion in humans.
Leg intramuscular pressures during locomotion in humans
NASA Technical Reports Server (NTRS)
Ballard, R. E.; Watenpaugh, D. E.; Breit, G. A.; Murthy, G.; Holley, D. C.; Hargens, A. R.
1998-01-01
To assess the usefulness of intramuscular pressure (IMP) measurement for studying muscle function during gait, IMP was recorded in the soleus and tibialis anterior muscles of 10 volunteers during treadmill walking and running by using transducer-tipped catheters. Soleus IMP exhibited single peaks during late-stance phase of walking [181 +/- 69 (SE) mmHg] and running (269 +/- 95 mmHg). Tibialis anterior IMP showed a biphasic response, with the largest peak (90 +/- 15 mmHg during walking and 151 +/- 25 mmHg during running) occurring shortly after heel strike. IMP magnitude increased with gait speed in both muscles. Linear regression of soleus IMP against ankle joint torque obtained by a dynamometer produced linear relationships (n = 2, r = 0.97 for both). Application of these relationships to IMP data yielded estimated peak soleus moment contributions of 0.95-1.65 N . m/kg during walking, and 1.43-2.70 N . m/kg during running. Phasic elevations of IMP during exercise are probably generated by local muscle tissue deformations due to muscle force development. Thus profiles of IMP provide a direct, reproducible index of muscle function during locomotion in humans.
[Facts and fiction about running shoes].
Schelde, Jacob
2012-11-26
Running as a means of exercise is becoming increasingly popular, but the rate of injury among runners is very high. To prevent running-related injuries, much attention has been given to the running shoe and its construction, particularly its shock-absorbing capabilities and motion-control features. It is recommended that running shoes be purchased based on the runner's medial arch height and degree of pronation, and that the shoes be changed frequently as their shock-absorbing capabilities decrease with usage. Randomized controlled trials and other studies in the scientific literature do not support these recommendations.
Performance of a supercharged direct-injection stratified-charge rotary combustion engine
NASA Technical Reports Server (NTRS)
Bartrand, Timothy A.; Willis, Edward A.
1990-01-01
A zero-dimensional thermodynamic performance computer model for direct-injection stratified-charge rotary combustion engines was modified and run for a single rotor supercharged engine. Operating conditions for the computer runs were a single boost pressure and a matrix of speeds, loads and engine materials. A representative engine map is presented showing the predicted range of efficient operation. After discussion of the engine map, a number of engine features are analyzed individually. These features are: heat transfer and the influence insulating materials have on engine performance and exhaust energy; intake manifold pressure oscillations and interactions with the combustion chamber; and performance losses and seal friction. Finally, code running times and convergence data are presented.
A two-dimensional graphing program for the Tektronix 4050-series graphics computers
Kipp, K.L.
1983-01-01
A refined, two-dimensional graph-plotting program was developed for use on Tektronix 4050-series graphics computers. Important features of this program include: any combination of logarithmic and linear axes, optional automatic scaling and numbering of the axes, multiple-curve plots, character or drawn-symbol point plotting, optional cartridge-tape data input and plot-format storage, optional spline fitting for smooth curves, and built-in data-editing options. The program is run while the Tektronix is not connected to any large auxiliary computer, although data from files on an auxiliary computer can easily be transferred to a data cartridge for later plotting. The user is led through the plot-construction process by a series of questions and requests for data input. Five example plots are presented to illustrate program capability and the sequence of program operation. (USGS)
2018-03-22
...generators by not running them as often and reducing wet-stacking. Force Projection: if the IPDs of the microgrid replace, but don't add to, the number... decrease generator run time, reduce fuel consumption, enable silent operation, and provide power redundancy for military applications. Important... it requires some failsafe features (run out of water, drive out of the sun). Integration was a challenge; a series of valves was needed to run this experiment.
The barefoot debate: can minimalist shoes reduce running-related injuries?
Rixe, Jeffrey A; Gallo, Robert A; Silvis, Matthew L
2012-01-01
Running has evolved throughout history from a necessary form of locomotion to an athletic and recreational pursuit. During this transition, our barefoot ancestors developed footwear. By the late 1970s, running popularity surged, and footwear manufacturers developed the modern running shoe. Despite new shoe technology and expert advice, runners still face high injury rates, which have yet to decline. Recently, "minimalist" running, marked by a soft forefoot strike and shorter, quicker strides, has become increasingly popular within the running community. Biomechanical studies have suggested that these features of barefoot-style running may lead to a reduction in injury rates. Once more outcomes-based research has been conducted, minimalist footwear and gait retraining may prove to be new methods for reducing injuries within the running population.
Effect of step width manipulation on tibial stress during running.
Meardon, Stacey A; Derrick, Timothy R
2014-08-22
Narrow step width has been linked to variables associated with tibial stress fracture. The purpose of this study was to evaluate the effect of step width on bone stresses using a standardized model of the tibia. Fifteen runners ran at their preferred 5-km running velocity in three conditions: preferred step width (PSW) and PSW ± 5% of leg length. Ten successful trials of force and 3-D motion data were collected. A combination of inverse dynamics, musculoskeletal modeling and beam theory was used to estimate stresses applied to the tibia using subject-specific anthropometrics and motion data. The tibia was modeled as a hollow ellipse. Multivariate analysis revealed that tibial stresses at the distal 1/3 of the tibia differed with step width manipulation (p=0.002). Compression on the posterior and medial aspects of the tibia was inversely related to step width, such that as step width increased, compression on the surface of the tibia decreased (linear trend p=0.036 and 0.003). Similarly, tension on the anterior surface of the tibia decreased as step width increased (linear trend p=0.029). Widening step width linearly reduced shear stress at all four sites (p<0.001 for all). The data from this study suggest that the stresses experienced by the tibia during running were influenced by step width when using a standardized model of the tibia. Wider step widths were generally associated with reduced loading of the tibia and may benefit runners at risk of, or experiencing, stress injury at the tibia, especially if they present with a crossover running style. Copyright © 2014 Elsevier Ltd. All rights reserved.
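The hollow-ellipse beam model admits a compact closed form; a sketch with our own variable names, not the authors' code (combined axial-plus-bending normal stress on a hollow elliptical cross-section):

```python
import numpy as np

def tibia_stress(f_axial, m_bend, y, ao, bo, ai, bi):
    """Beam-theory normal stress at height y from the centroid of a
    hollow-ellipse section (outer/inner semi-axes ao, bo / ai, bi;
    compression negative). Illustrative form of the model class named
    in the abstract, not the authors' exact implementation."""
    area = np.pi * (ao * bo - ai * bi)               # cross-sectional area
    i_xx = np.pi * (ao * bo**3 - ai * bi**3) / 4.0   # second moment of area
    return f_axial / area + m_bend * y / i_xx
```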
Galavis, Paulina E; Hollensen, Christian; Jallow, Ngoneh; Paliwal, Bhudatt; Jeraj, Robert
2010-10-01
Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45-60 minutes post-injection of 10 mCi of [(18)F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition, the raw PET data was reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using a threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The ranges of variation of the features were calculated with respect to the average value. The fifty textural features were classified based on the range of variation into three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second-order feature) and low-gray-level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high-gray-level run emphasis, gray-level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray-level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low levels of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations and therefore could not be considered good candidates for tumor segmentation.
Quantum algorithm for linear regression
NASA Astrophysics Data System (ADS)
Wang, Guoming
2017-07-01
We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model and can handle data sets with nonsparse design matrices. It runs in time poly(log₂(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary; thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding the fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
Simulation of uphill/downhill running on a level treadmill using additional horizontal force.
Gimenez, Philippe; Arnal, Pierrick J; Samozino, Pierre; Millet, Guillaume Y; Morin, Jean-Benoit
2014-07-18
Tilting treadmills allow a convenient study of biomechanics during uphill/downhill running, but they are not commonly available, and tilting force-measuring treadmills are rarer still. The aim of the present study was to compare uphill/downhill running on a treadmill (inclination of ±8%) with running on a level treadmill using additional backward or forward pulling forces to simulate the effect of gravity. This comparison focused specifically on the energy cost of running, stride frequency (SF), electromyographic activity (EMG), leg and foot angles at foot strike, and ground impact shock. The main results are that SF, impact shock, and leg and foot angle parameters were very similar and significantly correlated between the two methods, with the intercept and slope of the linear regression not differing significantly from zero and unity, respectively. The correlation of oxygen uptake (V̇O2) data between the methods was not significant during uphill running (r=0.42; P>0.05). V̇O2 data were correlated during downhill running (r=0.74; P<0.01), but there was a significant difference between the methods (bias = -2.51 ± 1.94 ml·min⁻¹·kg⁻¹). Linear regressions for the EMG of vastus lateralis, biceps femoris, gastrocnemius lateralis, soleus and tibialis anterior were not different from the identity line, but the systematic bias was elevated for this parameter. In conclusion, this method seems appropriate for the study of SF, leg and foot angle, and impact shock parameters, but is less applicable to physiological variables (EMG and energy cost) during uphill/downhill running when using a tilting force-measuring treadmill is not possible. Copyright © 2014 Elsevier Ltd. All rights reserved.
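The size of the substitute pulling force follows from elementary statics; a tiny sketch, assuming the force is matched to the along-slope component of gravity (the abstract does not state the exact matching rule):

```python
import numpy as np

def gravity_component_along_slope(mass_kg, grade=0.08, g=9.81):
    """Pulling force (N) that stands in for the along-slope gravity
    component on a +/-8% grade, as in the protocol above."""
    theta = np.arctan(grade)          # 8% grade ~ 4.6 degrees
    return mass_kg * g * np.sin(theta)

print(gravity_component_along_slope(70.0))  # ~54.8 N for a 70 kg runner
```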
Smart maintenance of riverbanks using a standard data layer and Augmented Reality
NASA Astrophysics Data System (ADS)
Pierdicca, Roberto; Frontoni, Emanuele; Zingaretti, Primo; Mancini, Adriano; Malinverni, Eva Savina; Tassetti, Anna Nora; Marcheggiani, Ernesto; Galli, Andrea
2016-10-01
Linear buffer strips (BS) along watercourses are commonly adopted to reduce run-off, the accumulation of bank-top sediments, and the leaking of pesticides into fresh waters, which strongly increases water pollution. However, monitoring their condition is a difficult task because they are scattered over wide rural areas. This work demonstrates the benefits of using a standard data layer and Augmented Reality (AR) in watershed control and outlines the guidelines of a novel approach for the health check of linear BS. We designed a mobile environmental monitoring system for smart maintenance of riverbanks by embedding AR technology within a Geographical Information System (GIS). From the technological point of view, the system's architecture consists of a cloud-based service for data sharing, using a standard data layer, and of a mobile device provided with a GPS-based AR engine for augmented data visualization. The proposed solution aims to ease the overall inspection process by reducing the time required to run a survey. Indeed, ordinary operational surveys are usually performed with fieldwork based on classical digitized maps alone. Our application enriches inspections by superimposing information on the device screen from the same point of view as the camera, providing an intuitive visualization of buffer strip locations. This way, the inspection officer can quickly and dynamically access relevant information, overlaying geographic features, comments and other content in real time. The solution has been tested in fieldwork to prove to what extent this cutting-edge technology contributes to effective monitoring over large territorial settings. The aim is to encourage officers, land managers and practitioners toward more effective monitoring and management practices.
Fukuoka, Yoshiyuki; Horiuchi, Masahiro
2017-01-01
The energy cost of transport per unit distance (CoT; J·kg⁻¹·km⁻¹) follows a U-shaped curve in walking and a linear relationship in running as a function of gait speed (v; km·h⁻¹). There exists an intersection between the U-shaped and linear CoT-v relationships, termed the energetically optimal transition speed (EOTS; km·h⁻¹). The combined effects of gradient and moderate normobaric hypoxia (15.0% O2) were investigated during walking and running at the EOTS in fifteen young males. The CoT values were determined at eight walking speeds (2.4-7.3 km·h⁻¹) and four running speeds (7.3-9.4 km·h⁻¹) on level and gradient slopes (±5%) at normoxia and hypoxia. Since an alteration of tibialis anterior (TA) activity is known to act as a trigger for gait transition, electromyograms were recorded from the TA and its antagonists (gastrocnemius medialis, GM, and gastrocnemius lateralis, GL) for about 30 steps during walking and running at the individual EOTS in each experimental condition. The mean power frequency (MPF; Hz) of each muscle was quantified to evaluate alterations of the muscle fiber recruitment pattern. The EOTS was not significantly different between normoxia and hypoxia on any slope (ranging from 7.412 to 7.679 km·h⁻¹ at normoxia and 7.516 to 7.678 km·h⁻¹ at hypoxia), due to upward shifts (enhanced metabolic rate) of both the U-shaped and linear CoT-v relationships at hypoxia. GM, but not GL, activated more when switching from walking to running on level and gentle downhill slopes. Significant decreases in muscular activity and/or MPF were observed only in the TA when switching the gait pattern. Taken together, the EOTS was not slowed by moderate hypoxia in the population of this study. Muscular activities of the lower leg extremities and their muscle fiber recruitment patterns depend on the gradient when walking and running at the EOTS. PMID:28301525
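The EOTS is simply the crossing point of the two CoT curves; a toy numeric illustration with invented coefficients, chosen only so the crossing lands near the reported ~7.5 km·h⁻¹, not fitted to the study's data:

```python
import numpy as np

# Toy CoT curves (J·kg⁻¹·km⁻¹) vs speed v (km·h⁻¹): a U-shaped quadratic
# for walking and a flat line for running; coefficients are illustrative.
walk = np.poly1d([15.0, -140.0, 600.0])   # CoT_walk(v) = 15v² - 140v + 600
run_cot = 390.0                           # CoT_run(v) ~ constant

# EOTS: speed where the walking parabola crosses the running line
roots = np.real((walk - run_cot).roots)
eots = roots[(roots > 4.0) & (roots < 10.0)]
print(eots)   # ~7.5 km/h, near the transition speeds reported above
```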
Relationship between age and elite marathon race time in world single age records from 5 to 93 years
2014-01-01
Background The aims of the study were (i) to investigate the relationship between elite marathon race times and age in 1-year intervals by using the world single age records in marathon running from 5 to 93 years and (ii) to evaluate the sex difference in elite marathon running performance with advancing age. Methods World single age records in marathon running in 1-year intervals for women and men were analysed for changes across age using linear and non-linear regression analyses. Results The relationship between elite marathon race time and age was non-linear (i.e. polynomial regression, 4th degree) for both women and men. The curve was U-shaped, with performance improving from 5 to ~20 years. From 5 to ~15 years, boys and girls performed very similarly. Between ~20 and ~35 years, performance was quite linear, but it started to decrease at ~35 years in a curvilinear manner with increasing age in both women and men. The sex difference increased non-linearly (i.e. polynomial regression, 7th degree) from 5 to ~20 years, remained unchanged at ~20 min from ~20 to ~50 years, and increased thereafter. The sex difference was lowest (7.5%, 10.5 min) at the age of 49 years. Conclusion Elite marathon race times improved from 5 to ~20 years, changed little between ~20 and ~35 years, and started to increase at ~35 years in a curvilinear manner with increasing age in both women and men. The sex difference in elite marathon race time increased non-linearly and was lowest at the age of ~49 years. PMID:25120915
Features in visual search combine linearly
Pramod, R. T.; Arun, S. P.
2014-01-01
Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocals of reaction times) for their ability to predict multiple-feature searches. Multiple-feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features, in other words, that they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly, and this model outperformed all other models. Thus, the length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratios should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly, and we demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
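The co-activation model amounts to a linear regression on reciprocal reaction times; a toy sketch with hypothetical numbers, purely to show the form of the test:

```python
import numpy as np

# Reciprocal search times (1/RT, a "search rate") for single-feature
# targets, and for targets differing in both features; values invented.
rate_intensity = np.array([1.2, 0.9, 1.5, 1.1])
rate_length    = np.array([0.8, 1.0, 0.7, 1.3])
rate_both      = np.array([2.1, 1.8, 2.3, 2.5])

# Fit rate_both ~ w1*rate_intensity + w2*rate_length by least squares
A = np.column_stack([rate_intensity, rate_length])
w, *_ = np.linalg.lstsq(A, rate_both, rcond=None)
print(w)  # near-unit weights would indicate linear feature summation
```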
Can Linear Superiorization Be Useful for Linear Optimization Problems?
Censor, Yair
2017-01-01
Linear superiorization considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are (i) Does linear superiorization provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? and (ii) How does linear superiorization fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: “yes” and “very well”, respectively. PMID:29335660
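A minimal sketch of a linear superiorization loop for constraints Ax ≤ b: perturb along −c with summable step sizes, then apply one sweep of a projection-based feasibility-seeking operator. The step-size schedule, iteration count, and the specific AMS-style projection sweep are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def linear_superiorization(A, b, c, x0, n_iter=500, beta0=1.0, decay=0.995):
    """Steer a feasibility-seeking iteration for Ax <= b toward reduced
    (not necessarily minimal) values of the linear target c.x."""
    x = x0.astype(float)
    beta = beta0
    v = -c / np.linalg.norm(c)          # nonascending direction for c.x
    for _ in range(n_iter):
        x = x + beta * v                # superiorization perturbation
        beta *= decay                   # summable (geometric) step sizes
        for a_i, b_i in zip(A, b):      # projection sweep over constraints
            viol = a_i @ x - b_i
            if viol > 0.0:              # project onto violated half-space
                x = x - viol * a_i / (a_i @ a_i)
    return x
```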
NASA Astrophysics Data System (ADS)
Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan
2018-04-01
Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics, and the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments based on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.
RICOR K527 highly reliable linear cooler: applications and model overview
NASA Astrophysics Data System (ADS)
Riabzev, Sergey; Nachman, Ilan; Levin, Eli; Perach, Adam; Vainshtein, Igor; Gover, Dan
2017-05-01
The K527 linear cooler was developed to meet the reliability, cooling-power and versatility requirements of a wide range of applications such as hand-held, 24/7 and MWS systems. In recent years the cooler has been incorporated into a variety of systems. Some of these systems can be sensitive to vibrations induced by the cooler; to reduce those vibrations significantly, a Tuned Dynamic Absorber (TDA) was added to the cooler. Other systems, such as the MWS type, are not sensitive to vibrations but require a robust cooler to meet demanding environmental vibration and temperature conditions. Various mounting interfaces are therefore designed to meet system requirements. The latest K527 version was designed to be integrated with the K508 cold finger, making it compatible with standard detectors that are already designed and available for the K508 cooler type. The reliability of the cooler is a high priority. In order to meet the 30,000 working-hour target, special design features were implemented. Eight K527 coolers have passed 19,360 working hours without degradation and are still running according to our expectations.
Faisal, Faisal; Tursoy, Turgut; Berk, Niyazi
2018-04-01
This study investigates the relationship between Internet usage, financial development, economic growth, capital and electricity consumption using quarterly data from 1993Q1 to 2014Q4. The integration order of the series is analysed using the structural break unit root test. The ARDL bounds test for cointegration in addition to the Bayer-Hanck (2013) combined cointegration test is applied to analyse the existence of cointegration among the variables. The study found strong evidence of a long-run relationship between the variables. The long-run results under the ARDL framework confirm the existence of an inverted U-shaped relationship between financial development and electricity consumption, not only in the long-run, but also in the short-run. The study also confirms the existence of a U-shaped relationship between Internet usage and electricity consumption; however, the effect is insignificant. Additionally, the influence of trade, capital and economic growth is examined in both the long run and short run (ARDL-ECM). Finally, the results of asymmetric causality suggest a positive shock in electricity consumption that has a positive causal impact on Internet usage. The authors recommend that the Turkish Government should direct financial institutions to moderate the investment in the ICT sector by advancing credits at lower cost for purchasing energy-efficient technologies. In doing so, the Turkish Government can increase productivity in order to achieve sustainable growth, while simultaneously reducing emissions to improve environmental quality.
Lockie, Robert G; Schultz, Adrian B; Callaghan, Samuel J; Jeffriess, Matthew D; Berry, Simon P
2013-01-01
Field sport coaches must use reliable and valid tests to assess change-of-direction speed in their athletes. Few tests feature linear sprinting with acute change-of-direction maneuvers. The Change-of-Direction and Acceleration Test (CODAT) was designed to assess field sport change-of-direction speed, and includes a linear 5-meter (m) sprint, 45° and 90° cuts, 3-m sprints to the left and right, and a linear 10-m sprint. This study analyzed the reliability and validity of this test, through comparisons to 20-m sprint (0-5, 0-10, 0-20 m intervals) and Illinois agility run (IAR) performance. Eighteen Australian footballers (age = 23.83 ± 7.04 yrs; height = 1.79 ± 0.06 m; mass = 85.36 ± 13.21 kg) were recruited. Following familiarization, subjects completed the 20-m sprint, CODAT, and IAR in 2 sessions, 48 hours apart. Intra-class correlation coefficients (ICC) assessed relative reliability. Absolute reliability was analyzed through paired samples t-tests (p ≤ 0.05) determining between-session differences. Typical error (TE), coefficient of variation (CV), and differences between the TE and smallest worthwhile change (SWC) also assessed absolute reliability and test usefulness. For the validity analysis, Pearson's correlations (p ≤ 0.05) analyzed between-test relationships. Results showed no between-session differences for any test (p = 0.19-0.86). CODAT time averaged ~6 s, and the ICC and CV equaled 0.84 and 3.0%, respectively. The homogeneous sample of Australian footballers meant that the CODAT's TE (0.19 s) exceeded the usual 0.2 x standard deviation (SD) SWC (0.10 s). However, the CODAT is capable of detecting moderate performance changes (SWC calculated as 0.5 x SD = 0.25 s). There was a near perfect correlation between the CODAT and IAR (r = 0.92), and very large correlations with the 20-m sprint (r = 0.75-0.76), suggesting that the CODAT was a valid change-of-direction speed test. Due to movement specificity, the CODAT has value for field sport assessment. Key points: The change-of-direction and acceleration test (CODAT) was designed specifically for field sport athletes from specific speed research, and data derived from time-motion analyses of sports such as rugby union, soccer, and Australian football. The CODAT features a linear 5-meter (m) sprint, 45° and 90° cuts and 3-m sprints to the left and right, and a linear 10-m sprint. The CODAT was found to be a reliable change-of-direction speed assessment when considering intra-class correlations between two testing sessions, and the coefficient of variation between trials. A homogeneous sample of Australian footballers resulted in absolute reliability limitations when considering differences between the typical error and smallest worthwhile change. However, the CODAT will detect moderate (0.5 times the test's standard deviation) changes in performance. The CODAT correlated with the Illinois agility run, highlighting that it does assess change-of-direction speed. There were also significant relationships with short sprint performance (i.e. 0-5 m and 0-10 m), demonstrating that linear acceleration is assessed within the CODAT, without the extended duration and therefore metabolic limitations of the IAR. Indeed, the average duration of the test (~6 seconds) is field sport-specific. Therefore, the CODAT could be used as an assessment of change-of-direction speed in field sport athletes.
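The reliability quantities used above can be reproduced in a few lines; the session times below are hypothetical stand-ins for two CODAT sessions, and the 0.2 x SD and 0.5 x SD thresholds follow the abstract.

```python
import numpy as np

# Hypothetical CODAT times (s) for the same athletes in two sessions.
session1 = np.array([5.91, 6.12, 6.35, 5.78, 6.02, 6.21])
session2 = np.array([5.85, 6.20, 6.28, 5.82, 6.10, 6.15])

diff = session2 - session1
typical_error = diff.std(ddof=1) / np.sqrt(2)           # TE = SD(diff) / sqrt(2)
all_times = np.concatenate([session1, session2])
cv = 100 * typical_error / all_times.mean()             # CV as % of mean time
sd = all_times.std(ddof=1)
swc_small = 0.2 * sd                                    # smallest worthwhile change
swc_moderate = 0.5 * sd                                 # moderate-change threshold

print(f"TE={typical_error:.3f}s CV={cv:.1f}% SWC(0.2SD)={swc_small:.3f}s "
      f"SWC(0.5SD)={swc_moderate:.3f}s "
      f"detects_moderate={'yes' if typical_error < swc_moderate else 'no'}")
```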
Monitoring Temperature and Fan Speed Using Ganglia and Winbond Chips
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaffrey, Cattie; /SLAC
2006-09-27
Effective monitoring is essential to keep a large group of machines, like the ones at Stanford Linear Accelerator Center (SLAC), up and running. SLAC currently uses the Ganglia Monitoring System to observe about 2000 machines, analyzing metrics like CPU usage and I/O rate. However, metrics essential to machine hardware health, such as temperature and fan speed, are not being monitored. Many machines have a Winbond w83782d chip which monitors three temperatures, two of which come from the dual CPUs, and returns the information when the sensors command is invoked. Ganglia also provides a feature, gmetric, that allows users to monitor their own metrics and incorporate them into the monitoring system. The programming language Perl was chosen to implement a script that invokes the sensors command, extracts the temperature and fan speed information, and calls gmetric with the appropriate arguments. Two machines were used to test the script; the two CPUs on each machine run at about 65 Celsius, which is well within the operating temperature range (the maximum safe temperature is 77-82 Celsius for the Pentium III processors being used). Installing the script on all machines with a Winbond w83782d chip allows the SLAC Scientific Computing and Computing Services group (SCCS) to better evaluate current cooling methods.
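The described workflow (parse the sensors output, push each reading via gmetric) might be sketched as follows. The original was written in Perl; this Python version covers temperatures only, and the regex for sensor labels and the exact gmetric option usage are assumptions to be checked against the target hosts.

```python
import re
import subprocess

# Read lm-sensors output and report CPU temperatures to Ganglia via gmetric.
# Sensor label patterns vary by chip; adjust the regex for the actual output.
out = subprocess.run(["sensors"], capture_output=True, text=True).stdout

for label, value in re.findall(r"(temp\d+|CPU\d* Temp):\s*\+?([\d.]+)", out):
    metric = label.replace(" ", "_").lower()
    subprocess.run([
        "gmetric",
        f"--name={metric}",
        f"--value={value}",
        "--type=float",
        "--units=Celsius",
    ], check=True)   # raises if gmetric is missing or rejects the metric
```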
Searleman, Adam C.; Iliuk, Anton B.; Collier, Timothy S.; Chodosh, Lewis A.; Tao, W. Andy; Bose, Ron
2014-01-01
Altered protein phosphorylation is a feature of many human cancers that can be targeted therapeutically. Phosphopeptide enrichment is a critical step for maximizing the depth of phosphoproteome coverage by MS, but remains challenging for tissue specimens because of their high complexity. We describe the first analysis of a tissue phosphoproteome using polymer-based metal ion affinity capture (PolyMAC), a nanopolymer that has excellent yield and specificity for phosphopeptide enrichment, on a transgenic mouse model of HER2-driven breast cancer. By combining phosphotyrosine immunoprecipitation with PolyMAC, 411 unique peptides with 139 phosphotyrosine, 45 phosphoserine, and 29 phosphothreonine sites were identified from five LC-MS/MS runs. Combining reverse phase liquid chromatography fractionation at pH 8.0 with PolyMAC identified 1571 unique peptides with 1279 phosphoserine, 213 phosphothreonine, and 21 phosphotyrosine sites from eight LC-MS/MS runs. Linear motif analysis indicated that many of the phosphosites correspond to well-known phosphorylation motifs. Analysis of the tyrosine phosphoproteome with the Drug Gene Interaction database uncovered a network of potential therapeutic targets centered on Src family kinases with inhibitors that are either FDA-approved or in clinical development. These results demonstrate that PolyMAC is well suited for phosphoproteomic analysis of tissue specimens. PMID:24723360
Classification of speech dysfluencies using LPC based parameterization techniques.
Hariharan, M; Chee, Lim Sin; Ai, Ooi Chia; Yaacob, Sazali
2012-06-01
The goal of this paper is to discuss and compare three feature extraction methods: Linear Predictive Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC) and Weighted Linear Prediction Cepstral Coefficients (WLPCC) for recognizing stuttered events. Speech samples from the University College London Archive of Stuttered Speech (UCLASS) were used for our analysis. The stuttered events were identified through manual segmentation and were used for feature extraction. Two simple classifiers, k-nearest neighbour (kNN) and Linear Discriminant Analysis (LDA), were employed for speech dysfluency classification. A conventional validation method was used for testing the reliability of the classifier results. The effects of different frame lengths, percentages of overlap, the coefficient of the first-order pre-emphasizer, and different prediction orders p were discussed. The speech dysfluency classification accuracy was found to improve by applying statistical normalization before feature extraction. The experimental investigation elucidated that LPC, LPCC and WLPCC features can all be used for identifying stuttered events, with WLPCC features slightly outperforming LPCC and LPC features.
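As a concrete illustration of the first feature extraction step, here is a minimal LPC implementation using the autocorrelation method and the Levinson-Durbin recursion. Frame length, prediction order and windowing are assumptions, and the cepstral (LPCC/WLPCC) conversions are omitted.

```python
import numpy as np

def lpc(frame, order=12):
    """LPC coefficients via the autocorrelation method and the
    Levinson-Durbin recursion (a sketch of the feature extraction step)."""
    x = frame * np.hamming(len(frame))                  # window the frame
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a, err = np.zeros(order + 1), r[0]
    a[0] = 1.0
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err      # reflection coefficient
        a[1:i] += k * a[i - 1:0:-1]                     # symmetric update
        a[i] = k
        err *= 1.0 - k * k                              # prediction error power
    return a[1:]                                        # order-p feature vector

# Toy frame: a sinusoid plus noise standing in for a segmented speech event.
rng = np.random.default_rng(0)
frame = np.sin(0.1 * np.arange(256)) + 0.05 * rng.standard_normal(256)
print(lpc(frame)[:4])
```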
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krafft, S; Court, L; Briere, T
2014-06-15
Purpose: Radiation-induced lung damage (RILD) is an important dose-limiting toxicity for patients treated with radiation therapy. Scoring systems for RILD are subjective and limit our ability to find robust predictors of toxicity. We investigate the dose- and time-related response of texture-based lung CT image features that serve as potential quantitative measures of RILD. Methods: Pre- and post-RT diagnostic imaging studies were collected for retrospective analysis of 21 patients treated with photon or proton radiotherapy for NSCLC. Total lung and selected isodose contours (0-5, 5-15, 15-25 Gy, etc.) were deformably registered from the treatment planning scan to the pre-RT and available follow-up CT studies for each patient. A CT image analysis framework was utilized to extract 3698 unique texture-based features (including co-occurrence and run length matrices) for each region of interest defined by the isodose contours and the total lung volume. Linear mixed models were fit to determine the relationship between feature change (relative to pre-RT), planned dose and time post-RT. Results: Seventy-three follow-up CT scans from 21 patients (median: 3 scans/patient) were analyzed to describe CT image feature change. At the p=0.05 level, dose affected feature change in 2706 (73.1%) of the available features. Similarly, time affected feature change in 408 (11.0%) of the available features. Both dose and time were significant predictors of feature change in a total of 231 (6.2%) of the extracted image features. Conclusion: Characterizing the dose- and time-related response of a large number of texture-based CT image features is the first step toward identifying objective measures of lung toxicity necessary for assessment and prediction of RILD. There is evidence that numerous features are sensitive to both the radiation dose and time after RT. Beyond characterizing feature response, further investigation is warranted to determine the utility of these features as surrogates of clinically significant lung injury.
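A hedged sketch of the statistical step: a linear mixed model of feature change against planned dose and time post-RT, with a per-patient random intercept. The synthetic data frame and its column names are assumptions; statsmodels' mixedlm is one way to fit such a model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the study data: relative change of one texture
# feature, per isodose region and follow-up scan (all names are assumed).
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "patient": rng.integers(0, 21, n),          # 21 patients
    "dose": rng.choice([2.5, 10.0, 20.0, 30.0, 45.0], n),  # isodose bin (Gy)
    "months_post_rt": rng.uniform(1, 24, n),
})
df["feature_change"] = (0.8 * df.dose + 0.5 * df.months_post_rt
                        + rng.normal(0, 5, n))

# Fixed effects for dose and time, random intercept per patient.
model = smf.mixedlm("feature_change ~ dose + months_post_rt", df,
                    groups=df["patient"])
print(model.fit().summary())
```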
OPC care-area feedforwarding to MPC
NASA Astrophysics Data System (ADS)
Dillon, Brian; Peng, Yi-Hsing; Hamaji, Masakazu; Tsunoda, Dai; Muramatsu, Tomoyuki; Ohara, Shuichiro; Zou, Yi; Arnoux, Vincent; Baron, Stanislas; Zhang, Xiaolong
2016-10-01
Demand for mask process correction (MPC) is growing for leading-edge process nodes. MPC was originally intended to correct CD linearity for narrow assist features that are difficult to resolve on a photomask without any correction, but it has been extended to main features as process nodes have shrunk. As past papers have observed, MPC improves photomask fidelity. Using advanced shape and dose corrections could give further improvements, especially at line-ends and corners. However, there is a dilemma in using such advanced corrections at the full-mask level, because they increase data volume and run time. In addition, write time on variable shaped beam (VSB) writers also increases as the number of shots increases. An optical proximity correction (OPC) care-area defines circuit design locations that require high mask fidelity under mask writing process variations such as energy fluctuation. It is useful for MPC to switch its correction strategy and permit the use of advanced mask correction techniques in those local care-areas where they provide maximum wafer benefits. The use of mask correction techniques tailored to the localized post-OPC design can keep data volume, run time, and write time at the desired levels. ASML Brion and NCS have jointly developed a method to feedforward the care-area information from Tachyon LMC to NDE-MPC to provide real benefit for improving both mask writing and wafer printing quality. This paper explains the details of OPC care-area feedforwarding to MPC between ASML Brion and NCS, and shows the results. In addition, improvements on mask and wafer simulations are also shown. The results indicate that the worst process variation (PV) bands are reduced by up to 37% for a 10 nm technology node metal case.
A Method for Generating Reduced-Order Linear Models of Multidimensional Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1998-01-01
Simulation of high speed propulsion systems may be divided into two categories, nonlinear and linear. The nonlinear simulations are usually based on multidimensional computational fluid dynamics (CFD) methodologies and tend to provide high resolution results that show the fine detail of the flow. Consequently, these simulations are large, numerically intensive, and run much slower than real-time. The linear simulations are usually based on large lumping techniques that are linearized about a steady-state operating condition. These simplistic models often run at or near real-time but do not always capture the detailed dynamics of the plant. Under a grant sponsored by the NASA Lewis Research Center, Cleveland, Ohio, a new method has been developed that can be used to generate improved linear models for control design from multidimensional steady-state CFD results. This CFD-based linear modeling technique provides a small perturbation model that can be used for control applications and real-time simulations. It is important to note the utility of the modeling procedure: all that is needed to obtain a linear model of the propulsion system is the geometry and steady-state operating conditions from a multidimensional CFD simulation or experiment. This research represents a first step in establishing a bridge between the controls discipline and the CFD discipline so that the control engineer can effectively use multidimensional CFD results in control system design and analysis.
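The generic idea of extracting a small-perturbation linear model about a steady state can be sketched with finite differences. This is not the paper's CFD-specific procedure; the toy plant and the central-difference step are assumptions for illustration.

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Build a small-perturbation model dx' = A dx + B du about a steady
    state f(x0, u0) = 0 by central finite differences (a generic sketch)."""
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Toy nonlinear "plant" standing in for steady-state CFD residuals;
# f([1, 1], [1]) = 0, so (x0, u0) below is a steady operating condition.
f = lambda x, u: np.array([-x[0] * x[1] + u[0], x[0] - x[1] ** 2])
A, B = linearize(f, np.array([1.0, 1.0]), np.array([1.0]))
print(A, B, sep="\n")
```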
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, W; Riyahi, S; Lu, W
Purpose: Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (radiation pneumonitis and radiation fibrosis). For these features to be clinically useful, they need to be relatively invariant (robust) to tumor size and not correlated with normal lung volume. Methods: The free-breathing CTs of 14 lung SBRT patients were studied. Different sizes of GTVs were simulated with spheres placed at the upper lobe and lower lobe, respectively, in the normal lung (contralateral to tumor). 27 texture features (9 from the intensity histogram, 8 from the grey-level co-occurrence matrix [GLCM] and 10 from the grey-level run-length matrix [GLRM]) were extracted from [normal lung − GTV]. To measure the variability of a feature F, the relative difference D = |F_ref − F_sim|/F_ref × 100% was calculated, where F_ref was for the entire normal lung and F_sim was for [normal lung − GTV]. A feature was considered robust if the largest non-outlier (Q3 + 1.5*IQR) D was less than 5%, and considered not correlated with normal lung volume when their Pearson correlation was lower than 0.50. Results: Only 11 features were robust. All first-order intensity-histogram features (mean, max, etc.) were robust, while most higher-order features (skewness, kurtosis, etc.) were unrobust. Only two of the GLCM and four of the GLRM features were robust. Larger GTVs resulted in greater feature variation; this was particularly true for unrobust features. All robust features were uncorrelated with normal lung volume, while three unrobust features showed high correlation. Excessive variations were observed in two low grey-level run features and were later identified to come from one patient with local lung disease (atelectasis) in the normal lung. There was no dependence on GTV location. Conclusion: We identified 11 robust normal lung CT texture features that can be further examined for the prediction of radiation-induced lung disease. Interestingly, low grey-level run features identified normal lung disease. This work was supported in part by National Cancer Institute Grant R01CA172638.
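The robustness screen described in the Methods reduces to a few lines; the synthetic feature values below are assumptions standing in for the 14-patient data.

```python
import numpy as np
from scipy.stats import pearsonr

def is_robust(f_ref, f_sim, threshold=5.0):
    """Robustness test from the abstract: the largest non-outlier relative
    difference D (outliers = values above Q3 + 1.5*IQR) must be under 5%."""
    d = np.abs(f_ref - f_sim) / np.abs(f_ref) * 100.0
    q1, q3 = np.percentile(d, [25, 75])
    non_outlier = d[d <= q3 + 1.5 * (q3 - q1)]
    return non_outlier.max() < threshold

# Hypothetical values of one texture feature for 14 patients: whole normal
# lung (reference) vs. normal lung minus a simulated GTV.
rng = np.random.default_rng(2)
f_ref = rng.uniform(10, 20, 14)
f_sim = f_ref * rng.normal(1.0, 0.01, 14)
lung_volume = rng.uniform(2000, 6000, 14)      # cm^3, illustrative

r, _ = pearsonr(f_ref, lung_volume)
print(f"robust={is_robust(f_ref, f_sim)}, "
      f"|r| with volume={abs(r):.2f} (must be < 0.50)")
```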
g8: Physics with Linearly-Polarized Photons in Hall B of JLab
NASA Astrophysics Data System (ADS)
Cole, Philip L.
2001-11-01
The set of experiments forming the g8 run took place this past summer (6/4/01-8/13/01) in Hall B of Jefferson Lab. These experiments make use of a beam of linearly-polarized photons produced through coherent bremsstrahlung and represent the first time such a probe has been employed at Jefferson Lab. Several new and upgraded Hall-B beamline devices were commissioned prior to the production running of g8. The scientific purpose of g8 is to improve the understanding of the underlying symmetry of the quark degrees of freedom in the nucleon, the nature of the parity exchange between the incident photon and the target nucleon, and the mechanism of associated strangeness production in electromagnetic reactions. With the high-quality beam of tagged and collimated linearly-polarized photons and the nearly complete angular coverage of the Hall-B spectrometer, we will extract the differential cross sections and polarization observables for the photoproduction of vector mesons and kaons at photon energies ranging between 1.9 and 2.1 GeV. We collected over 1.2 trillion triggers. After data cuts, we expect to have 500 times the world's data set on rhos and omegas produced via a beam of linearly-polarized photons. A report on the results of the commissioning of the beamline devices and the progress of the analysis of the g8 run will be presented.
Reinforcement of drinking by running: effect of fixed ratio and reinforcement time1
Premack, David; Schaeffer, Robert W.; Hundt, Alan
1964-01-01
Rats were required to complete varying numbers of licks (FR), ranging from 10 to 300, in order to free an activity wheel for predetermined times (CT) ranging from 2 to 20 sec. The reinforcement of drinking by running was shown both by an increased frequency of licking, and by changes in length of the burst of licking relative to operant-level burst length. In log-log coordinates, instrumental licking tended to be a linear increasing function of FR for the range tested, a linear decreasing function of CT for the range tested. Pause time was implicated in both of the above relations, being a generally increasing function of both FR and CT. PMID:14120150
TerraceM: A Matlab® tool to analyze marine terraces from high-resolution topography
NASA Astrophysics Data System (ADS)
Jara-Muñoz, Julius; Melnick, Daniel; Strecker, Manfred
2015-04-01
Light detection and ranging (LiDAR) high-resolution topographic data sets now enable remote identification of submeter-scale geomorphic features, providing valuable information about the landscape and about geomorphic markers of tectonic deformation such as fault-scarp offsets and fluvial and marine terraces. Recent studies of marine terraces using LiDAR data have demonstrated that these landforms can be readily isolated from other landforms in the landscape using slope and roughness parameters, allowing regional extents of terrace sequences to be mapped unambiguously. Marine terrace elevations have been used for decades as geodetic benchmarks of Quaternary deformation. Uplift rates may be estimated by locating the shoreline angle, a geomorphic feature correlated with the high-stand position of past sea levels. Precise identification of the shoreline-angle position is therefore an important requirement for obtaining reliable tectonic rates and coherent spatial correlation. To improve our ability to rapidly assess and map different shoreline angles at a regional scale we have developed the TerraceM application. TerraceM is a Matlab® tool that estimates the shoreline angle and its associated error from high-resolution topography. For convenience, TerraceM includes a graphical user interface (GUI) linked with the Google Maps® API. The analysis starts by defining swath profiles from a shapefile created on a GIS platform, oriented orthogonally to the terrace riser. TerraceM functions are included to extract and analyze the swath profiles. Two types of coastal landscape may be analyzed using different methodologies: staircase sequences of multiple terraces, and rough, rocky coasts. The former are measured by outlining the paleo-cliffs and paleo-platforms, whereas the latter are assessed by picking the elevation of sea-stack tops. For staircase terraces, the shoreline angle is defined by the intersection between first-order interpolations of the maximum topography of the swath profiles. For rocky coasts, the maximum stack peaks within a defined search radius, together with a defined inflection point on the adjacent main cliff, are interpolated to calculate the shoreline angle at the intersection with the cliff. Error estimates are based on the standard deviation of the linear regressions. The geomorphic age of terraces (Kt) can also be calculated by the linear diffusion equation (Hanks et al., 1989), with the best-fitting model found by minimizing the RMS. TerraceM can efficiently process several profiles in a batch-mode run. Results may be exported in various formats, including Google Earth and ArcGIS, and basic statistics are computed automatically. Test runs have been made at Santa Cruz, California, using various topographic data sets and comparing results with published field measurements (Anderson and Menking, 1994). Repeatability was evaluated using multiple test runs made by students in a classroom setting.
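The staircase-terrace case reduces to intersecting two first-order regressions, which might look like the following sketch; the swath-profile maxima below are hypothetical, and the error propagation is a simplified stand-in for TerraceM's regression-based estimate.

```python
import numpy as np

def shoreline_angle(x_platform, z_platform, x_cliff, z_cliff):
    """Staircase-terrace sketch: fit first-order (linear) regressions to the
    paleo-platform and paleo-cliff maxima of a swath profile and return
    their intersection (the shoreline angle) with a simple error estimate."""
    m1, b1 = np.polyfit(x_platform, z_platform, 1)
    m2, b2 = np.polyfit(x_cliff, z_cliff, 1)
    x = (b2 - b1) / (m1 - m2)              # intersection of the two lines
    z = m1 * x + b1
    # combine regression scatter from both fits as an elevation uncertainty
    err = np.hypot(np.std(z_platform - (m1 * x_platform + b1), ddof=1),
                   np.std(z_cliff - (m2 * x_cliff + b2), ddof=1))
    return x, z, err

# Hypothetical swath-profile maxima: distance (m) and elevation (m).
xp, zp = np.array([0.0, 20, 40, 60]), np.array([12.1, 12.4, 12.8, 13.0])
xc, zc = np.array([70.0, 75, 80, 85]), np.array([14.5, 18.2, 21.8, 25.6])
print(shoreline_angle(xp, zp, xc, zc))
```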
NASA Astrophysics Data System (ADS)
Liu, Chang; Wu, Xing; Mao, Jianlin; Liu, Xiaoqin
2017-07-01
In the signal processing domain there has been growing interest in using acoustic emission (AE) signals instead of vibration signals for fault diagnosis and condition assessment; AE has been advocated as an effective technique for identifying fracture, crack or damage. The AE signal has high frequencies, up to several MHz, which helps avoid interference from other signal sources such as the parts of the bearing (rolling elements, rings and so on) and other rotating parts of the machine. However, acoustic emission signals demand advanced sampling capabilities and the ability to handle large amounts of sampled data. In this paper, compressive sensing (CS) is introduced as a processing framework and a compressive feature extraction method is proposed. We use it to extract compressive features directly from compressively-sensed data, and we also prove its energy preservation properties. First, we study AE signals under the CS framework: the sparsity of the rolling-bearing AE signal is checked, and the observation and reconstruction of the signal are studied. Second, we present a method for extracting an AE compressive feature (AECF) directly from compressively-sensed data, demonstrate its energy preservation properties, and describe the processing of the extracted AECF feature. We assess the running state of the bearing using the AECF trend, which is consistent with the trend of traditional features; the method is thus an effective way to evaluate the running trend of rolling bearings. The experiments verify that signal processing and condition assessment based on AECF are simpler, require far less data, and greatly reduce the amount of computation.
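The energy-preservation property that underpins such a feature can be illustrated with a random Gaussian sensing matrix: an RMS-type feature computed from the compressed measurements closely tracks the one computed from the raw signal. The toy AE burst, the dimensions and the matrix scaling below are assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4096, 512                         # signal length, number of measurements
ae = rng.standard_normal(n) * np.exp(-np.arange(n) / 800.0)   # toy AE burst

# Gaussian sensing matrix scaled so that E||phi @ x||^2 = ||x||^2.
phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = phi @ ae                             # compressively-sensed data

rms_full = np.sqrt(np.mean(ae ** 2))                 # from the raw signal
rms_compressed = np.sqrt(np.sum(y ** 2) / n)         # estimated from y alone
print(f"RMS from raw signal {rms_full:.4f} vs "
      f"from measurements {rms_compressed:.4f}")
```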
Seismotectonics and fault structure of the California Central Coast
Hardebeck, Jeanne L.
2010-01-01
I present and interpret new earthquake relocations and focal mechanisms for the California Central Coast. The relocations improve upon catalog locations by using 3D seismic velocity models to account for lateral variations in structure and by using relative arrival times from waveform cross-correlation and double-difference methods to image seismicity features more sharply. Focal mechanisms are computed using ray tracing in the 3D velocity models. Seismicity alignments on the Hosgri fault confirm that it is vertical down to at least 12 km depth, and the focal mechanisms are consistent with right-lateral strike-slip motion on a vertical fault. A prominent, newly observed feature is an ~25 km long linear trend of seismicity running just offshore and parallel to the coastline in the region of Point Buchon, informally named the Shoreline fault. This seismicity trend is accompanied by a linear magnetic anomaly, and both the seismicity and the magnetic anomaly end where they obliquely meet the Hosgri fault. Focal mechanisms indicate that the Shoreline fault is a vertical strike-slip fault. Several seismicity lineations with vertical strike-slip mechanisms are observed in Estero Bay. Events greater than about 10 km depth in Estero Bay, however, exhibit reverse-faulting mechanisms, perhaps reflecting slip at the top of the remnant subducted slab. Strike-slip mechanisms are observed offshore along the Hosgri–San Simeon fault system and onshore along the West Huasna and Rinconada faults, while reverse mechanisms are generally confined to the region between these two systems. This suggests a model in which the reverse faulting is primarily due to restraining left-transfer of right-lateral slip.
A lower-extremities kinematic comparison of deep-water running styles and treadmill running.
Killgore, Garry L; Wilcox, Anthony R; Caster, Brian L; Wood, Terry M
2006-11-01
The purpose of this investigation was to identify a deep-water running (DWR) style that most closely approximates terrestrial running, particularly relative to the lower extremities. Twenty intercollegiate distance runners (women, N = 12; men, N = 8) were videotaped from the right sagittal view while running on a treadmill (TR) and in deep water at 55-60% of their TR VO₂max using 2 DWR styles: cross-country (CC) and high-knee (HK). Variables of interest were horizontal (X) and vertical (Y) displacement of the knee and ankle, stride rate (SR), VO₂, heart rate (HR), and rating of perceived exertion (RPE). Multivariate omnibus tests revealed statistically significant differences for RPE (p < 0.001). The post hoc pairwise comparisons revealed significant differences between TR and both DWR styles (p < 0.001). The kinematic variables multivariate omnibus tests were found to be statistically significant (p < 0.001 to p < 0.019). The post hoc pairwise comparisons revealed significant differences in SR (p < 0.001) between TR (1.25 +/- 0.08 Hz) and both DWR styles and also between the CC (0.81 +/- 0.08 Hz) and HK (1.14 +/- 0.10 Hz) styles of DWR. The CC style of DWR was found to be similar to TR with respect to linear ankle displacement, whereas the HK style was significantly different from TR in all comparisons made for ankle and knee displacement. The CC style of DWR is recommended as an adjunct to distance running training if the goal is to mimic the specificity of the ankle linear horizontal displacement of land-based running, but the SR will be slower at a comparable percentage of VO₂max.
Pugh, L. G. C. E.
1971-01-01
1. O₂ intakes were determined on subjects running and walking at various constant speeds, (a) against wind of up to 18·5 m/sec (37 knots) in velocity, and (b) on gradients ranging from 2 to 8%. 2. In running and walking against wind, O₂ intakes increased as the square of wind velocity. 3. In running on gradients the relation of O₂ intake and lifting work was linear and independent of speed. In walking on gradients the relation was linear at work rates above 300 kg m/min, but curvilinear at lower work rates. 4. In a 65 kg athlete running at 4·45 m/sec (marathon speed) V̇O₂ increased from 3·0 l./min with minimal wind to 5·0 l./min at a wind velocity of 18·5 m/sec. The corresponding values for a 75 kg subject walking at 1·25 m/sec were 0·8 l./min with minimal wind and 3·1 l./min at a wind velocity of 18·5 m/sec. 5. Direct measurements of wind pressure on shapes of similar area to one of the subjects yielded higher values than those predicted from the relation of wind velocity and lifting work at equal O₂ intakes. Horizontal work against wind was more efficient than vertical work against gravity. 6. The energy cost of overcoming air resistance in track running may be 7·5% of the total energy cost at middle distance speed and 13% at sprint speed. Running 1 m behind another runner virtually eliminated air resistance and reduced V̇O₂ by 6·5% at middle distance speed. PMID:5574828
NASA Astrophysics Data System (ADS)
Ryan, D. P.; Roth, G. S.
1982-04-01
Complete documentation of the 15 programs and 11 data files of the EPA Atomic Absorption Instrument Automation System is presented. The system incorporates the following major features: (1) multipoint calibration using first, second, or third degree regression or linear interpolation, (2) timely quality control assessments for spiked samples, duplicates, laboratory control standards, reagent blanks, and instrument check standards, (3) reagent blank subtraction, and (4) plotting of calibration curves and raw data peaks. The programs of this system are written in Data General Extended BASIC, Revision 4.3, as enhanced for multi-user, real-time data acquisition. They run in a Data General Nova 840 minicomputer under the operating system RDOS, Revision 6.2. There is a functional description, a symbol definitions table, a functional flowchart, a program listing, and a symbol cross reference table for each program. The structure of every data file is also detailed.
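A minimal sketch of the calibration feature (regression of degree 1-3, or linear interpolation, with reagent-blank subtraction), written in Python rather than the system's Data General Extended BASIC; the standards and absorbances below are illustrative assumptions.

```python
import numpy as np

def calibrate(conc, absorbance, degree=1):
    """Multipoint calibration sketch: fit a polynomial of degree 1-3, or use
    degree=None for linear interpolation; returns a concentration reader."""
    if degree is None:
        return lambda a: np.interp(a, absorbance, conc)   # linear interpolation
    coeffs = np.polyfit(conc, absorbance, degree)         # regression fit
    # invert the curve numerically on a dense grid of the calibrated range
    grid = np.linspace(conc.min(), conc.max(), 2001)
    fit = np.polyval(coeffs, grid)
    return lambda a: np.interp(a, fit, grid)

standards = np.array([0.0, 1.0, 2.0, 5.0, 10.0])          # ug/L
signal = np.array([0.002, 0.051, 0.103, 0.249, 0.487])    # absorbance
blank = 0.002                                             # reagent blank

read = calibrate(standards, signal - blank, degree=2)
print(read(0.200 - blank))          # sample reading after blank subtraction
```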
Color composite C-band and L-band image of Kilauea volcano on Hawaii
NASA Technical Reports Server (NTRS)
1994-01-01
This color composite C-band and L-band image of the Kilauea volcano on the Big Island of Hawaii was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the Space Shuttle Endeavour. The city of Hilo can be seen at the top. The image shows the different types of lava flows around the crater Pu'u O'o. Ash deposits which erupted in 1790 from the summit of Kilauea volcano show up as dark in this image, and fine details associated with lava flows which erupted in 1919 and 1974 can be seen to the south of the summit in an area called the Ka'u Desert. Other historic lava flows can also be seen. Highway 11 is the linear feature running from Hilo to the Kilauea volcano. The Jet Propulsion Laboratory alternative photo number is P-43918.
Short-Term Planning of Hybrid Power System
NASA Astrophysics Data System (ADS)
Knežević, Goran; Baus, Zoran; Nikolovski, Srete
2016-07-01
In this paper a short-term planning algorithm is presented for a hybrid power system consisting of different types of cascaded hydropower plants (run-of-the-river, pumped-storage, conventional), thermal power plants (coal-fired power plants, combined cycle gas-fired power plants) and wind farms. The optimization process provides a joint bid of the hybrid system, and thus determines the operation schedule of the hydro and thermal power plants and the operating condition of the pumped-storage hydropower plants, with the aim of maximizing profit on the day-ahead market, according to expected hourly electricity prices, the expected local water inflow at certain hydropower plants, and the expected production of electrical energy from the wind farm, taking into account previously contracted bilateral agreements for electricity generation. The optimization is formulated as an hourly-discretized mixed integer linear optimization problem. The optimization model is applied to a case study in order to show the general features of the developed model.
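A toy hourly-discretized MILP in the same spirit (one thermal unit with commitment binaries, one hydro energy budget, known day-ahead prices) can be written with PuLP. All plant parameters and prices below are assumptions, and the paper's cascade hydrology, wind forecast and bilateral-contract terms are omitted.

```python
import pulp

# Illustrative day-ahead prices (EUR/MWh) for 8 periods.
prices = [22, 18, 25, 40, 55, 48, 35, 30]
T = range(len(prices))
fuel_cost = 32          # EUR/MWh for the thermal unit (assumed)

prob = pulp.LpProblem("day_ahead_profit", pulp.LpMaximize)
on = pulp.LpVariable.dicts("on", T, cat="Binary")               # commitment
thermal = pulp.LpVariable.dicts("thermal_mw", T, lowBound=0)
hydro = pulp.LpVariable.dicts("hydro_mw", T, lowBound=0, upBound=80)

for t in T:
    prob += thermal[t] <= 100 * on[t]     # max output when committed
    prob += thermal[t] >= 30 * on[t]      # minimum stable generation
prob += pulp.lpSum(hydro[t] for t in T) <= 300   # reservoir budget (MWh)

# Objective: market revenue minus thermal fuel cost.
prob += pulp.lpSum(prices[t] * (thermal[t] + hydro[t])
                   - fuel_cost * thermal[t] for t in T)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(t, thermal[t].value(), hydro[t].value()) for t in T])
```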
Dynamical feature extraction at the sensory periphery guides chemotaxis
Schulze, Aljoscha; Gomez-Marin, Alex; Rajendran, Vani G; Lott, Gus; Musy, Marco; Ahammad, Parvez; Deogade, Ajinkya; Sharpe, James; Riedl, Julia; Jarriault, David; Trautman, Eric T; Werner, Christopher; Venkadesan, Madhusudhan; Druckmann, Shaul; Jayaraman, Vivek; Louis, Matthieu
2015-01-01
Behavioral strategies employed for chemotaxis have been described across phyla, but the sensorimotor basis of this phenomenon has seldom been studied in naturalistic contexts. Here, we examine how signals experienced during free olfactory behaviors are processed by first-order olfactory sensory neurons (OSNs) of the Drosophila larva. We find that OSNs can act as differentiators that transiently normalize stimulus intensity—a property potentially derived from a combination of integral feedback and feed-forward regulation of olfactory transduction. In olfactory virtual reality experiments, we report that high activity levels of the OSN suppress turning, whereas low activity levels facilitate turning. Using a generalized linear model, we explain how peripheral encoding of olfactory stimuli modulates the probability of switching from a run to a turn. Our work clarifies the link between computations carried out at the sensory periphery and action selection underlying navigation in odor gradients. DOI: http://dx.doi.org/10.7554/eLife.06694.001 PMID:26077825
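The modeling step can be sketched as a binomial GLM of turn probability against OSN activity; the synthetic firing rates and coefficients below are assumptions, chosen so that high activity suppresses turning as the abstract reports.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the experiment: OSN firing rate during a run and
# whether the run ended in a turn.
rng = np.random.default_rng(4)
osn_rate = rng.uniform(0, 60, 500)                   # spikes/s (assumed scale)
p_turn = 1 / (1 + np.exp(-(1.5 - 0.08 * osn_rate)))  # true model: activity suppresses turns
turned = (rng.random(500) < p_turn).astype(float)

# Binomial GLM: probability of switching from run to turn vs. OSN activity.
X = sm.add_constant(osn_rate)
fit = sm.GLM(turned, X, family=sm.families.Binomial()).fit()
print(fit.params)    # negative slope: high OSN activity suppresses turning
```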
Iris recognition: on the segmentation of degraded images acquired in the visible wavelength.
Proença, Hugo
2010-08-01
Iris recognition imaging constraints are receiving increasing attention. There are several proposals to develop systems that operate in the visible wavelength and in less constrained environments. These imaging conditions introduce noise artifacts that severely degrade the acquired images, making iris segmentation a major issue. Having observed that existing iris segmentation methods tend to fail in these challenging conditions, we present a segmentation method that can handle degraded images acquired in less constrained conditions. We offer the following contributions: 1) to consider the sclera the most easily distinguishable part of the eye in degraded images, 2) to propose a new type of feature that measures the proportion of sclera in each direction and is fundamental in segmenting the iris, and 3) to run the entire procedure in deterministically linear time with respect to the size of the image, making the procedure suitable for real-time applications.
Reilly, Stephen M; McElroy, Eric J; Andrew Odum, R; Hornyak, Valerie A
2006-01-01
The lumbering locomotor behaviours of tuataras and salamanders are the best examples of quadrupedal locomotion of early terrestrial vertebrates. We show they use the same walking (out-of-phase) and running (in-phase) patterns of external mechanical energy fluctuations of the centre-of-mass known in fast moving (cursorial) animals. Thus, walking and running centre-of-mass mechanics have been a feature of tetrapods since quadrupedal locomotion emerged over 400 million years ago. When walking, these sprawling animals save external mechanical energy with the same pendular effectiveness observed in cursorial animals. However, unlike cursorial animals (that change footfall patterns and mechanics with speed), tuataras and salamanders use only diagonal couplet gaits and indifferently change from walking to running mechanics with no significant change in total mechanical energy. Thus, the change from walking to running is not related to speed and the advantage of walking versus running is unclear. Furthermore, lumbering mechanics in primitive tetrapods is reflected in having total mechanical energy driven by potential energy (rather than kinetic energy as in cursorial animals) and relative centre-of-mass displacements an order of magnitude greater than cursorial animals. Thus, large vertical displacements associated with lumbering locomotion in primitive tetrapods may preclude their ability to increase speed. PMID:16777753
Follow the line: Mysterious bright streaks on Dione and Rhea
NASA Astrophysics Data System (ADS)
Martin, E. S.; Patthoff, D. A.
2017-12-01
Our recent mapping of the wispy terrains of Saturn's moons Dione and Rhea has revealed unique linear features that are generally long (10s-100s km), narrow (1-10 km), brighter than the surrounding terrains, and their detection may be sensitive to lighting geometries. We refer to these features as 'linear virgae.' Wherever linear virgae are observed, they appear to crosscut all other structures, suggesting that they are the youngest features on these satellites. Despite their young age and wide distribution, linear virgae on Rhea and Dione have largely been overlooked in the literature. Linear virgae on Dione have previously been identified in Voyager and Cassini data, but their formation remains an open question. If linear virgae are found to be endogenic, it would suggest that the surfaces of Dione and Rhea have been active recently. Alternatively, if linear virgae are exogenic it would suggest that the surfaces have been modified by a possibly common mechanism. Further work would be necessary to determine both a source of material and the dynamical environment that could produce these features. Here we present detailed morphometric measurements to further constrain whether linear virgae on Rhea and Dione share common origins. We complete an in-depth assessment of the lighting geometries where these features are visible. If linear virgae in the Saturnian system show common morphologies and distributions, a new, recently active, possibly system-wide mechanism may be revealed, thereby improving our understanding of the recent dynamical environment around Saturn.
Evaluation and treatment of biking and running injuries.
Oser, Sean M; Oser, Tamara K; Silvis, Matthew L
2013-12-01
Exercise is universally recognized as a key feature for maintaining good health. Likewise, lack of physical activity is a major risk factor for chronic disease and disability, an especially important fact considering our rapidly aging population. Biking and running are frequently recommended as forms of exercise. As more individuals participate in running-related and cycling-related activities, physicians must be increasingly aware of the common injuries encountered in these pursuits. This review focuses on the evaluation and management of common running-related and cycling-related injuries. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hashemi, H.; Tax, D. M. J.; Duin, R. P. W.; Javaherian, A.; de Groot, P.
2008-11-01
Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a statistical feature ranking technique and combining different classifiers. The method, which has general applicability, is demonstrated here on a gas chimney detection problem. First, we evaluate a set of input seismic attributes extracted at locations labeled by a human expert using regularized discriminant analysis (RDA). In order to find the RDA score for each seismic attribute, forward and backward search strategies are used. Subsequently, two non-linear classifiers: multilayer perceptron (MLP) and support vector classifier (SVC) are run on the ranked seismic attributes. Finally, to capitalize on the intrinsic differences between both classifiers, the MLP and SVC results are combined using logical rules of maximum, minimum and mean. The proposed method optimizes the ranked feature space size and yields the lowest classification error in the final combined result. We will show that the logical minimum reveals gas chimneys that exhibit both the softness of MLP and the resolution of SVC classifiers.
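The combination step might be sketched as follows: train an MLP and an SVC, then fuse their posterior probabilities with the logical max, min and mean rules. The toy dataset below stands in for the ranked seismic attributes, and the RDA-based feature ranking is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Toy two-class data standing in for labeled gas-chimney attribute vectors.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Posterior probabilities from the two non-linear classifiers.
p_mlp = MLPClassifier(max_iter=1000, random_state=0).fit(Xtr, ytr) \
    .predict_proba(Xte)[:, 1]
p_svc = SVC(probability=True, random_state=0).fit(Xtr, ytr) \
    .predict_proba(Xte)[:, 1]

# Logical combination rules from the abstract: max, min and mean.
for name, p in [("max", np.maximum(p_mlp, p_svc)),
                ("min", np.minimum(p_mlp, p_svc)),
                ("mean", (p_mlp + p_svc) / 2)]:
    print(name, np.mean((p > 0.5) == yte))
```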
Can linear superiorization be useful for linear optimization problems?
NASA Astrophysics Data System (ADS)
Censor, Yair
2017-04-01
Linear superiorization (LinSup) considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) does LinSup provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? (ii) How does LinSup fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: ‘yes’ and ‘very well’, respectively.
NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (AMDAHL VERSION)
NASA Technical Reports Server (NTRS)
Rogers, J. E.
1994-01-01
The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold).

NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition.

NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III).

NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001).

The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).
NASA Technical Reports Server (NTRS)
2002-01-01
(Released 20 May 2002)

The Science

This THEMIS visible image shows a portion of the summit region of Arsia Mons, one of the four giant volcanoes in the Tharsis region of Mars. This volcano stands over 20 km above the surrounding plains, and is approximately 450 km in diameter at its base. A large volcanic crater known as a 'caldera' is located at the summit of all of the Tharsis volcanoes. These calderas are produced by massive volcanic explosions and collapse. The Arsia Mons summit caldera alone is over 120 km in diameter, making it larger than many volcanoes on Earth. The THEMIS image shows a portion of the eastern wall of the caldera, revealing the steep walls and linear features associated with the collapse that formed the caldera. The ridge with linear faults that extends from the lower left toward the center right was formed at some stage during a collapse event. Several circular pits are present, and several of these pits appear to have coalesced into a long, unusual trough. These pits and troughs likely formed when lava was removed from beneath them and the overlying surface collapsed. Numerous lava flows can be seen on the floor of the caldera. Many of these flows occurred after the collapse that formed the caldera crater, and have buried many of the pre-existing features. The faulted, pitted ridge appears to have been partially flooded by these lava flows, indicating that the caldera of Arsia Mons has undergone a complex history of numerous events. The wispy bright features throughout the image are water-ice clouds that commonly form over the volcano summits during the early northern spring when this image was acquired.

The Story

When the Martian volcano Arsia Mons exploded long ago, it sent lava spewing out everywhere. With the removal of this molten material, the volcano then collapsed at its opening (the top of its cone) to form a sunken volcanic crater known as a caldera. You can see it more fully in the context image to the right. The eastern wall of the caldera is the pale white strip running diagonally across the bottom third of the image. By looking at this steep wall and the streaks running down its sides, you can imagine how all of the remaining material rushed down into the void left by expelled magma and ash to form the caldera depression. Numerous lava flows that occurred after the collapse texturize the floor of the caldera, and have buried many of its pre-existing features. These later lava flows might be a little harder to see, because wispy bright features blur this image slightly, giving it an almost marbled, hazy appearance. They are water-ice clouds that typically form over the volcano summits during the early northern spring. What they don't obscure very much is the raised ridge created during the collapse of the volcano's cone (running slightly north of the caldera wall along the same diagonal). Draped across the smoother caldera floor, this pitted ridge has been partially flooded by lava flows, indicating quite a complex history of geologic events has taken place here. Faults cut through the ridge, contributing to its streamer-like appearance. And, in a process somewhat like the formation of the caldera itself, all of the round and oblong pits and troughs in the ridge formed when lava was removed from underneath these areas, and the overlying surface then collapsed. Arsia Mons is one of the four giant Martian volcanoes found in a region called Tharsis.
Arsia Mons is about 270 miles wide in diameter at its base, and rises 12 miles high above the surrounding plains. The caldera at its summit is more than 72 miles wide, making it larger than volcanoes on Earth. By comparison, the largest volcano on Earth is Mauna Loa on the island of Hawaii, which is about 6.3 miles high and 75 miles wide in diameter at its base.
A Block-LU Update for Large-Scale Linear Programming
1990-01-01
… linear programming problems. Results are given from runs on the Cray Y-MP. 1. Introduction. We wish to use the simplex method [Dan63] to solve the standard linear program: minimize cᵀx subject to Ax = b, l ≤ x ≤ u, where A is an m by n matrix and c, x, l, u, and b are of appropriate dimension. The simplex method … the identity matrix. The basis is used to solve for the search direction y and the dual variables π in the following linear systems: B_k y = a_q (1.2) and …
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As in LDA, discriminative features are assumed to be generated from independent Gaussian class conditionals. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
Graph run-length matrices for histopathological image segmentation.
Tosun, Akif Burak; Gunduz-Demir, Cigdem
2011-03-01
The histopathological examination of tissue specimens is essential for cancer diagnosis and grading. However, this examination is subject to a considerable amount of observer variability as it mainly relies on visual interpretation of pathologists. To alleviate this problem, it is very important to develop computational quantitative tools, for which image segmentation constitutes the core step. In this paper, we introduce an effective and robust algorithm for the segmentation of histopathological tissue images. This algorithm incorporates the background knowledge of the tissue organization into segmentation. For this purpose, it quantifies spatial relations of cytological tissue components by constructing a graph and uses this graph to define new texture features for image segmentation. This new texture definition makes use of the idea of gray-level run-length matrices. However, it considers the runs of cytological components on a graph to form a matrix, instead of considering the runs of pixel intensities. Working with colon tissue images, our experiments demonstrate that the texture features extracted from "graph run-length matrices" lead to high segmentation accuracies, also providing a reasonable number of segmented regions. Compared with four other segmentation algorithms, the results show that the proposed algorithm is more effective in histopathological image segmentation.
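The texture definition generalizes the classical gray-level run-length matrix from runs of pixel intensities to runs of cytological components on a graph. A minimal sketch of the classical pixel-based matrix it builds on (horizontal runs only; the function name and toy image are illustrative):

import numpy as np

def grlm_horizontal(img, n_levels):
    """Gray-level run-length matrix for horizontal runs.

    Rows index gray levels, columns index run lengths (1..n_cols). The
    paper's variant instead counts runs of cytological components along
    graph edges; this is the classical pixel version it generalizes.
    """
    n_rows, n_cols = img.shape
    M = np.zeros((n_levels, n_cols), dtype=int)
    for r in range(n_rows):
        c = 0
        while c < n_cols:
            level, run = img[r, c], 1
            while c + run < n_cols and img[r, c + run] == level:
                run += 1
            M[level, run - 1] += 1
            c += run
    return M

img = np.array([[0, 0, 1, 1], [2, 2, 2, 3]])  # toy 4-level image
print(grlm_horizontal(img, 4))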
Feature Visibility Limits in the Non-Linear Enhancement of Turbid Images
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.
2003-01-01
The advancement of non-linear processing methods for generic automatic clarification of turbid imagery has led us from extensions of entirely passive multiscale Retinex processing to a new framework of active measurement and control of the enhancement process called the Visual Servo. In the process of testing this new non-linear computational scheme, we have identified that feature visibility limits in the post-enhancement image now simplify to a single signal-to-noise figure of merit: a feature is visible if the feature-background signal difference is greater than the RMS noise level. In other words, a signal-to-noise limit of approximately unity constitutes a lower limit on feature visibility.
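The stated figure of merit reduces to a one-line check. A minimal sketch, assuming the feature and background signals are summarized by their pixel means (names are illustrative):

import numpy as np

def feature_visible(feature_pixels, background_pixels, noise_rms):
    """Visibility criterion from the abstract: a feature is visible if the
    feature-background signal difference exceeds the RMS noise level,
    i.e. SNR = |mean(feature) - mean(background)| / noise_rms > ~1."""
    delta = abs(np.mean(feature_pixels) - np.mean(background_pixels))
    return delta / noise_rms > 1.0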
Investigation of phase distribution using Phame® in-die phase measurements
NASA Astrophysics Data System (ADS)
Buttgereit, Ute; Perlitz, Sascha
2009-03-01
As lithography mask processes move toward the 45nm and 32nm nodes, mask complexity increases steadily, mask specifications tighten and process control becomes extremely important. Driven by this fact, the requirements for metrology tools increase as well. Efforts in metrology have been focused on accurately measuring CD linearity and uniformity across the mask, and accurately measuring phase variation on Alternating/Attenuated PSM and transmission for Attenuated PSM. CD control on photo masks is usually done through the following processes: exposure dose/focus change, resist develop and dry etch. The key requirement is to maintain correct CD linearity and uniformity across the mask. For PSM specifically, the effect of CD uniformity for both Alternating PSM and Attenuated PSM, and of etch depth for Alternating PSM, also becomes important. So far, phase measurement has been limited to either measuring large-feature phase using interferometer-based metrology tools or measuring etch depth using AFM and converting etch depth into phase under the assumption that trench profile and optical properties of the layers remain constant. However, recent investigations show that the trench profile and optical properties of the layers impact the phase. This effect grows larger at smaller CDs. The currently used phase measurement methods run into limitations because they are not able to capture 3D mask effects, diffraction limitations or polarization effects. The new phase metrology system Phame®, developed by Carl Zeiss SMS, overcomes those limitations and enables laterally resolved phase measurement in any kind of production feature on the mask. The resolution of the system goes down to 120nm half pitch at mask level. We will report on tool performance data with respect to static and dynamic phase repeatability, focusing on Alternating PSM. Furthermore, the phase metrology system was used to investigate mask process signatures on Alternating PSM in order to further improve the overall PSM process performance. Especially global loading effects caused by the pattern density and micro loading effects caused by the feature size itself have been evaluated using the capability of measuring phase in the small production features. The results of this study will be reported in this paper.
Horizontal Running Mattress Suture Modified with Intermittent Simple Loops
Chacon, Anna H; Shiman, Michael I; Strozier, Narissa; Zaiac, Martin N
2013-01-01
Using the combination of a horizontal running mattress suture with intermittent loops achieves both good eversion with the horizontal running mattress plus the ease of removal of the simple loops. This combination technique also avoids the characteristic railroad track marks that result from prolonged non-absorbable suture retention. The unique feature of our technique is the incorporation of one simple running suture after every two runs of the horizontal running mattress suture. To demonstrate its utility, we used the suturing technique on several patients and analyzed the cosmetic outcome with post-operative photographs in comparison to other suturing techniques. In summary, the combination of running horizontal mattress suture with simple intermittent loops demonstrates functional and cosmetic benefits that can be readily taught, comprehended, and employed, leading to desirable aesthetic results and wound edge eversion. PMID:23723610
An adaptive deep Q-learning strategy for handwritten digit recognition.
Qiao, Junfei; Wang, Gongming; Li, Wenjing; Chen, Min
2018-02-22
Handwritten digit recognition has been a challenging problem in recent years. Although many deep learning-based classification algorithms have been studied for handwritten digit recognition, the recognition accuracy and running time still need to be further improved. In this paper, an adaptive deep Q-learning strategy is proposed to improve accuracy and shorten running time for handwritten digit recognition. The adaptive deep Q-learning strategy combines the feature-extracting capability of deep learning and the decision-making of reinforcement learning to form an adaptive Q-learning deep belief network (Q-ADBN). First, Q-ADBN extracts the features of original images using an adaptive deep auto-encoder (ADAE), and the extracted features are considered as the current states of the Q-learning algorithm. Second, Q-ADBN receives the Q-function (reward signal) during recognition of the current states, and the final handwritten digit recognition is implemented by maximizing the Q-function using the Q-learning algorithm. Finally, experimental results from the well-known MNIST dataset show that the proposed Q-ADBN is superior to other similar methods in terms of accuracy and running time.
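The decision-making component rests on the standard Q-learning update rule; a generic sketch, not the paper's Q-ADBN (the toy state/action sizes and hyperparameters are assumptions):

import numpy as np

# Generic tabular Q-learning update, the rule Q-ADBN builds on. In the
# paper the states are DNN-extracted feature codes; this toy version
# uses small integer state/action spaces for illustration.
n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9  # learning rate, discount factor (assumed values)

def q_update(s, a, reward, s_next):
    td_target = reward + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])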
Reulen, Holger; Kneib, Thomas
2016-04-01
One important goal in multi-state modelling is to explore information about conditional transition-type-specific hazard rate functions by estimating influencing effects of explanatory variables. This may be performed using single transition-type-specific models if these covariate effects are assumed to be different across transition types. To investigate whether this assumption holds, or whether one of the effects is equal across several transition types (a cross-transition-type effect), a combined model has to be applied, for instance with the use of a stratified partial likelihood formulation. Here, prior knowledge about the underlying covariate effect mechanisms is often sparse, especially about ineffectiveness of transition-type-specific or cross-transition-type effects. As a consequence, data-driven variable selection is an important task: a large number of estimable effects have to be taken into account if joint modelling of all transition types is performed. A related but subsequent task is model choice: is an effect satisfactorily estimated assuming linearity, or does the true underlying nature deviate strongly from linearity? This article introduces component-wise Functional Gradient Descent Boosting (boosting for short) for multi-state models, an approach performing unsupervised variable selection and model choice simultaneously within a single estimation run. We demonstrate that the features and advantages of boosting introduced and illustrated in classical regression scenarios remain present in the transfer to multi-state models. As a consequence, boosting provides an effective means to answer questions about ineffectiveness and non-linearity of single transition-type-specific or cross-transition-type effects.
Generation, propagation and run-up of tsunamis due to the Chicxulub impact event
NASA Astrophysics Data System (ADS)
Weisz, R.; Wuennenmann, K.; Bahlburg, H.
2003-04-01
The Chicxulub impact event can be investigated on (1) local, (2) regional and (3) global scales. Our investigations focus on the regional scale, especially on the influence of tsunami waves, caused by the impact, on the coast around the Gulf of Mexico. During an impact, two types of tsunamis are generated. The first wave is known as the "rim wave" and is generated in front of the ejecta curtain. The second one is linked to the late modification stage of the impact and results from the collapsing cavity of water. We designate this wave the "collapse wave". The "rim wave" and "collapse wave" are able to propagate over long distances without a significant loss of wave amplitude. Corresponding to their amplitudes, the waves have a potentially large influence on coastal areas. Run-up distance and run-up height can be used as parameters for describing this influence. We are utilizing a multimaterial hydrocode (SALE) to simulate the generation of tsunami waves. The propagation of the waves is based on the non-linear shallow water theory, because tsunami waves are defined to be long waves. The position of the coastline varies according to the tsunami run-up and is implemented with open boundary conditions. We show with our investigations (1) the generation of tsunami waves due to shallow water impacts, (2) wave damping during propagation, and (3) the influence of the "rim wave" and the "collapse wave" on the coastal areas. Here, we present our first results from numerical modeling of tsunami waves owing to a Chicxulub-sized impactor. The characteristics of the "rim wave" depend on the size of the bolide and the water depth. However, the amplitude and velocity of the "collapse wave" are determined only by the water depth in the impact area. The numerical modeling of the tsunami propagation and run-up is calculated along a section from the impact point towards the west and shows moderate damping of both waves and the run-up on the coastal area. As a first approximation, the bathymetric data used in the wave propagation and run-up correspond to a linearized bathymetry of the recent Gulf of Mexico. The linearized bathymetry allows us to study the influence of the bathymetry on wave propagation and run-up. Additionally, we give preliminary results of the implementation of the two-dimensional propagation and run-up model for arbitrary bathymetries. The two-dimensional wave propagation model will enable us to more realistically assess the influence of the impact-related tsunamis on the coasts around the Gulf of Mexico due to the Chicxulub impact event.
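The long-wave propagation model referred to here is the standard non-linear shallow-water system; in one dimension over bathymetry it reads (a standard textbook form, not copied from the paper):

% h: water depth, u: depth-averaged velocity, g: gravity, b(x): bathymetry
\begin{align}
  \partial_t h + \partial_x (hu) &= 0,\\
  \partial_t u + u\,\partial_x u + g\,\partial_x (h + b) &= 0.
\end{align}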
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
NASA Astrophysics Data System (ADS)
Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing
2018-05-01
We propose a method to identify the tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt method, requires little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model on-line for real-time control. This identification method makes it straightforward for Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of the Linear Quadratic Gaussian control associated with this tip-tilt disturbance model identification method is verified by experimental data, replayed in simulation.
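A minimal sketch of the fitting step, under the assumption that the disturbance is modelled as a second-order autoregressive process (the model order, coefficients, and noise level below are hypothetical); SciPy's method="lm" selects a Levenberg-Marquardt solver:

import numpy as np
from scipy.optimize import least_squares

# Hypothetical AR(2) tip-tilt disturbance: x[k] = a1*x[k-1] + a2*x[k-2] + noise.
rng = np.random.default_rng(0)
a_true = (1.6, -0.8)                      # lightly damped vibration (assumed)
x = np.zeros(2000)
for k in range(2, x.size):
    x[k] = a_true[0] * x[k-1] + a_true[1] * x[k-2] + 0.01 * rng.standard_normal()

def residuals(theta, x):
    a1, a2 = theta
    return x[2:] - (a1 * x[1:-1] + a2 * x[:-2])

# method="lm" is SciPy's Levenberg-Marquardt implementation
fit = least_squares(residuals, x0=[1.0, -0.5], args=(x,), method="lm")
print(fit.x)  # estimated (a1, a2), close to a_true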
Improved LTVMPC design for steering control of autonomous vehicle
NASA Astrophysics Data System (ADS)
Velhal, Shridhar; Thomas, Susy
2017-01-01
An improved linear time-varying model predictive control (LTVMPC) for steering control of an autonomous vehicle running on a slippery road is presented. The control strategy is designed such that the vehicle follows a predefined trajectory with the highest possible entry speed. In linear time-varying model predictive control, the nonlinear vehicle model is successively linearized at each sampling instant. This linear time-varying model is used to design an MPC that predicts over the future horizon. By incorporating the predicted input horizon in each successive linearization, the effectiveness of the controller is improved. The tracking performance using front-wheel steering and four-wheel braking is presented to illustrate the effectiveness of the proposed method.
Performance differences between sexes in 50-mile to 3,100-mile ultramarathons
Zingg, Matthias A; Knechtle, Beat; Rosemann, Thomas; Rüst, Christoph A
2015-01-01
Anecdotal reports have assumed that women would be able to outrun men in long-distance running. The aim of this study was to test this assumption by investigating the changes in performance difference between sexes in the best ultramarathoners in 50-mile, 100-mile, 200-mile, 1,000-mile, and 3,100-mile events held worldwide between 1971 and 2012. The sex differences in running speed for the fastest runners ever were analyzed using one-way analysis of variance with subsequent Tukey–Kramer posthoc analysis. Changes in sex difference in running speed of the annual fastest were analyzed using linear and nonlinear regression analyses, correlation analyses, and mixed-effects regression analyses. The fastest men ever were faster than the fastest women ever in 50-mile (17.5%), 100-mile (17.4%), 200-mile (9.7%), 1,000-mile (20.2%), and 3,100-mile (18.6%) events. For the ten fastest finishers ever, men were faster than women in 50-mile (17.1%±1.9%), 100-mile (19.2%±1.5%), and 1,000-mile (16.7%±1.6%) events. No correlation existed between sex difference and running speed for the fastest ever (r2=0.0039, P=0.91) and the ten fastest ever (r2=0.15, P=0.74) for all distances. For the annual fastest, the sex difference in running speed decreased linearly in 50-mile events from 14.6% to 8.9%, remained unchanged in 100-mile (18.0%±8.4%) and 1,000-mile (13.7%±9.1%) events, and increased in 3,100-mile events from 12.5% to 16.9%. For the annual ten fastest runners, the performance difference between sexes decreased linearly in 50-mile events from 31.6%±3.6% to 8.9%±1.8% and in 100-mile events from 26.0%±4.4% to 24.7%±0.9%. To summarize, the fastest men were ~17%–20% faster than the fastest women for all distances from 50 miles to 3,100 miles. The linear decrease in sex difference for 50-mile and 100-mile events may suggest that women are reducing the sex gap for these distances. PMID:25653567
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid
2017-02-01
Model-order reduction and minimization of the CPU run-time while maintaining the model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function of the electrode's open circuit potential dependence on the state of charge to continuous piecewise regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear, region. Applying the proposed technique cuts down the CPU run-time by around 20%, compared to the reduced-order, electrode-average model. Finally, the model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately with less than 2% error.
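The central reduction is replacing the nonlinear open-circuit-potential curve with a continuous piecewise-linear function of state of charge. A minimal sketch; the knot locations and voltages below are made up, whereas the paper places knots optimally:

import numpy as np

# Illustrative knots for a piecewise-linear open-circuit-potential curve.
soc_knots = np.array([0.0, 0.1, 0.3, 0.6, 0.9, 1.0])
ocv_knots = np.array([3.0, 3.4, 3.6, 3.8, 4.0, 4.2])  # volts (made up)

def ocv(soc):
    """Continuous piecewise-linear OCV(SOC) via linear interpolation."""
    return np.interp(soc, soc_knots, ocv_knots)

print(ocv(0.45))  # evaluates on the 0.3-0.6 segment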
Hamidi, Mehrdad; Zarei, Najmeh
2009-05-01
Bovine serum albumin (BSA) is among the most widely used proteins in protein formulations as well as in the development of novel delivery systems, as a typical model for therapeutic/diagnostic proteins and new versions of vaccines. The development of reliable and easily available assay methods for quantitation of this protein therefore plays a crucial role in these types of studies. A simple gradient reversed-phase high-performance liquid chromatography with ultraviolet detection (HPLC-UV) method has been developed for quantitation of BSA in dosage forms and protein delivery systems. The method produced linear responses throughout the wide BSA concentration range of 1 to 100 µg/mL. The average within-run and between-run variations of the method within the linear concentration range of BSA were 2.46% and 2.20%, respectively, with accuracies of 104.49% and 104.58% for within-run and between-run samples, respectively. The limits of detection (LOD) and quantitation (LOQ) of the method were 0.5 and 1 µg/mL, respectively. The method showed acceptable system suitability indices, which enabled us to use it successfully during our particulate vaccine delivery research project.
Ma, Junlong; Wang, Chengbin; Yue, Jiaxin; Li, Mianyang; Zhang, Hongrui; Ma, Xiaojing; Li, Xincui; Xue, Dandan; Qing, Xiaoyan; Wang, Shengjiang; Xiang, Daijun; Cong, Yulong
2013-01-01
Several automated urine sediment analyzers have been introduced to clinical laboratories. Automated microscopic pattern recognition is a new technique for urine particle analysis. We evaluated the analytical and diagnostic performance of the UriSed automated microscopic analyzer and compared it with manual microscopy for urine sediment analysis. Precision, linearity, carry-over, and method comparison were assessed. A total of 600 urine samples sent for urinalysis were analyzed using the UriSed automated microscopic analyzer and manual microscopy. Within-run and between-run precision of the UriSed for red blood cells (RBC) and white blood cells (WBC) were acceptable at all levels (CV < 20%). Within-run and between-run imprecision of the UriSed for casts, squamous epithelial cells (EPI), and bacteria (BAC) was good at the middle and high levels (CV < 20%). The linearity analysis revealed substantial agreement between the measured value and the theoretical value of the UriSed for RBC, WBC, casts, EPI, and BAC (r > 0.95). There was no carry-over. Sensitivities and specificities for RBC, WBC, and squamous epithelial cells were more than 80% in this study. There is substantial agreement between the UriSed automated microscopic analyzer and manual microscopy. The UriSed also provides a rapid turnaround time.
Leg stiffness and stride frequency in human running.
Farley, C T; González, O
1996-02-01
When humans and other mammals run, the body's complex system of muscle, tendon and ligament springs behaves like a single linear spring ('leg spring'). A simple spring-mass model, consisting of a single linear leg spring and a mass equivalent to the animal's mass, has been shown to describe the mechanics of running remarkably well. Force platform measurements from running animals, including humans, have shown that the stiffness of the leg spring remains nearly the same at all speeds and that the spring-mass system is adjusted for higher speeds by increasing the angle swept by the leg spring. The goal of the present study is to determine the relative importance of changes to the leg spring stiffness and the angle swept by the leg spring when humans alter their stride frequency at a given running speed. Human subjects ran on a treadmill-mounted force platform at 2.5 m s⁻¹ while using a range of stride frequencies from 26% below to 36% above the preferred stride frequency. Force platform measurements revealed that the stiffness of the leg spring increased by 2.3-fold, from 7.0 to 16.3 kN m⁻¹, between the lowest and highest stride frequencies. The angle swept by the leg spring decreased at higher stride frequencies, partially offsetting the effect of the increased leg spring stiffness on the mechanical behavior of the spring-mass system. We conclude that the most important adjustment of the body's spring system to accommodate higher stride frequencies is that the leg spring becomes stiffer.
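In the planar spring-mass model used here, leg stiffness is the ratio of peak vertical ground reaction force to peak leg-spring compression. A minimal sketch (function name and example numbers are illustrative, chosen to land inside the reported 7.0-16.3 kN m⁻¹ range):

def leg_stiffness(f_peak, delta_l):
    """Leg-spring stiffness in the planar spring-mass model:
    k_leg = F_peak / dL, with F_peak the peak vertical ground reaction
    force [N] and dL the peak leg-spring compression [m]."""
    return f_peak / delta_l

# e.g. a 2.0 kN peak force and 0.25 m compression -> 8 kN/m
print(leg_stiffness(2000.0, 0.25) / 1000.0, "kN/m")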
Structural and lithologic study of northern coast ranges and Sacramento Valley, California
NASA Technical Reports Server (NTRS)
Rich, E. I. (Principal Investigator)
1973-01-01
The author has identified the following significant results. The pattern of linear systems within the project area has been extended into the western foothill belt of the Sierra Nevada. The chief pattern of linear features in the western Sierran foothill belt trends about N. 10 - 15 deg W., but in the vicinity of the Feather River the trend of the features abruptly changes to about N. 50-60 deg W and appears to be contiguous across the Sacramento Valley with a similar system of linear features in the Coast Ranges. The linear features in the Modoc Plateau and Klamath Mt. areas appear unrelated to the systems detected in the Coast Ranges of Sierran foothill belt. Although the change in trend of the Sierran structural features has been previously suggested and the interrelationship of the Klamath Mt. region with the northern Sierra Nevadas has been postulated, the data obtained from the ERTS-1 imagery strengthens these notions and provides for the first time evidence of a direct connection of the structural trends within the alluviated part of the Sacramento Valley. In addition rocks of Pleistocene and Holocene age are offset by some of the linear features seen on ERTS-1 imagery and hence may record the latest episode of geologic deformation in north-central California.
The MICE grand challenge lightcone simulation - I. Dark matter clustering
NASA Astrophysics Data System (ADS)
Fosalba, P.; Crocce, M.; Gaztañaga, E.; Castander, F. J.
2015-04-01
We present a new N-body simulation from the Marenostrum Institut de Ciències de l'Espai (MICE) collaboration, the MICE Grand Challenge (MICE-GC), containing about 70 billion dark matter particles in a (3 Gpc h-1)3 comoving volume. Given its large volume and fine spatial resolution, spanning over five orders of magnitude in dynamic range, it allows an accurate modelling of the growth of structure in the universe from the linear through the highly non-linear regime of gravitational clustering. We validate the dark matter simulation outputs using 3D and 2D clustering statistics, and discuss mass-resolution effects in the non-linear regime by comparing to previous simulations and the latest numerical fits. We show that the MICE-GC run allows for a measurement of the BAO feature with per cent level accuracy and compare it to state-of-the-art theoretical models. We also use sub-arcmin resolution pixelized 2D maps of the dark matter counts in the lightcone to make tomographic analyses in real and redshift space. Our analysis shows the simulation reproduces the Kaiser effect on large scales, whereas we find a significant suppression of power on non-linear scales relative to the real space clustering. We complete our validation by presenting an analysis of the three-point correlation function in this and previous MICE simulations, finding further evidence for mass-resolution effects. This is the first of a series of three papers in which we present the MICE-GC simulation, along with a wide and deep mock galaxy catalogue built from it. This mock is made publicly available through a dedicated web portal, http://cosmohub.pic.es.
A harmonic linear dynamical system for prominent ECG feature extraction.
Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc
2014-01-01
Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted by preprocessing analysis on multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The comprehensible and interpretable features discovered by the proposed feature extraction methodology support accurate and reliable clustering. In particular, the empirical evaluation results of the proposed method demonstrate improved clustering performance compared to previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.
A face and palmprint recognition approach based on discriminant DCT feature extraction.
Jing, Xiao-Yuan; Zhang, David
2004-12-01
In the field of image processing and recognition, the discrete cosine transform (DCT) and linear discrimination are two widely used techniques. Based on them, we present a new face and palmprint recognition approach in this paper. It first uses a two-dimensional separability judgment to select the DCT frequency bands with favorable linear separability. Then, from the selected bands, it extracts the linear discriminative features by an improved Fisherface method and performs the classification by the nearest neighbor classifier. We analyze in detail the theoretical advantages of our approach in feature extraction. The experiments on face databases and a palmprint database demonstrate that, compared to the state-of-the-art linear discrimination methods, our approach obtains better classification performance. It can significantly improve the recognition rates for face and palmprint data and effectively reduce the dimension of the feature space.
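A minimal sketch of the pipeline shape this describes: a 2D DCT, retention of selected low-frequency coefficients, a linear discriminant projection, then nearest-neighbor classification. The fixed top-left coefficient block stands in for the paper's separability-based band selection, which is not reproduced; X_imgs and y are assumed inputs:

import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def dct_features(img, k=8):
    """2D DCT, keeping the top-left k x k low-frequency block."""
    return dctn(img, norm="ortho")[:k, :k].ravel()

def fit_pipeline(X_imgs, y):
    """X_imgs: list of 2D face/palmprint arrays; y: class labels."""
    X = np.array([dct_features(im) for im in X_imgs])
    lda = LinearDiscriminantAnalysis().fit(X, y)          # linear projection
    knn = KNeighborsClassifier(n_neighbors=1).fit(lda.transform(X), y)
    return lda, knn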
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fave, X; Court, L; UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX
Purpose: To determine how radiomics features change during radiation therapy and whether those changes (delta-radiomics features) can improve prognostic models built with clinical factors. Methods: 62 radiomics features, including histogram, co-occurrence, run-length, gray-tone difference, and shape features, were calculated from pretreatment and weekly intra-treatment CTs for 107 stage III NSCLC patients (5-9 images per patient). Image preprocessing for each feature was determined using the set of pretreatment images: bit-depth resampling and/or a smoothing filter were tested for their impact on volume-correlation and significance of each feature in univariate Cox regression models to maximize their information content. Next, the optimized features were calculated from the intra-treatment images and tested in linear mixed-effects models to determine which features changed significantly with dose-fraction. The slopes of these significant features were defined as delta-radiomics features. To test their prognostic potential, multivariate Cox regression models were fitted, first using only clinical features and then clinical+delta-radiomics features, for overall survival, local recurrence, and distant metastases. Leave-one-out cross validation was used for model fitting and patient predictions. Concordance indices (c-index) and p-values for the log-rank test with patients stratified at the median were calculated. Results: Approximately one-half of the 62 optimized features required no preprocessing, one-fourth required smoothing, and one-fourth required smoothing and resampling. Of these, 54 changed significantly during treatment. For overall survival, the c-index improved from 0.52 for clinical factors alone to 0.62 for clinical+delta-radiomics features. For distant metastases, the c-index improved from 0.53 to 0.58, while for local recurrence it did not improve. Patient stratification significantly improved (p-value < 0.05) for overall survival and distant metastases when delta-radiomics features were included. The delta-radiomics versions of autocorrelation, kurtosis, and compactness were selected most frequently in leave-one-out iterations. Conclusion: Weekly changes in radiomics features can potentially be used to evaluate treatment response and predict patient outcomes. High-risk patients could be recommended for dose escalation or consolidation chemotherapy. This project was funded in part by grants from the National Cancer Institute (NCI) and the Cancer Prevention Research Institute of Texas (CPRIT).
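The delta features are slopes over the weekly scans. The paper estimates them inside linear mixed-effects models; a per-patient ordinary least-squares line is the simplest stand-in (the week/value numbers below are made up):

import numpy as np

def delta_feature(weeks, values):
    """Slope of a radiomics feature across weekly intra-treatment scans,
    fitted by ordinary least squares (a simple stand-in for the paper's
    mixed-effects estimate)."""
    slope, _intercept = np.polyfit(weeks, values, deg=1)
    return slope

# e.g. kurtosis measured at weeks 0..5 for one patient (made-up numbers)
print(delta_feature([0, 1, 2, 3, 4, 5], [3.1, 3.0, 2.8, 2.7, 2.5, 2.4]))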
NASA Technical Reports Server (NTRS)
Guedry, F. E.; Paloski, W. F. (Principal Investigator)
1996-01-01
When head motion includes a linear velocity component, the eye velocity required to track an earth-fixed target depends upon: a) angular and linear head velocity, b) target distance, and c) direction of gaze relative to the motion trajectory. Recent research indicates that eye movements (LVOR), presumably otolith-mediated, partially compensate for linear velocity in small head excursions on small devices. Canal-mediated eye velocity (AVOR), otolith-mediated eye velocity (LVOR), and ocular torsion (OT) can be measured, one by one, on small devices. However, response dynamics that depend upon the ratio of linear to angular velocity in the motion trajectory and on subject orientation relative to the trajectory are present in a centrifuge paradigm. With this paradigm, two 3-min runs yield measures of: LVOR differentially modulated by different subject orientations in the two runs; OT dynamics in four conditions; two directions of "steady-state" OT; and two directions of AVOR. Efficient assessment of the dynamics (and of the underlying central integrative processes) may require a centrifuge radius of 1.0 meter or more. Clinical assessment of the spatial orientation system should include evaluation of the central integrative processes that determine the dynamics of these responses.
Fully 3D modeling of tokamak vertical displacement events with realistic parameters
NASA Astrophysics Data System (ADS)
Pfefferle, David; Ferraro, Nathaniel; Jardin, Stephen; Bhattacharjee, Amitava
2016-10-01
In this work, we model the complex multi-domain and highly non-linear physics of Vertical Displacement Events (VDEs), one of the most damaging off-normal events in tokamaks, with the implicit 3D extended MHD code M3D-C1. The code has recently acquired the capability to include finite-thickness conducting structures within the computational domain. By exploiting the possibility of running a linear 3D calculation on top of a non-linear 2D simulation, we monitor the non-axisymmetric stability and assess the eigenstructure of kink modes as the simulation proceeds. Once a stability boundary is crossed, a fully 3D non-linear calculation is launched for the remainder of the simulation, starting from an earlier time of the 2D run. This procedure, along with adaptive zoning, greatly increases the efficiency of the calculation and allows VDE simulations to be performed with realistic parameters and high resolution. Simulations are being validated with NSTX data, where both axisymmetric (toroidally averaged) and non-axisymmetric induced and conductive (halo) currents have been measured. This work is supported by US DOE Grant DE-AC02-09CH11466.
Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study
Radhakrishnan, Hari; Rouson, Damian W. I.; Morris, Karla; ...
2015-01-01
This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
Responses to Intensity-Shifted Auditory Feedback during Running Speech
ERIC Educational Resources Information Center
Patel, Rupal; Reilly, Kevin J.; Archibald, Erin; Cai, Shanqing; Guenther, Frank H.
2015-01-01
Purpose: Responses to intensity perturbation during running speech were measured to understand whether prosodic features are controlled in an independent or integrated manner. Method: Nineteen English-speaking healthy adults (age range = 21-41 years) produced 480 sentences in which emphatic stress was placed on either the 1st or 2nd word. One…
FAST Modularization Framework for Wind Turbine Simulation: Full-System Linearization
Jonkman, Jason M.; Jonkman, Bonnie J.
2016-10-03
The wind engineering community relies on multiphysics engineering software to run nonlinear time-domain simulations e.g. for design-standards-based loads analysis. Although most physics involved in wind energy are nonlinear, linearization of the underlying nonlinear system equations is often advantageous to understand the system response and exploit well-established methods and tools for analyzing linear systems. Here, this paper presents the development and verification of the new linearization functionality of the open-source engineering tool FAST v8 for land-based wind turbines, as well as the concepts and mathematical background needed to understand and apply it correctly.
Validation of Supersonic Film Cooling Modeling for Liquid Rocket Engine Applications
NASA Technical Reports Server (NTRS)
Morris, Christopher I.; Ruf, Joseph H.
2010-01-01
Topics include: upper stage engine key requirements and design drivers; Calspan "stage 1" results, He slot injection into hypersonic flow (air); test articles for shock generator diagram, slot injector details, and instrumentation positions; test conditions; modeling approach; 2-d grid used for film cooling simulations of test article; heat flux profiles from 2-d flat plate simulations (run #4); heat flux profiles from 2-d backward facing step simulations (run #43); isometric sketch of single coolant nozzle, and x-z grid of half-nozzle domain; comparison of 2-d and 3-d simulations of coolant nozzles (run #45); flowfield properties along coolant nozzle centerline (run #45); comparison of 3-d CFD nozzle flow calculations with experimental data; nozzle exit plane reduced to linear profile for use in 2-d film-cooling simulations (run #45); synthetic Schlieren image of coolant injection region (run #45); axial velocity profiles from 2-d film-cooling simulation (run #45); coolant mass fraction profiles from 2-d film-cooling simulation (run #45); heat flux profiles from 2-d film cooling simulations (run #45); heat flux profiles from 2-d film cooling simulations (runs #47, #45, and #47); 3-d grid used for film cooling simulations of test article; heat flux contours from 3-d film-cooling simulation (run #45); and heat flux profiles from 3-d and 2-d film cooling simulations (runs #44, #46, and #47).
Extraction of linear features on SAR imagery
NASA Astrophysics Data System (ADS)
Liu, Junyi; Li, Deren; Mei, Xin
2006-10-01
Linear features are usually extracted from SAR imagery by edge detectors derived from the contrast ratio edge detector, which has a constant probability of false alarm. The Hough Transform, on the other hand, is an elegant way of extracting global features like curve segments from binary edge images. The Randomized Hough Transform can reduce the computation time and memory usage of the HT drastically, but it invalidates a great number of accumulator cells during random sampling. In this paper, we propose a new approach to extract linear features from SAR imagery: an almost automatic algorithm based on edge detection and the Randomized Hough Transform. The presented improved method makes full use of the directional information of each edge candidate point so as to avoid the invalid-accumulation problem. Applied results are in good agreement with the theoretical study, and the main linear features on SAR imagery have been extracted automatically. The method saves storage space and computational time, which shows its effectiveness and applicability.
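A minimal sketch of the core Randomized Hough Transform loop for straight lines: sample two edge points, convert the line through them to (theta, rho) normal form, and vote in a sparse accumulator. Parameter names, resolutions, and thresholds are illustrative, and the paper's directional-information filter is not included:

import numpy as np

def rht_lines(edge_points, n_iter=5000, acc_res=(1.0, 0.5), min_votes=30, rng=None):
    """Randomized Hough Transform for straight lines.

    Repeatedly samples two edge points, converts the line through them
    to (theta, rho) normal form, and accumulates votes in a sparse dict.
    Returns the (theta, rho) cells whose votes reach min_votes.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    acc = {}
    pts = np.asarray(edge_points, dtype=float)
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = pts[rng.choice(len(pts), 2, replace=False)]
        theta = np.arctan2(x1 - x2, y2 - y1)          # direction of the line normal
        rho = x1 * np.cos(theta) + y1 * np.sin(theta)  # signed distance to origin
        key = (int(round(np.degrees(theta) / acc_res[0])),
               int(round(rho / acc_res[1])))
        acc[key] = acc.get(key, 0) + 1
    return [k for k, v in acc.items() if v >= min_votes]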
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gremos, K.; Sendlein, L.V.A.
1993-03-01
Significant areas of the continental US (Kentucky included) are underlain by karstified limestone. In many of these areas, agriculture is a leading business and a potential non-point source of pollution to the groundwater. A study is underway to assess Best Management Practices (BMP) on a farm in north-central Woodford County in Kentucky. As part of the study, various computer-based decision models for integrated farm operation will be assessed. Because surface area and runoff are integral parts of all of these models, diversion of surface runoff through karst features such as sinkholes will modify predictions from these models. This study utilizes aerial photographs to identify all sinkholes on the property and characterize their morphometric parameters, such as length, width, depth, area, and distribution. Sinkhole areas represent approximately 10 percent of the area, and all but a few discharge within the basin monitored as part of the model. The bedrock geology and fractures of the area have been defined using fracture trace analysis and a rectified drainage linear analysis. Surface drainage patterns, spring distribution, and stream and spring discharge data have been collected. Dye tracing has identified groundwater basins whose catchment area is outside the boundaries of the study site.
Sodium Binding Sites and Permeation Mechanism in the NaChBac Channel: A Molecular Dynamics Study.
Guardiani, Carlo; Rodger, P Mark; Fedorenko, Olena A; Roberts, Stephen K; Khovanov, Igor A
2017-03-14
NaChBac was the first discovered bacterial sodium voltage-dependent channel, yet computational studies are still limited due to the lack of a crystal structure. In this work, a pore-only construct built using the NavMs template was investigated using unbiased molecular dynamics and metadynamics. The potential of mean force (PMF) from the unbiased run features four minima, three of which correspond to sites IN, CEN, and HFS discovered in NavAb. During the run, the selectivity filter (SF) is spontaneously occupied by two ions, and frequent access of a third one is often observed. In the innermost sites IN and CEN, Na + is fully hydrated by six water molecules and occupies an on-axis position. In site HFS sodium interacts with a glutamate and a serine from the same subunit and is forced to adopt an off-axis placement. Metadynamics simulations biasing one and two ions show an energy barrier in the SF that prevents single-ion permeation. An analysis of the permeation mechanism was performed both computing minimum energy paths in the axial-axial PMF and through a combination of Markov state modeling and transition path theory. Both approaches reveal a knock-on mechanism involving at least two but possibly three ions. The currents predicted from the unbiased simulation using linear response theory are in excellent agreement with single-channel patch-clamp recordings.
Automatic parquet block sorting using real-time spectral classification
NASA Astrophysics Data System (ADS)
Astrom, Anders; Astrand, Erik; Johansson, Magnus
1999-03-01
This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information onto an image sensor, a method often referred to as imaging spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features on the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards, showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2000 lines/s. This opens up the possibility to maintain high production speed and still measure with good resolution.
Phan, Xuan; Grisbrook, Tiffany L; Wernli, Kevin; Stearne, Sarah M; Davey, Paul; Ng, Leo
2017-08-01
This study aimed to determine whether a quantifiable relationship exists between the peak sound amplitude and the peak vertical ground reaction force (vGRF) and vertical loading rate during running. It also investigated whether differences in peak sound amplitude, contact time, lower limb kinematics, kinetics and foot strike technique existed when participants were verbally instructed to run quietly compared to their normal running. A total of 26 males completed running trials for two sound conditions: normal running and quiet running. Simple linear regressions revealed no significant relationships between impact sound and peak vGRF in either condition, or between impact sound and vertical loading rate in the normal condition. t-Tests revealed significant within-subject decreases in peak sound, peak vGRF and vertical loading rate during the quiet compared to the normal running condition. During the normal running condition, 15.4% of participants utilised a non-rearfoot strike technique, compared to 76.9% in the quiet condition, which was corroborated by an increased ankle plantarflexion angle at initial contact. This study demonstrated that quieter impact sound is not directly associated with a lower peak vGRF or vertical loading rate. However, given the instruction to run quietly, participants effectively reduced peak impact sound, peak vGRF and vertical loading rate.
Comparing Two Tools for Mobile-Device Forensics
2017-09-01
baseline standard. 2.4 Mobile Operating Systems "A mobile operating system is an operating system that is specifically designed to run on mobile devices" [7]. There are many different types of mobile operating systems and they are constantly changing, which means an operating... to this is that the security features make forensic analysis more difficult [11]. 2.4.2 iPhone "The iPhone runs an operating system called iOS. It is a
Setting Standards for Medically-Based Running Analysis
Vincent, Heather K.; Herman, Daniel C.; Lear-Barnes, Leslie; Barnes, Robert; Chen, Cong; Greenberg, Scott; Vincent, Kevin R.
2015-01-01
Setting standards for medically based running analyses is necessary to ensure that runners receive a high-quality service from practitioners. Medical and training history, physical and functional tests, and motion analysis of running at self-selected and faster speeds are key features of a comprehensive analysis. Self-reported history and movement symmetry are critical factors that require follow-up therapy or long-term management. Pain or injury is typically the result of a functional deficit above or below the site along the kinematic chain. PMID:25014394
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.
2016-12-01
The geometry of faults is subject to a large degree of uncertainty. Being buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging: they require optimized codes able to run efficiently on high-performance computing infrastructure and simultaneously handle complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer in complexity to source models inverted from observation. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, relative to the rupture's inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip-weakening friction on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.
A FORTRAN program for multivariate survival analysis on the personal computer.
Mulder, P G
1988-01-01
In this paper, a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include the variable time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained from the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.
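A compact modern analogue of the estimation step, assuming the simplest case of a constant (exponential) log-linear hazard lambda_i = exp(x_i . beta) with follow-up times and event indicators; a generic Newton-Raphson sketch, not the original FORTRAN program:

import numpy as np

def fit_loglinear_hazard(X, time, event, n_iter=25, tol=1e-8):
    """Newton-Raphson MLE for an exponential survival model with
    log-linear hazard lambda_i = exp(x_i . beta); time = follow-up,
    event = 1 if the failure of interest occurred, else 0.

    Log-likelihood: sum_i [ event_i * x_i.beta - time_i * exp(x_i.beta) ].
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        lam = np.exp(X @ beta)
        score = X.T @ (event - time * lam)            # gradient
        hess = -(X * (time * lam)[:, None]).T @ X     # Hessian (neg. definite)
        step = np.linalg.solve(hess, score)
        beta -= step                                  # Newton step
        if np.max(np.abs(step)) < tol:
            break
    return beta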
Sonderegger, Karin; Tschopp, Markus; Taube, Wolfgang
2016-01-01
There are several approaches to quantifying physical load in team sports using positional data. Distances in different speed zones are most commonly used. Recent studies have used acceleration data in addition in order to take short intense actions into account. However, the fact that acceleration decreases with increasing initial running speed is ignored and therefore introduces a bias. The aim of our study was to develop a new methodological approach that removes this bias. For this purpose, percentage acceleration was calculated as the ratio of the maximal acceleration of the action (amax,action) and the maximal voluntary acceleration (amax) that can be achieved for a particular initial running speed (percentage acceleration [%] = amax,action / amax * 100). To define amax, seventy-two highly trained junior male soccer players (17.1 ± 0.6 years) completed maximal sprints from standing and three different constant initial running speeds (vinit; trotting: ~6.0 km·h-1; jogging: ~10.8 km·h-1; running: ~15.0 km·h-1). The amax was 6.01 ± 0.55 from a standing start, 4.33 ± 0.40 from trotting, 3.20 ± 0.49 from jogging and 2.29 ± 0.34 m·s-2 from running. The amax correlated significantly with vinit (r = -0.98) and the linear regression equation of highly-trained junior soccer players was: amax = -0.23 * vinit + 5.99. Using linear regression analysis, we propose to classify high-intensity actions as accelerations >75% of the amax, corresponding to acceleration values for our population of >4.51 initiated from standing, >3.25 from trotting, >2.40 from jogging, and >1.72 m·s-2 from running. The use of percentage acceleration avoids the bias of underestimating actions with high and overestimating actions with low initial running speed. Furthermore, percentage acceleration allows determining individual intensity thresholds that are specific for one population or one single player.
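The classification rule transcribes directly into code. A sketch using the published regression (speeds in km·h⁻¹ as in the abstract, accelerations in m·s⁻²; note the paper's thresholds were derived from the measured a_max values, so this regression-based version is an approximation):

def percentage_acceleration(a_action, v_init):
    """Acceleration of an action as a percentage of the maximal voluntary
    acceleration achievable at the same initial running speed, using the
    paper's regression a_max = -0.23 * v_init + 5.99 (v_init in km/h,
    accelerations in m/s^2)."""
    a_max = -0.23 * v_init + 5.99
    return a_action / a_max * 100.0

def is_high_intensity(a_action, v_init, threshold=75.0):
    """High-intensity action: acceleration above 75% of a_max."""
    return percentage_acceleration(a_action, v_init) > threshold

print(is_high_intensity(2.5, 15.0))  # 2.5 m/s^2 from ~15 km/h -> True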
The ins and outs of modelling vertical displacement events
NASA Astrophysics Data System (ADS)
Pfefferle, David
2017-10-01
Of the many reasons a plasma discharge disrupts, Vertical Displacement Events (VDEs) lead to the most severe forces and stresses on the vacuum vessel and Plasma Facing Components (PFCs). After loss of positional control, the plasma column drifts across the vacuum vessel and comes in contact with the first wall, at which point the stored magnetic and thermal energy is abruptly released. The vessel forces have been extensively modelled in 2D but, under the constraint of axisymmetry, the fundamental 3D effects that lead to toroidal peaking, sideways forces, field-line stochastisation and halo current rotation have been largely overlooked. In this work, we present the main results of an intensive VDE modelling activity using the implicit 3D extended MHD code M3D-C1 and share our experience with the multi-domain and highly non-linear physics encountered. Building on a decade of code development by the M3D-C1 group, highlighted by the inclusion of a finite-thickness resistive vacuum vessel within the computational domain, a series of fully 3D non-linear simulations is performed using realistic transport coefficients based on the reconstruction of so-called NSTX frozen VDEs, in which the feedback control was purposely switched off to trigger a vertical instability. The vertical drift phase, the evolution of the current quench and the onset of 3D halo/eddy currents are diagnosed and investigated in detail. The sensitivity of the current quench to parameter changes is assessed via 2D non-linear runs. The growth of individual toroidal modes is monitored via linear-complex runs. The intricate evolution of the plasma, which decays to a large extent in force balance with induced halo/wall currents, is carefully resolved via 3D non-linear runs. The location, amplitude and rotation of normal currents and wall forces are analysed and compared with experimental traces.
Efficiently Sorting Zoo-Mesh Data Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, R; Max, N; Silva, C
The authors describe the SXMPVO algorithm for performing a visibility ordering of zoo-meshed polyhedra. The algorithm runs in linear time in practice, and the visibility ordering it produces is exact.
Reproducibility and Prognosis of Quantitative Features Extracted from CT Images
Balagurunathan, Yoganand; Gu, Yuhua; Wang, Hua; Kumar, Virendra; Grove, Olya; Hawkins, Sam; Kim, Jongphil; Goldgof, Dmitry B; Hall, Lawrence O; Gatenby, Robert A; Gillies, Robert J
2014-01-01
We study the reproducibility of quantitative imaging features that are used to describe tumor shape, size, and texture from computed tomography (CT) scans of non-small cell lung cancer (NSCLC). CT images are dependent on various scanning factors. We focus on characterizing image features that are reproducible in the presence of variations due to patient factors and segmentation methods. Thirty-two NSCLC nonenhanced lung CT scans were obtained from the Reference Image Database to Evaluate Response data set. The tumors were segmented using both manual (radiologist expert) and ensemble (software-automated) methods. A set of features (219 three-dimensional and 110 two-dimensional) was computed, and quantitative image features were statistically filtered to identify a subset of reproducible and nonredundant features. The variability in the repeated experiment was measured by the test-retest concordance correlation coefficient (CCC_TreT). The natural range in the features, normalized to variance, was measured by the dynamic range (DR). In this study, there were 29 features across segmentation methods found with CCC_TreT and DR ≥ 0.9 and R²_Bet ≥ 0.95. These reproducible features were tested for predicting radiologist prognostic score; some texture features (run-length and Laws kernels) had an area under the curve of 0.9. The representative features were tested for their prognostic capabilities using an independent NSCLC data set (59 lung adenocarcinomas), where one of the texture features, run-length gray-level nonuniformity, was statistically significant in separating the samples into survival groups (P ≤ .046). PMID:24772210
Varley, Matthew C; Di Salvo, Valter; Modonutti, Mattia; Gregson, Warren; Mendez-Villanueva, Alberto
2018-03-01
This study investigated the effects of successive matches on match-running in elite under-23 soccer players during an international tournament. Match-running data was collected using a semi-automated multi-camera tracking system during an international under-23 tournament from all participating outfield players. Players who played 100% of all group stage matches were included (3 matches separated by 72 h, n = 44). Differences in match-running performance between matches were identified using a generalised linear mixed model. There were no clear effects for total, walking, jogging, running, high-speed running and sprinting distance between matches 1 and 3 (effect size (ES); -0.32 to 0.05). Positional analysis found that sprint distance was largely maintained from matches 1 to 3 across all positions. Attackers had a moderate decrease in total, jogging and running distance between matches 1 and 3 (ES; -0.72 to -0.66). Classifying players as increasers or decreasers in match-running revealed that match-running changes are susceptible to individual differences. Sprint performance appears to be maintained over successive matches regardless of playing position. However, reductions in other match-running categories vary between positions. Changes in match-running over successive matches affect individuals differently; thus, players should be monitored on an individual basis.
New operator assistance features in the CMS Run Control System
NASA Astrophysics Data System (ADS)
Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. R.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Vougioukas, M.; Zejdl, P.
2017-10-01
During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.
78 FR 24037 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-24
... and to detect a pump running in an empty fuel tank. We are issuing this AD to reduce the potential of... features to detect electrical faults, to detect a pump running in an empty fuel tank, and to ensure that a fuel pump's operation is not affected by certain conditions. Comments We gave the public the...
Running Batch Jobs on Peregrine | High-Performance Computing | NREL
Using a resource feature to request different node types: Peregrine has several types of compute nodes; requesting an appropriate node type can avoid incompatibility and get the job running. More information about requesting different node types on Peregrine is available. Queues: in order to meet the needs of different types of jobs, nodes on Peregrine are available through a number of queues.
Exploring Peer-to-Peer Library Content and Engagement on a Student-Run Facebook Group
ERIC Educational Resources Information Center
van Beynen, Kaya; Swenson, Camielle
2016-01-01
Student-run Facebook groups offer librarians a new means of interacting with students in their native digital domain. Facebook groups, a service launched in 2010, enable university students to create a virtual forum to discuss their concerns and issues and to promote events. While still a relatively new feature, these groups are increasingly being…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thorson, L.D.
A description is given of a new version of the TRUMP (UCRL-14754) computer code, NOTRUMP, which runs on both the CDC-7600 and CRAY-1. There are slight differences in the input and major changes in output capability. A postprocessor, AFTER, is available to manipulate some of the new output features. Old data decks for TRUMP will normally run with only minor changes.
GALAHAD: 1. Pharmacophore identification by hypermolecular alignment of ligands in 3D
NASA Astrophysics Data System (ADS)
Richmond, Nicola J.; Abrams, Charlene A.; Wolohan, Philippa R. N.; Abrahamian, Edmond; Willett, Peter; Clark, Robert D.
2006-09-01
Alignment of multiple ligands based on shared pharmacophoric and pharmacosteric features is a long-recognized challenge in drug discovery and development. This is particularly true when the spatial overlap between structures is incomplete, in which case no good template molecule is likely to exist. Pair-wise rigid ligand alignment based on linear assignment (the LAMDA algorithm) has the potential to address this problem (Richmond et al. in J Mol Graph Model 23:199-209, 2004). Here we present the version of LAMDA embodied in the GALAHAD program, which carries out multi-way alignments by iterative construction of hypermolecules that retain the aggregate as well as the individual attributes of the ligands. We have also generalized the cost function from being purely atom-based to being one that operates on ionic, hydrogen bonding, hydrophobic and steric features. Finally, we have added the ability to generate useful partial-match 3D search queries from the hypermolecules obtained. By running frozen conformations through the GALAHAD program, one can utilize the extended version of LAMDA to generate pharmacophores and pharmacosteres that agree well with crystal structure alignments for a range of literature datasets, with minor adjustments of the default parameters generating even better models. Allowing for inclusion of partial match constraints in the queries yields pharmacophores that are consistently a superset of full-match pharmacophores identified in previous analyses, with the additional features representing points of potentially beneficial interaction with the target.
Mysterious Roving Rocks of Racetrack Playa
2017-12-08
The trails can be straight, or they can curve. Sometimes, two trails run alongside each other. Those two lines running from left to right in the back look like they were made by a car; but they were made by rocks. Photo credit: NASA/GSFC/Maggie McAdam To read a feature story on the Racetrack Playa go to: www.nasa.gov/topics/earth/features/roving-rocks.html
Stereo Refractive Imaging of Breaking Free-Surface Waves in the Surf Zone
NASA Astrophysics Data System (ADS)
Mandel, Tracy; Weitzman, Joel; Koseff, Jeffrey; Environmental Fluid Mechanics Laboratory Team
2014-11-01
Ocean waves drive the evolution of coastlines across the globe. Wave breaking suspends sediments, while wave run-up, run-down, and the undertow transport this sediment across the shore. Complex bathymetric features and natural biotic communities can influence all of these dynamics, and provide protection against erosion and flooding. However, our knowledge of the exact mechanisms by which this occurs, and how they can be modeled and parameterized, is limited. We have conducted a series of controlled laboratory experiments with the goal of elucidating these details. These have focused on quantifying the spatially-varying characteristics of breaking waves and developing more accurate techniques for measuring and predicting wave setup, setdown, and run-up. Using dynamic refraction stereo imaging, data on free-surface slope and height can be obtained over an entire plane. Wave evolution is thus obtained with high spatial precision. These surface features are compared with measures of instantaneous turbulence and mean currents within the water column. We then use this newly-developed ability to resolve three-dimensional surface features over a canopy of seagrass mimics, in order to validate theoretical formulations of wave-vegetation interactions in the surf zone.
Feature Mining and Health Assessment for Gearboxes Using Run-Up/Coast-Down Signals
Zhao, Ming; Lin, Jing; Miao, Yonghao; Xu, Xiaoqiang
2016-01-01
Vibration signals measured in the run-up/coast-down (R/C) processes usually carry rich information about the health status of machinery. However, a major challenge in R/C signals analysis lies in how to exploit more diagnostic information, and how this information could be properly integrated to achieve a more reliable maintenance decision. Aiming at this problem, a framework of R/C signals analysis is presented for the health assessment of gearbox. In the proposed methodology, we first investigate the data preprocessing and feature selection issues for R/C signals. Based on that, a sparsity-guided feature enhancement scheme is then proposed to extract the weak phase jitter associated with gear defect. In order for an effective feature mining and integration under R/C, a generalized phase demodulation technique is further established to reveal the evolution of modulation feature with operating speed and rotation angle. The experimental results indicate that the proposed methodology could not only detect the presence of gear damage, but also offer a novel insight into the dynamic behavior of gearbox. PMID:27827831
Ahlfeld, David P.; Barlow, Paul M.; Mulligan, Anne E.
2005-01-01
GWM is a Ground-Water Management Process for the U.S. Geological Survey modular three-dimensional ground-water model, MODFLOW-2000. GWM uses a response-matrix approach to solve several types of linear, nonlinear, and mixed-binary linear ground-water management formulations. Each management formulation consists of a set of decision variables, an objective function, and a set of constraints. Three types of decision variables are supported by GWM: flow-rate decision variables, which are withdrawal or injection rates at well sites; external decision variables, which are sources or sinks of water that are external to the flow model and do not directly affect the state variables of the simulated ground-water system (heads, streamflows, and so forth); and binary variables, which have values of 0 or 1 and are used to define the status of flow-rate or external decision variables. Flow-rate decision variables can represent wells that extend over one or more model cells and be active during one or more model stress periods; external variables also can be active during one or more stress periods. A single objective function is supported by GWM, which can be specified to either minimize or maximize the weighted sum of the three types of decision variables. Four types of constraints can be specified in a GWM formulation: upper and lower bounds on the flow-rate and external decision variables; linear summations of the three types of decision variables; hydraulic-head based constraints, including drawdowns, head differences, and head gradients; and streamflow and streamflow-depletion constraints. The Response Matrix Solution (RMS) Package of GWM uses the Ground-Water Flow Process of MODFLOW to calculate the change in head at each constraint location that results from a perturbation of a flow-rate variable; these changes are used to calculate the response coefficients. For linear management formulations, the resulting matrix of response coefficients is then combined with other components of the linear management formulation to form a complete linear formulation; the formulation is then solved by use of the simplex algorithm, which is incorporated into the RMS Package. Nonlinear formulations arise for simulated conditions that include water-table (unconfined) aquifers or head-dependent boundary conditions (such as streams, drains, or evapotranspiration from the water table). Nonlinear formulations are solved by sequential linear programming; that is, repeated linearization of the nonlinear features of the management problem. In this approach, response coefficients are recalculated for each iteration of the solution process. Mixed-binary linear (or mildly nonlinear) formulations are solved by use of the branch and bound algorithm, which is also incorporated into the RMS Package. Three sample problems are provided to demonstrate the use of GWM for typical ground-water flow management problems. These sample problems provide examples of how GWM input files are constructed to specify the decision variables, objective function, constraints, and solution process for a GWM run. The GWM Process runs with the MODFLOW-2000 Global and Ground-Water Flow Processes, but in its current form GWM cannot be used with the Observation, Sensitivity, Parameter-Estimation, or Ground-Water Transport Processes. The GWM Process is written with a modular structure so that new objective functions, constraint types, and solution algorithms can be added.
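The response-matrix approach can be illustrated compactly. The Python sketch below is a hypothetical simplification (the aquifer response, perturbation size, rates, and drawdown limits are all invented): it builds response coefficients by perturbing each candidate well in turn, then solves the resulting linear program, mirroring the simplex-based solution the RMS Package performs internally:

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical example: maximize total withdrawal from 3 candidate wells
    # subject to drawdown limits at 2 head-constraint locations.

    def simulate_drawdown(q):
        """Stand-in for a flow-model run: drawdown at the two constraint
        locations for a vector of well withdrawal rates q."""
        A = np.array([[0.02, 0.01, 0.005],
                      [0.008, 0.015, 0.02]])   # invented linear aquifer response
        return A @ q

    base = simulate_drawdown(np.zeros(3))
    dq = 100.0                                 # perturbation size

    # Response matrix: drawdown change per unit change in each well's rate
    R = np.column_stack([(simulate_drawdown(dq * e) - base) / dq
                         for e in np.eye(3)])

    max_drawdown = np.array([2.0, 1.5])        # head-based constraints (m)
    bounds = [(0.0, 500.0)] * 3                # bounds on flow-rate variables

    # linprog minimizes, so negate the objective to maximize total withdrawal
    res = linprog(c=-np.ones(3), A_ub=R, b_ub=max_drawdown - base, bounds=bounds)
    print("optimal withdrawal rates:", res.x)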
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
Xu, Jiucheng; Mu, Huiyu; Wang, Yun; Huang, Fangzhou
2018-01-01
The selection of feature genes with high recognition ability from gene expression profiles has gained great significance in biology. However, most existing methods have a high time complexity and poor classification performance. Motivated by this, an effective feature selection method, called supervised locally linear embedding and Spearman's rank correlation coefficient (SLLE-SC2), is proposed, based on the concepts of locally linear embedding and correlation coefficient algorithms. Supervised locally linear embedding takes class label information into account and improves the classification performance. Furthermore, Spearman's rank correlation coefficient is used to remove coexpressed genes. Experimental results obtained on four public tumor microarray datasets illustrate that our method is valid and feasible. PMID:29666661
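A rough sketch of the two ingredients follows, using scikit-learn's unsupervised LLE as a stand-in for the paper's supervised variant and a synthetic matrix in place of real expression data; the correlation threshold and all sizes are invented:

    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.datasets import make_classification
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Toy stand-in for a gene expression matrix: 100 samples x 500 genes
    X, y = make_classification(n_samples=100, n_features=500,
                               n_informative=20, random_state=0)

    # Step 1 (stand-in): remove co-expressed genes via Spearman correlation
    rho, _ = spearmanr(X)            # 500 x 500 rank-correlation matrix
    keep = []
    for j in range(X.shape[1]):
        if all(abs(rho[j, k]) < 0.8 for k in keep):
            keep.append(j)
    X_sel = X[:, keep]

    # Step 2 (stand-in): embed with LLE; the paper's supervised variant
    # additionally biases neighbour selection using the class labels
    emb = LocallyLinearEmbedding(n_neighbors=10, n_components=5)
    Z = emb.fit_transform(X_sel)

    print("genes kept:", len(keep))
    print("CV accuracy:", cross_val_score(SVC(), Z, y, cv=5).mean())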
Informedia at TRECVID2014: MED and MER, Semantic Indexing, Surveillance Event Detection
2014-11-10
multiple ranked lists for a given system query. Our system incorporates various retrieval methods such as Vector Space Model, tf-idf, BM25, language...separable space before applying the linear classifier. As the EFM is an approximation, we run the risk of a slight drop in performance. Figure 4 shows...validation set are fused. • CMU_Run3: After removing junk shots (by the junk /black frame detectors), MultiModal Pseudo Relevance Feedback (MMPRF) [12
Theta phase precession of grid and place cell firing in open environments
Jeewajee, A.; Barry, C.; Douchamps, V.; Manson, D.; Lever, C.; Burgess, N.
2014-01-01
Place and grid cells in the rodent hippocampal formation tend to fire spikes at successively earlier phases relative to the local field potential theta rhythm as the animal runs through the cell's firing field on a linear track. However, this ‘phase precession’ effect is less well characterized during foraging in two-dimensional open field environments. Here, we mapped runs through the firing fields onto a unit circle to pool data from multiple runs. We asked which of seven behavioural and physiological variables show the best circular–linear correlation with the theta phase of spikes from place cells in hippocampal area CA1 and from grid cells from superficial layers of medial entorhinal cortex. The best correlate was the distance to the firing field peak projected onto the animal's current running direction. This was significantly stronger than other correlates, such as instantaneous firing rate and time-in-field, but similar in strength to correlates with other measures of distance travelled through the firing field. Phase precession was stronger in place cells than grid cells overall, and robust phase precession was seen in traversals through firing field peripheries (although somewhat less than in traversals through the centre), consistent with phase coding of displacement along the current direction. This type of phase coding, of place field distance ahead of or behind the animal, may be useful for allowing calculation of goal directions during navigation. PMID:24366140
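The circular–linear correlation underlying such analyses can be computed by fitting the slope that maximizes the mean resultant length of the residual phases. A minimal Python sketch with invented data (one common approach to circular–linear fitting, not necessarily the authors' exact estimator):

    import numpy as np

    def circ_lin_fit(x, phase, slope_grid=np.linspace(-2, 2, 4001)):
        """Circular-linear fit: find the slope a (cycles per unit x) that
        maximizes the mean resultant length of the residual phases, and
        return it with the phase offset and the resultant length."""
        R = np.array([np.abs(np.mean(np.exp(1j * (phase - 2*np.pi*a*x))))
                      for a in slope_grid])
        a_best = slope_grid[np.argmax(R)]
        phi0 = np.angle(np.mean(np.exp(1j * (phase - 2*np.pi*a_best*x))))
        return a_best, phi0, R.max()

    # Invented example: spikes precess ~one cycle across a field traversal
    rng = np.random.default_rng(0)
    x = rng.random(200)                          # projected distance in field
    phase = (-2*np.pi*x + 0.3*rng.standard_normal(200)) % (2*np.pi)
    print(circ_lin_fit(x, phase))                # slope near -1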
Shi, Ping; Hu, Sijung; Yu, Hongliu
2018-02-01
The aim of this study was to analyze the recovery of heart rate variability (HRV) after treadmill exercise and to investigate the autonomic nervous system response after exercise. Frequency domain indices, i.e., LF (ms²), HF (ms²), LF (n.u.), HF (n.u.) and LF/HF, and lagged Poincaré plot width (SD1m) and length (SD2m) were introduced for comparison between the baseline period (Pre-E) before treadmill running and two periods after treadmill running (Post-E1 and Post-E2). The correlations between lagged Poincaré plot indices and frequency domain indices were applied to reveal the long-range correlation between linear and nonlinear indices during the recovery of HRV. The results suggested entirely attenuated autonomic nervous activity to the heart following the treadmill exercise. After the treadmill running, the sympathetic nerves achieved dominance and the parasympathetic activity was suppressed, which lasted for more than 4 min. The correlation coefficients between lagged Poincaré plot indices and spectral power indices could separate not only Pre-E and the two sessions after the treadmill running, but also the two recovery sessions, i.e., Post-E1 and Post-E2. The lagged Poincaré plot, as an innovative nonlinear method, showed better performance than linear frequency domain analysis and the conventional nonlinear Poincaré plot.
Kobayasi, Kohta I.; Hage, Steffen R.; Berquist, Sean; Feng, Jiang; Zhang, Shuyi; Metzner, Walter
2012-01-01
Mammalian vocalizations exhibit large variations in their spectrotemporal features, although it is still largely unknown which result from intrinsic biomechanical properties of the larynx and which are under direct neuromuscular control. Here we show that mere changes in laryngeal air flow yield several non-linear effects on sound production, in an isolated larynx preparation from horseshoe bats. Most notably, there are sudden jumps between two frequency bands used for either echolocation or communication in natural vocalizations. These jumps resemble changes in “registers” as in yodelling. In contrast, simulated contractions of the main larynx muscle produce linear frequency changes, but are limited to echolocation or communication frequencies. Only by combining non-linear and linear properties can this larynx therefore produce sounds covering the entire frequency range of natural calls. This may give behavioural meaning to yodelling-like vocal behaviour and reshape our thinking about how the brain controls the multitude of spectral vocal features in mammals. PMID:23149729
Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae
NASA Technical Reports Server (NTRS)
Rosu, Grigore; Havelund, Klaus
2001-01-01
The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
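The essence of the generated algorithms is a backwards dynamic-programming sweep that keeps only the subformula truth values at the current and next trace positions, which is what gives linear time and constant memory. A hand-written Python sketch of that evaluation strategy for a small finite-trace LTL fragment (an illustration of the technique, not the paper's algorithm generator):

    # Finite-trace LTL evaluation by backwards dynamic programming.
    # Formulas are nested tuples: ('p', name), ('not', f), ('and', f, g),
    # ('next', f), ('until', f, g), ('eventually', f), ('globally', f).

    def subformulas(f):
        """Postorder list, so children are evaluated before parents."""
        out = []
        def walk(g):
            for child in g[1:]:
                if isinstance(child, tuple):
                    walk(child)
            out.append(g)
        walk(f)
        return out

    def evaluate(f, trace):
        subs = subformulas(f)
        # Values "past the end" of the trace: G is vacuously true there,
        # everything else is false (strong semantics for X and U).
        nxt = {id(g): g[0] == 'globally' for g in subs}
        last = True
        for state in reversed(trace):       # one sweep: linear time
            now = {}                        # constant memory in trace length
            for g in subs:
                op = g[0]
                if op == 'p':            v = g[1] in state
                elif op == 'not':        v = not now[id(g[1])]
                elif op == 'and':        v = now[id(g[1])] and now[id(g[2])]
                elif op == 'next':       v = (not last) and nxt[id(g[1])]
                elif op == 'until':      v = now[id(g[2])] or (now[id(g[1])] and nxt[id(g)])
                elif op == 'eventually': v = now[id(g[1])] or nxt[id(g)]
                elif op == 'globally':   v = now[id(g[1])] and nxt[id(g)]
                now[id(g)] = v
            nxt, last = now, False
        return nxt[id(f)]

    # G(green -> F red) over a three-event trace
    f = ('globally', ('not', ('and', ('p', 'green'),
                              ('not', ('eventually', ('p', 'red'))))))
    print(evaluate(f, [{'green'}, set(), {'red'}]))   # True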
Trautwein, C.M.; Rowan, L.C.
1987-01-01
Linear structural features and hydrothermally altered rocks that were interpreted from Landsat data have been used by the U.S. Geological Survey (USGS) in regional mineral resource appraisals for more than a decade. In the past, linear features and alterations have been incorporated into models for assessing mineral resources potential by manually overlaying these and other data sets. Recently, USGS research into computer-based geographic information systems (GIS) for mineral resources assessment programs has produced several new techniques for data analysis, quantification, and integration to meet assessment objectives.
Geomorphic domains and linear features on Landsat images, Circle Quadrangle, Alaska
Simpson, S.L.
1984-01-01
A remote sensing study using Landsat images was undertaken as part of the Alaska Mineral Resource Assessment Program (AMRAP). Geomorphic domains A and B, identified on enhanced Landsat images, divide Circle quadrangle south of Tintina fault zone into two regional areas having major differences in surface characteristics. Domain A is a roughly rectangular, northeast-trending area of relatively low relief and simple, widely spaced drainages, except where igneous rocks are exposed. In contrast, domain B, which bounds two sides of domain A, is more intricately dissected showing abrupt changes in slope and relatively high relief. The northwestern part of geomorphic domain A includes a previously mapped tectonostratigraphic terrane. The southeastern boundary of domain A occurs entirely within the adjoining tectonostratigraphic terrane. The sharp geomorphic contrast along the southeastern boundary of domain A and the existence of known faults along this boundary suggest that the southeastern part of domain A may be a subdivision of the adjoining terrane. Detailed field studies would be necessary to determine the characteristics of the subdivision. Domain B appears to be divisible into large areas of different geomorphic terrains by east-northeast-trending curvilinear lines drawn on Landsat images. Segments of two of these lines correlate with parts of boundaries of mapped tectonostratigraphic terranes. On Landsat images prominent north-trending lineaments together with the curvilinear lines form a large-scale regional pattern that is transected by mapped north-northeast-trending high-angle faults. The lineaments indicate possible lithologic variations and/or structural boundaries. A statistical strike-frequency analysis of the linear features data for Circle quadrangle shows that northeast-trending linear features predominate throughout, and that most northwest-trending linear features are found south of Tintina fault zone. A major trend interval of N.64-72E. in the linear feature data, corresponds to the strike of foliations in metamorphic rocks and magnetic anomalies reflecting compositional variations suggesting that most linear features in the southern part of the quadrangle probably are related to lithologic variations brought about by folding and foliation of metamorphic rocks. A second important trend interval, N.14-35E., may be related to thrusting south of the Tintina fault zone, as high concentrations of linear features within this interval are found in areas of mapped thrusts. Low concentrations of linear features are found in areas of most igneous intrusives. High concentrations of linear features do not correspond to areas of mineralization in any consistent or significant way that would allow concentration patterns to be easily used as an aid in locating areas of mineralization. The results of this remote sensing study indicate that there are several possibly important areas where further detailed studies are warranted.
Geology and log responses of the Rose Run sandstone in Randolph Township, Portage County, Ohio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moyer, C.C.
1996-09-01
Approximately 75 wells have penetrated the Cambrian Rose Run sandstone in Randolph Township, Portage County, Ohio, about half of which should produce well beyond economic payout. Only one deep test (to the Rose Run or deeper) was drilled in this Township prior to 1990. Two separate and distinct Rose Run producing fields exist in the Township; the western field is predominantly gas-productive and the eastern is predominantly oil-productive. Both fields are on the north side of the Akron-Suffield Fault Zone, which is part of a regional cross-strike structural discontinuity extending from the Pittsburgh, Pennsylvania area northwestward to Lake Erie. This feature exhibits control over Berea, Oriskany, Newburg, Clinton, and Rose Run production.
Linearly Supporting Feature Extraction for Automated Estimation of Stellar Atmospheric Parameters
NASA Astrophysics Data System (ADS)
Li, Xiangru; Lu, Yu; Comte, Georges; Luo, Ali; Zhao, Yongheng; Wang, Yongjun
2015-05-01
We describe a scheme to extract linearly supporting (LSU) features from stellar spectra to automatically estimate the atmospheric parameters T_eff, log g, and [Fe/H]. “Linearly supporting” means that the atmospheric parameters can be accurately estimated from the extracted features through a linear model. The successive steps of the process are as follows: first, decompose the spectrum using a wavelet packet (WP) and represent it by the derived decomposition coefficients; second, detect representative spectral features from the decomposition coefficients using the proposed method Least Absolute Shrinkage and Selection Operator (LARSbs); third, estimate the atmospheric parameters T_eff, log g, and [Fe/H] from the detected features using a linear regression method. One prominent characteristic of this scheme is its ability to evaluate quantitatively the contribution of each detected feature to the atmospheric parameter estimate and also to trace back the physical significance of that feature. This work also shows that the usefulness of a component depends on both the wavelength and frequency. The proposed scheme has been evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS)/SEGUE and synthetic spectra calculated from Kurucz's NEWODF models. On real spectra, we extracted 23 features to estimate T_eff, 62 features for log g, and 68 features for [Fe/H]. Test consistencies between our estimates and those provided by the Spectroscopic Parameter Pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0062 dex for log T_eff (83 K for T_eff), 0.2345 dex for log g, and 0.1564 dex for [Fe/H]. For the synthetic spectra, the MAE test accuracies are 0.0022 dex for log T_eff (32 K for T_eff), 0.0337 dex for log g, and 0.0268 dex for [Fe/H].
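The three steps have natural off-the-shelf counterparts. A schematic Python sketch on synthetic "spectra" (pywt for the wavelet packet, and Lasso standing in for the LARSbs detection step; all sizes, parameters, and the toy spectra are invented):

    import numpy as np
    import pywt
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(0)

    def wp_features(spectrum, wavelet='db4', level=4):
        """Step 1: wavelet-packet decomposition; represent the spectrum by
        the concatenated coefficients of all level-4 nodes."""
        wp = pywt.WaveletPacket(data=spectrum, wavelet=wavelet, maxlevel=level)
        return np.concatenate([node.data for node in wp.get_level(level, 'freq')])

    # Synthetic stand-in: 300 "spectra" whose shape depends on a parameter t
    t = rng.uniform(0, 1, 300)                    # e.g. a scaled log T_eff
    wave = np.linspace(0, 1, 512)
    X = np.vstack([wp_features(np.exp(-(wave - 0.3*ti - 0.2)**2 / 0.01)
                               + 0.01*rng.standard_normal(512))
                   for ti in t])

    # Step 2: sparse detection of representative coefficients (Lasso here,
    # in place of the paper's LARSbs procedure)
    sel = Lasso(alpha=1e-3).fit(X, t)
    features = np.flatnonzero(sel.coef_)

    # Step 3: a plain linear model on the detected features
    lin = LinearRegression().fit(X[:, features], t)
    pred = lin.predict(X[:, features])
    print(len(features), "features, MAE =", np.mean(np.abs(pred - t)))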
Financial Performance of Health Insurers: State-Run Versus Federal-Run Exchanges.
Hall, Mark A; McCue, Michael J; Palazzolo, Jennifer R
2018-06-01
Many insurers incurred financial losses in individual markets for health insurance during 2014, the first year of Affordable Care Act mandated changes. This analysis examines key financial ratios of insurers to compare profitability in 2014 and 2013, identify factors driving financial performance, and contrast the financial performance of health insurers operating in state-run exchanges versus the federal exchange. Overall, the median loss of sampled insurers was -3.9%, no greater than their loss in 2013. Reduced administrative costs offset increases in medical losses. In 2014, insurers performed better in states with state-run exchanges than in states using the federal exchange. Medical loss ratios, more than administrative costs, drive the difference in performance between states with federal versus state-run exchanges. Policy makers looking to improve the financial performance of the individual market should focus on features that differentiate the markets associated with state-run versus federal exchanges.
Wave run-up on a high-energy dissipative beach
Ruggiero, P.; Holman, R.A.; Beach, R.A.
2004-01-01
Because of highly dissipative conditions and strong alongshore gradients in foreshore beach morphology, wave run-up data collected along the central Oregon coast during February 1996 stand in contrast to run-up data currently available in the literature. During a single data run lasting approximately 90 min, the significant vertical run-up elevation varied by a factor of 2 along the 1.6 km study site, ranging from 26 to 61% of the offshore significant wave height, and was found to be linearly dependent on the local foreshore beach slope, which varied by a factor of 5. Run-up motions on this high-energy dissipative beach were dominated by infragravity (low-frequency) energy with peak periods of approximately 230 s. Incident band energy levels were 2.5 to 3 orders of magnitude lower than the low-frequency spectral peaks, and typically 96% of the run-up variance was in the infragravity band. A broad region of the run-up spectra exhibited an f^-4 roll-off, typical of saturation, extending to frequencies lower than observed in previous studies. The run-up spectra were dependent on beach slope, with spectra for steeper foreshore slopes shifted toward higher frequencies than spectra for shallower foreshore slopes. At infragravity frequencies, run-up motions were coherent over alongshore length scales in excess of 1 km, significantly greater than decorrelation length scales on moderate to reflective beaches. Copyright 2004 by the American Geophysical Union.
Clinical evaluation of the Technicon Stat/Ion system.
Slaunwhite, D; Clements, J C; Reynoso, G
1977-02-01
1. We describe our evaluation of the Technicon Stat/Ion, an instrument which performs sodium, potassium, chloride and bicarbonate analyses simultaneously. 2. All four of the assays gave a linear response over the entire clinical range with insignificant carryover between specimens. 3. Precision studies for within-run variation were: sodium 0.3 percent, potassium 0.7 percent, chloride 0.5 percent and bicarbonate 1.6 percent. Day-to-day precision was similar to the within-run precision. 4. Comparison methods for sodium, potassium, chloride and bicarbonate, utilizing flame photometry, chloridometry and titration of released carbon dioxide respectively, gave the following linear regressions and correlation coefficients: sodium y = 0.96x + 5.5 (r = 0.988); potassium y = 1.01x + 0.0 (r = 0.996); chloride y = 0.99x + 1.0 (r = 0.993); bicarbonate y = 1.0x + 1.2 (r = 0.969).
Reduced-Order Models Based on POD-Tpwl for Compositional Subsurface Flow Simulation
NASA Astrophysics Data System (ADS)
Durlofsky, L. J.; He, J.; Jin, L. Z.
2014-12-01
A reduced-order modeling procedure applicable for compositional subsurface flow simulation will be described and applied. The technique combines trajectory piecewise linearization (TPWL) and proper orthogonal decomposition (POD) to provide highly efficient surrogate models. The method is based on a molar formulation (which uses pressure and overall component mole fractions as the primary variables) and is applicable for two-phase, multicomponent systems. The POD-TPWL procedure expresses new solutions in terms of linearizations around solution states generated and saved during previously simulated 'training' runs. High-dimensional states are projected into a low-dimensional subspace using POD. Thus, at each time step, only a low-dimensional linear system needs to be solved. Results will be presented for heterogeneous three-dimensional simulation models involving CO2 injection. Both enhanced oil recovery and carbon storage applications (with horizontal CO2 injectors) will be considered. Reasonably close agreement between full-order reference solutions and compositional POD-TPWL simulations will be demonstrated for 'test' runs in which the well controls differ from those used for training. Construction of the POD-TPWL model requires preprocessing overhead computations equivalent to about 3-4 full-order runs. Runtime speedups using POD-TPWL are, however, very significant - typically O(100-1000). The use of POD-TPWL for well control optimization will also be illustrated. For this application, some amount of retraining during the course of the optimization is required, which leads to smaller, but still significant, speedup factors.
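A toy version of the POD-TPWL machinery can be written in a few dozen lines: run a training simulation, extract a POD basis from the snapshots, precompute reduced linearizations around saved states, and advance new runs entirely in the reduced space. The Python sketch below uses an invented full-order model and invented dimensions, not a compositional flow simulator:

    import numpy as np

    rng = np.random.default_rng(1)
    n, r = 200, 8                        # full and reduced dimensions

    # Toy stand-in for the full-order simulator: x_{k+1} = g(x_k)
    A = -0.05 * np.eye(n) + 0.01 * rng.standard_normal((n, n)) / np.sqrt(n)
    def g(x):
        return x + A @ x - 0.001 * x**3  # mildly nonlinear "training" dynamics

    # Training run: save states (the TPWL linearization points)
    states = [rng.standard_normal(n)]
    for _ in range(100):
        states.append(g(states[-1]))
    S = np.array(states)

    # POD basis from the snapshot SVD
    U, _, _ = np.linalg.svd(S.T, full_matrices=False)
    Phi = U[:, :r]                       # n x r projection basis

    def jacobian(x, eps=1e-6):
        """Finite-difference Jacobian of g at x."""
        J = np.empty((n, n))
        for j in range(n):
            e = np.zeros(n); e[j] = eps
            J[:, j] = (g(x + e) - g(x)) / eps
        return J

    # Precompute reduced linearizations around a subset of saved states
    reduced = [(Phi.T @ s, Phi.T @ g(s), Phi.T @ jacobian(s) @ Phi)
               for s in states[::10]]

    def tpwl_step(z):
        """One reduced step: linearize around the nearest saved state."""
        zi, gzi, Jr = min(reduced, key=lambda p: np.linalg.norm(z - p[0]))
        return gzi + Jr @ (z - zi)

    # "Test" run from a perturbed initial condition, entirely in r dimensions
    z = Phi.T @ (states[0] + 0.1 * rng.standard_normal(n))
    for _ in range(50):
        z = tpwl_step(z)
    print("reduced state norm after 50 steps:", np.linalg.norm(z))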
[Evaluation of the Abbott Cell-Dyn Sapphire hematology analyzer].
Park, Younhee; Song, Jaewoo; Song, Sungwook; Song, Kyung Soon; Ahn, Mee Suk; Yang, Mi-Sook; Kim, Il; Choi, Jong Rak
2007-06-01
The performance of the Cell-Dyn Sapphire (Abbott Diagnostics, USA) was compared to the Bayer Advia 2120 (Bayer Diagnostics, USA), the Sysmex XE-2100 (Sysmex Corporation, Japan), and reference microscopy. Three hundred samples for routine CBC and WBC differentials were randomly chosen for a comparison analysis. The Cell-Dyn Sapphire system was evaluated for linearity, imprecision, inter-instrument correlation, and the white blood cell differential. The CBC parameters (WBC, RBC, hemoglobin, and platelets) showed significant linearity, with correlation coefficients greater than 0.99 (P<0.0001). Coefficients of variation (CV) for within-run imprecision and the WBC differential count were less than 5%, except for the total CV for monocytes, eosinophils, and basophils and the within-run CV for low-value eosinophil counts. The correlation coefficients with the manual count were lower for monocytes, eosinophils, and basophils than for neutrophils and lymphocytes. The correlation with the other hematology analyzers was significant except for basophils. These results demonstrate that the Cell-Dyn Sapphire has good linearity, acceptable reproducibility, minimal carryover, and performance comparable with the Sysmex XE-2100 and Advia 2120.
TOUGH2_MP: A parallel version of TOUGH2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Wu, Yu-Shu; Ding, Chris
2003-04-09
TOUGH2_MP is a massively parallel version of TOUGH2. It was developed for running on distributed-memory parallel computers to simulate large problems that may not be solved by the standard, single-CPU TOUGH2 code. The new code implements an efficient massively parallel scheme, while preserving the full capacity and flexibility of the original TOUGH2 code. The new software uses the METIS software package for grid partitioning and the AZTEC software package for linear-equation solving. The standard message-passing interface is adopted for communication among processors. Numerical performance of the current version of the code has been tested on CRAY-T3E and IBM RS/6000 SP platforms. In addition, the parallel code has been successfully applied to real field problems of multi-million-cell simulations for three-dimensional multiphase and multicomponent fluid and heat flow, as well as solute transport. In this paper, we review the development of TOUGH2_MP and discuss its basic features, modules, and their applications.
NASA Technical Reports Server (NTRS)
Moyerman, S.; Bierman, E.; Ade, P. A. R.; Aiken, R.; Barkats, D.; Bischoff, C.; Bock, J. J.; Chiang, H. C.; Dowell, C. D.; Duband, L.;
2012-01-01
The design and performance of a wide-bandwidth linear polarization modulator based on the Faraday effect is described. Faraday Rotation Modulators (FRMs) are solid-state polarization switches that are capable of modulation up to approximately 10 kHz. Six FRMs were utilized during the 2006 observing season in the Background Imaging of Cosmic Extragalactic Polarization (BICEP) experiment; three FRMs were used at each of BICEP's 100 and 150 GHz frequency bands. The technology was verified through high signal-to-noise detection of Galactic polarization using two of the six FRMs during four observing runs in 2006. The features exhibit strong agreement with BICEP's measurements of the Galaxy using non-FRM pixels and with the Galactic polarization models. This marks the first detection of high signal-to-noise mm-wave celestial polarization using fast, active optical modulation. The performance of the FRMs during periods when they were not modulated was also analyzed and compared to results from BICEP's 43 pixels without FRMs.
USC orthogonal multiprocessor for image processing with neural networks
NASA Astrophysics Data System (ADS)
Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid
1990-07-01
This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, which have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will be initially applied for image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can efficiently run on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.
The Effects of a Change in the Variability of Irrigation Water
NASA Astrophysics Data System (ADS)
Lyon, Kenneth S.
1983-10-01
This paper examines the short-run effects upon several variables of an increase in the variability of an input. The measure of an increase in the variability is the "mean preserving spread" suggested by Rothschild and Stiglitz (1970). The variables examined are real income (utility), expected profits, expected output, the quantity used of the controllable input, and the shadow price of the stochastic input. Four striking features of the results follow: (1) The concepts that have been useful in summarizing deterministic comparative static results are nearly absent when an input is stochastic. (2) Most of the signs of the partial derivatives depend upon more than concavity of the utility and production functions. (3) If the utility function is not "too" risk averse, then the risk-neutral results hold for the risk-aversion case. (4) If the production function is Cobb-Douglas, then definite results are achieved if the utility function is linear or if the "degree of risk-aversion" is "small."
DNA-magnetic bead detection using disposable cards and the anisotropic magnetoresistive sensor
NASA Astrophysics Data System (ADS)
Hien, L. T.; Quynh, L. K.; Huyen, V. T.; Tu, B. D.; Hien, N. T.; Phuong, D. M.; Nhung, P. H.; Giang, D. T. H.; Duc, N. H.
2016-12-01
A disposable card incorporating specific DNA probes targeting the 16S rRNA gene of Streptococcus suis was developed for the detection of magnetically labeled target DNA. A single-stranded target DNA was hybridized with the DNA probe on the as-prepared SPA/APTES/PDMS/Si card and subsequently labeled magnetically with superparamagnetic beads for detection using an anisotropic magnetoresistive (AMR) sensor. An almost linear response of the AMR sensor output signal was identified as the amount of single-stranded target DNA was varied from 4.5 to 18 pmol. From the response of the sensor output signal to the mass of magnetic beads immobilized directly on the disposable card surface, the limit of detection was estimated at about 312 ng of ferrite, corresponding to 3.8 μemu. Compared with DNA detection by a conventional biosensor based on magnetic bead labeling, the disposable cards offer higher efficiency and performance, ease of use, and lower running costs with respect to consumables, for biosensing in biomedical analysis systems operating with immobilized bioreceptors.
EEG feature selection method based on decision tree.
Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun
2015-01-01
This paper aims to solve the automated feature selection problem in brain-computer interfaces (BCI). In order to automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are non-linear, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
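One plausible reading of this pipeline in Python, with invented toy data standing in for EEG features (PCA extraction, tree-based importance ranking as the selection step, SVM classification):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Toy stand-in for epoch-wise EEG features: 200 trials x 64 channels
    X = rng.standard_normal((200, 64))
    y = (X[:, 3] + 0.5 * X[:, 17] + 0.3 * rng.standard_normal(200) > 0).astype(int)

    # Step 1: PCA feature extraction
    Z = PCA(n_components=20).fit_transform(X)

    # Step 2: a decision tree ranks the PCA components; keep the top few
    tree = DecisionTreeClassifier(random_state=0).fit(Z, y)
    top = np.argsort(tree.feature_importances_)[::-1][:5]

    # Step 3: SVM on the tree-selected components
    score = cross_val_score(SVC(), Z[:, top], y, cv=5).mean()
    print("selected components:", top, "CV accuracy: %.2f" % score)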
Composite dark energy: Cosmon models with running cosmological term and gravitational coupling
NASA Astrophysics Data System (ADS)
Grande, Javier; Solà, Joan; Štefančić, Hrvoje
2007-02-01
In the recent literature on dark energy (DE) model building we have learnt that cosmologies with variable cosmological parameters can mimic more traditional DE pictures exclusively based on scalar fields (e.g. quintessence and phantom). In a previous work we have illustrated this situation within the context of a renormalization group running cosmological term, Λ. Here we analyze the possibility that both the cosmological term and the gravitational coupling, G, are running parameters within a more general framework (a variant of the so-called “ΛXCDM models”) in which the DE fluid can be a mixture of a running Λ and another dynamical entity X (the “cosmon”) which may behave quintessence-like or phantom-like. We compute the effective EOS parameter, ω, of this composite fluid and show that the ΛXCDM can mimic to a large extent the standard ΛCDM model while retaining features hinting at its potential composite nature (such as the smooth crossing of the cosmological constant boundary ω=-1). We further argue that the ΛXCDM models can cure the cosmological coincidence problem. All in all we suggest that future experimental studies on precision cosmology should take seriously the possibility that the DE fluid can be a composite medium whose dynamical features are partially caused and renormalized by the quantum running of the cosmological parameters.
Fatigue associated with prolonged graded running.
Giandolini, Marlene; Vernillo, Gianluca; Samozino, Pierre; Horvais, Nicolas; Edwards, W Brent; Morin, Jean-Benoît; Millet, Guillaume Y
2016-10-01
Scientific experiments on running mainly consider level running. However, the magnitude and etiology of fatigue depend on the exercise under consideration, particularly the predominant type of contraction, which differs between level, uphill, and downhill running. The purpose of this review is to comprehensively summarize the neurophysiological and biomechanical changes due to fatigue in graded running. When comparing prolonged hilly running (i.e., a combination of uphill and downhill running) to level running, it is found that (1) the general shape of the neuromuscular fatigue-exercise duration curve as well as the etiology of fatigue in knee extensor and plantar flexor muscles are similar and (2) the biomechanical consequences are also relatively comparable, suggesting that duration rather than elevation changes affects neuromuscular function and running patterns. However, 'pure' uphill or downhill running has several fatigue-related intrinsic features compared with the level running. Downhill running induces severe lower limb tissue damage, indirectly evidenced by massive increases in plasma creatine kinase/myoglobin concentration or inflammatory markers. In addition, low-frequency fatigue (i.e., excitation-contraction coupling failure) is systematically observed after downhill running, although it has also been found in high-intensity uphill running for different reasons. Indeed, low-frequency fatigue in downhill running is attributed to mechanical stress at the interface sarcoplasmic reticulum/T-tubule, while the inorganic phosphate accumulation probably plays a central role in intense uphill running. Other fatigue-related specificities of graded running such as strategies to minimize the deleterious effects of downhill running on muscle function, the difference of energy cost versus heat storage or muscle activity changes in downhill, level, and uphill running are also discussed.
On the Convenience of Using the Complete Linearization Method in Modelling the BLR of AGN
NASA Astrophysics Data System (ADS)
Patriarchi, P.; Perinotto, M.
The Complete Linearization Method (Mihalas, 1978) consists in the determination of the radiation field (at a set of frequency points), atomic level populations, temperature, electron density etc., by resolving the system of radiative transfer, thermal equilibrium, statistical equilibrium equations simultaneously and self-consistently. Since the system is not linear, it must be solved by iteration after linearization, using a perturbative method, starting from an initial guess solution. Of course the Complete Linearization Method is more time consuming than the previous one. But how great can this disadvantage be in the age of supercomputers? It is possible to approximately evaluate the CPU time needed to run a model by computing the number of multiplications necessary to solve the system.
Spirou, Spiridon V; Papadimitroulas, Panagiotis; Liakou, Paraskevi; Georgoulias, Panagiotis; Loudos, George
2015-09-01
To present and evaluate a new methodology to investigate the effect of attenuation correction (AC) in single-photon emission computed tomography (SPECT) using textural features analysis, Monte Carlo techniques, and a computational anthropomorphic model. The GATE Monte Carlo toolkit was used to simulate SPECT experiments using the XCAT computational anthropomorphic model, filled with a realistic biodistribution of (99m)Tc-N-DBODC. The simulated gamma camera was the Siemens ECAM Dual-Head, equipped with a parallel hole lead collimator, with an image resolution of 3.54 × 3.54 mm(2). Thirty-six equispaced camera positions, spanning a full 360° arc, were simulated. Projections were calculated after applying a ± 20% energy window or after eliminating all scattered photons. The activity of the radioisotope was reconstructed using the MLEM algorithm. Photon attenuation was accounted for by calculating the radiological pathlength in a perpendicular line from the center of each voxel to the gamma camera. Twenty-two textural features were calculated on each slice, with and without AC, using 16 and 64 gray levels. A mask was used to identify only those pixels that belonged to each organ. Twelve of the 22 features showed almost no dependence on AC, irrespective of the organ involved. In both the heart and the liver, the mean and SD were the features most affected by AC. In the liver, six features were affected by AC only on some slices. Depending on the slice, skewness decreased by 22-34% with AC, kurtosis by 35-50%, long-run emphasis mean by 71-91%, and long-run emphasis range by 62-95%. In contrast, gray-level non-uniformity mean increased by 78-218% compared with the value without AC and run percentage mean by 51-159%. These results were not affected by the number of gray levels (16 vs. 64) or the data used for reconstruction: with the energy window or without scattered photons. The mean and SD were the main features affected by AC. In the heart, no other feature was affected. In the liver, other features were affected, but the effect was slice dependent. The number of gray levels did not affect the results.
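Several of the run-length features mentioned here follow standard textbook definitions. For instance, the long-run emphasis can be computed from a gray-level run-length matrix as in the Python sketch below (standard definitions with invented data, not the authors' implementation):

    import numpy as np

    def run_length_matrix(img, levels=16):
        """Gray-level run-length matrix along image rows: R[g, r-1]
        counts runs of length r at quantized gray level g."""
        q = np.floor(img / img.max() * (levels - 1)).astype(int)
        R = np.zeros((levels, img.shape[1]))
        for row in q:
            start = 0
            for j in range(1, len(row) + 1):
                if j == len(row) or row[j] != row[start]:
                    R[row[start], j - start - 1] += 1
                    start = j
        return R

    def long_run_emphasis(R):
        """LRE = sum over runs of r^2 * p(r), normalized by the run count."""
        r = np.arange(1, R.shape[1] + 1)
        return (R * r**2).sum() / R.sum()

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))            # stand-in for a CT slice ROI
    print("long-run emphasis:", long_run_emphasis(run_length_matrix(img)))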
Tay, Richard; Rifaat, Shakil Mohammad; Chin, Hoong Chor
2008-07-01
Leaving the scene of a crash without reporting it is an offence in most countries, and many studies have been devoted to improving ways to identify hit-and-run vehicles and the drivers involved. However, relatively few studies have been conducted on identifying factors that contribute to the decision to run after a crash. This study identifies the factors that are associated with the likelihood of hit-and-run crashes, including driver characteristics, vehicle types, crash characteristics, roadway features, and environmental characteristics. Using a logistic regression model to delineate hit-and-run crashes from non-hit-and-run crashes, this study found that drivers were more likely to run when crashes occurred at night, on bridges and flyovers, on bends, on straight roads, and near shop houses; involved two vehicles, two-wheel vehicles, and vehicles from neighboring countries; and when the driver was male, a minority, and aged between 45 and 69. On the other hand, collisions involving right-turn and U-turn maneuvers, and those occurring on undivided roads, were less likely to be hit-and-run crashes.
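A minimal sketch of the kind of model used here, with hypothetical column names and toy data rather than the study's dataset; the fitted coefficients read as log-odds of fleeing the scene.

```python
# Illustrative logistic regression delineating hit-and-run from non-hit-and-run
# crashes. Columns and values are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "night":        [1, 0, 1, 0, 1, 0, 1, 0],
    "two_vehicles": [1, 1, 0, 0, 1, 1, 0, 0],
    "male_driver":  [1, 1, 1, 0, 0, 1, 0, 0],
    "hit_and_run":  [1, 1, 0, 0, 1, 0, 1, 0],   # binary outcome
})
X, y = df.drop(columns="hit_and_run"), df["hit_and_run"]
model = LogisticRegression().fit(X, y)

# Positive coefficients raise the odds that the driver flees.
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```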
2003-03-01
...to be moved while the tunnel was running, reducing the need for tunnel shut-down and allowing for thermal equilibrium to be maintained during the high... rather quickly. However, for the high-speed runs, the tunnel heats up greatly, so data cannot be taken until the tunnel reaches thermal steady-state... January 1992. 10. Wilson, David G. and Korakianitis, Theodosius. The Design of High-Efficiency Turbomachinery and Gas Turbines, 317-322. Upper Saddle...
Linear feature detection algorithm for astronomical surveys - I. Algorithm description
NASA Astrophysics Data System (ADS)
Bektešević, Dino; Vinković, Dejan
2017-11-01
Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards the detection of stars and galaxies, completely ignoring the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance linear features to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces candidate line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
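The final Hough stage can be illustrated in a few lines with scikit-image; the synthetic binary image below stands in for the output of the paper's object-removal and line-enhancement steps, which are not reproduced here.

```python
# Minimal Hough-transform line detection on a cleaned binary image.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

cleaned = np.zeros((200, 200), dtype=bool)
rr = np.arange(200)
cleaned[rr, rr] = True                      # synthetic linear trail

h, theta, rho = hough_line(cleaned)
for _, angle, dist in zip(*hough_line_peaks(h, theta, rho)):
    print(f"line: angle = {np.degrees(angle):.1f} deg, distance = {dist:.1f} px")
```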
NASA Astrophysics Data System (ADS)
Aubert, J.; Fournier, A.
2011-10-01
Over the past decades, direct three-dimensional numerical modelling has been successfully used to reproduce the main features of the geodynamo. Here we report on efforts to solve the associated inverse problem, aiming at inferring the underlying properties of the system from the sole knowledge of surface observations and the first-principles dynamical equations describing the convective dynamo. To this end we rely on twin experiments. A reference model time sequence is first produced and used to generate synthetic data, restricted here to the large-scale component of the magnetic field and its rate of change at the outer boundary. Starting from a different initial condition, a second sequence is next run and attempts are made to recover the internal magnetic, velocity and buoyancy anomaly fields from the sparse surficial data. In order to reduce the vast underdetermination of this problem, we use stochastic inversion, a linear estimation method determining the most likely internal state compatible with the observations and some prior knowledge, and we also implement a sequential evolution algorithm in order to invert time-dependent surface observations. The prior is the multivariate statistics of the numerical model, which are directly computed from a large number of snapshots stored during a preliminary direct run. The statistics display strong correlation between different harmonic degrees of the surface observations and internal fields, provided they share the same harmonic order, a natural consequence of the linear coupling of the governing dynamical equations and of the leading influence of the Coriolis force. Synthetic experiments performed with a weakly nonlinear model yield an excellent quantitative retrieval of the internal structure. In contrast, the use of a strongly nonlinear (and more realistic) model results in less accurate static estimates, which in turn fail to constrain the unobserved small scales in the time integration of the evolution scheme. Evaluating the quality of forecasts of the system evolution against the reference solution, we show that our scheme can improve predictions based on linear extrapolations on forecast horizons shorter than the system e-folding time. Still, in the perspective of forthcoming data assimilation activities, our study underlines the need for advanced estimation techniques able to cope with the moderate to strong nonlinearities present in the geodynamo.
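The core linear estimation step can be sketched as follows, assuming Gaussian statistics so that the most likely state is the classic best-linear-unbiased estimate x̂ = PHᵀ(HPHᵀ + R)⁻¹y; the dimensions and the snapshot-based prior below are toy stand-ins for the geodynamo model's statistics.

```python
# Stochastic inversion sketch: estimate a hidden internal state from sparse
# surface observations y = H x + noise, using a snapshot-based prior covariance.
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 8                               # internal state size, observation size
snapshots = rng.normal(size=(500, n))      # stand-in for stored free-run snapshots
P = np.cov(snapshots.T)                    # multivariate prior statistics
H = np.zeros((m, n))
H[:, :m] = np.eye(m)                       # observe only the large-scale part
R = 0.01 * np.eye(m)                       # observation-error covariance

x_true = snapshots[0]
y = H @ x_true + rng.multivariate_normal(np.zeros(m), R)
K = P @ H.T @ np.linalg.solve(H @ P @ H.T + R, np.eye(m))   # gain matrix
x_hat = K @ y                              # most likely internal state
print("relative retrieval error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```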
Wohlwend, Martin; Olsen, Alexander; Håberg, Asta K.; Palmer, Helen S.
2017-01-01
The idea that physical activity differentially impacts performance on various cognitive tasks has recently gained increased interest. However, our current knowledge about how cognition is altered by acute physical activity is incomplete. To measure how different intensity levels of physical activity affect cognition during and after one bout of physical activity, 30 healthy, young participants were randomized to perform a not-X continuous performance test (CPT) during low-intensity (LI) and moderate-intensity (MI) running. The same participants were subsequently randomized to perform the not-X CPT after LI, MI, and high-intensity (HI) running. In addition, exercise-related mood changes were assessed through a self-report measure before and after running at LI, MI, and HI. Results showed worsening of performance accuracy on the not-X CPT during a single bout of moderate- compared with low-intensity running. After running, there was a linear decrease in reaction time with increasing running intensity and no change in accuracy or mood. The decreased reaction times after HI running recovered back to baseline within 20 min. We conclude that accuracy is acutely deteriorated during the most strenuous physical activity, while a transient intensity-dependent enhancement of cognitive control function is present following physical activity. PMID:28377735
Potential Relationship between Passive Plantar Flexor Stiffness and Running Performance.
Ueno, Hiromasa; Suga, Tadashi; Takao, Kenji; Tanaka, Takahiro; Misaki, Jun; Miyake, Yuto; Nagano, Akinori; Isaka, Tadao
2018-02-01
The present study aimed to determine the relationship between passive stiffness of the plantar flexors and running performance in endurance runners. Forty-eight well-trained male endurance runners and 24 untrained male control subjects participated in this study. Plantar flexor stiffness during passive dorsiflexion was calculated from the slope of the linear portion of the torque-angle curve. Of the endurance runners included in the present study, running economy in 28 endurance runners was evaluated by measuring energy cost during three 4-min trials (14, 16, and 18 km/h) of submaximal treadmill running. Passive stiffness of the plantar flexors was significantly higher in endurance runners than in untrained subjects. Moreover, passive plantar flexor stiffness in endurance runners was significantly correlated with a personal best 5000-m race time. Furthermore, passive plantar flexor stiffness in endurance runners was significantly correlated with energy cost during submaximal running at 16 km/h and 18 km/h, and a trend towards such significance was observed at 14 km/h. The present findings suggest that stiffer plantar flexors may help achieve better running performance, with greater running economy, in endurance runners. Therefore, in the clinical setting, passive stiffness of the plantar flexors may be a potential parameter for assessing running performance. © Georg Thieme Verlag KG Stuttgart · New York.
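A minimal sketch of the stiffness computation described above; the torque-angle data and the window taken as the "linear portion" of the curve are synthetic assumptions, not the study's measurements.

```python
# Passive stiffness as the slope of the linear portion of the torque-angle curve.
import numpy as np

angle = np.linspace(0, 30, 61)              # passive dorsiflexion angle (deg)
torque = 0.02 * angle**2 + 0.1 * np.random.default_rng(2).normal(size=61)

linear = angle >= 20                        # assumed linear end-range portion
stiffness, intercept = np.polyfit(angle[linear], torque[linear], 1)
print(f"passive stiffness ~ {stiffness:.2f} N*m/deg")
```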
77 FR 73557 - Airworthiness Directives; Turbomeca S.A. Turboshaft Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-11
... turboshaft engines. This proposed AD was prompted by a finding that the engine's tachometer unit cycle... tachometer's unit cycle counting feature. This proposed AD would also require ground-run functional checks... accuracy of the engine's tachometer cycle counting feature. We are proposing this AD to prevent uncontained...
NASA Technical Reports Server (NTRS)
Quek, Kok How Francis
1990-01-01
A method of computing reliable Gaussian and mean curvature sign-map descriptors from the polynomial approximation of surfaces was demonstrated. Such descriptors which are invariant under perspective variation are suitable for hypothesis generation. A means for determining the pose of constructed geometric forms whose algebraic surface descriptors are nonlinear in terms of their orienting parameters was developed. This was done by means of linear functions which are capable of approximating nonlinear forms and determining their parameters. It was shown that biquadratic surfaces are suitable companion linear forms for cylindrical approximation and parameter estimation. The estimates provided the initial parametric approximations necessary for a nonlinear regression stage to fine tune the estimates by fitting the actual nonlinear form to the data. A hypothesis-based split-merge algorithm for extraction and pose determination of cylinders and planes which merge smoothly into other surfaces was developed. It was shown that all split-merge algorithms are hypothesis-based. A finite-state algorithm for the extraction of the boundaries of run-length regions was developed. The computation takes advantage of the run list topology and boundary direction constraints implicit in the run-length encoding.
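The curvature sign-map idea can be illustrated with a local quadratic fit; the sketch below uses the standard Monge-patch curvature formulas and synthetic patch data, not the paper's full pipeline.

```python
# Gaussian (K) and mean (H) curvature signs from a fitted quadratic patch
# z = a x^2 + b x y + c y^2 + d x + e y + f, evaluated at the patch center.
import numpy as np

def curvature_signs(x, y, z):
    A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    fx, fy, fxx, fxy, fyy = d, e, 2 * a, b, 2 * c   # derivatives at x = y = 0
    w = 1 + fx**2 + fy**2
    K = (fxx * fyy - fxy**2) / w**2                 # Gaussian curvature
    H = ((1 + fy**2) * fxx - 2 * fx * fy * fxy
         + (1 + fx**2) * fyy) / (2 * w**1.5)        # mean curvature
    return np.sign(K), np.sign(H)

xs, ys = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
zs = xs**2 + ys**2                  # elliptic (cup-shaped) patch
print(curvature_signs(xs.ravel(), ys.ravel(), zs.ravel()))   # (1.0, 1.0)
```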
Miguel-Hurtado, Oscar; Guest, Richard; Stevenage, Sarah V; Neil, Greg J; Black, Sue
2016-01-01
Understanding the relationship between physiological measurements from human subjects and their demographic data is important within both the biometric and forensic domains. In this paper we explore the relationship between measurements of the human hand and a range of demographic features. We assess the ability of linear regression and machine learning classifiers to predict demographics from hand features, thereby providing evidence on both the strength of relationship and the key features underpinning this relationship. Our results show that we are able to predict sex, height, weight and foot size accurately within various data-range bin sizes, with machine learning classification algorithms out-performing linear regression in most situations. In addition, we identify the features used to provide these relationships applicable across multiple applications.
Inflationary features and shifts in cosmological parameters from Planck 2015 data
NASA Astrophysics Data System (ADS)
Obied, Georges; Dvorkin, Cora; Heinrich, Chen; Hu, Wayne; Miranda, Vinicius
2017-10-01
We explore the relationship between features in the Planck 2015 temperature and polarization data, shifts in the cosmological parameters, and features from inflation. Residuals in the temperature data from the best-fit power-law ΛCDM model at low multipole ℓ ≲ 40 are mainly responsible for the high H₀ and low σ₈Ωₘ^(1/2) values when comparing the ℓ < 1000 portion to the full data set. These same residuals are better fit by inflationary features, with a 1.9σ preference for running of the running of the tilt or a stronger, 99% C.L. local-significance preference for a sharp drop in power around k = 0.004 Mpc⁻¹, relieving the internal tension with H₀. At ℓ > 1000, the same in-phase acoustic residuals that drive the global H₀ constraints and appear as a lensing anomaly also favor running parameters which allow even lower H₀, but not once lensing reconstruction is considered. Polarization spectra are intrinsically highly sensitive to these parameter shifts, and even more so in the Planck 2015 TE data due to an anomalous suppression in power at ℓ ≈ 165, which disfavors the best-fit ΛCDM H₀ solution by more than 2σ, and the high H₀ value at almost 3σ. Current polarization data also slightly enhance the significance of a sharp suppression of large-scale power but leave room for large improvements in the future with cosmic-variance-limited E-mode measurements.
Saravanan, Vijayakumar; Gautham, Namasivayam
2015-10-01
Proteins embody epitopes that serve as their antigenic determinants. Epitopes occupy a central place in integrative biology, not least as targets for novel vaccine, pharmaceutical, and systems diagnostics development. The presence of T-cell and B-cell epitopes has been extensively studied due to their potential in synthetic vaccine design. However, reliable prediction of linear B-cell epitopes remains a formidable challenge. Earlier studies have reported a discrepancy in amino acid composition between epitopes and non-epitopes. Hence, this study proposed and developed a novel amino acid composition-based feature descriptor, Dipeptide Deviation from Expected Mean (DDE), to distinguish linear B-cell epitopes from non-epitopes effectively. In this study, for the first time, only exact linear B-cell epitopes and non-epitopes were utilized for developing the prediction method, unlike the epitope-containing regions used in earlier reports. To evaluate the performance of the DDE feature vector, models were developed with two widely used machine-learning techniques, Support Vector Machine and AdaBoost-Random Forest. Five-fold cross-validation performance of the proposed method with the error-free dataset and datasets from other studies achieved an overall accuracy between roughly 61% and 73%, with a balance between the sensitivity and specificity metrics. The performance of the DDE feature vector was better (by an accuracy difference of about 2% to 12%) than that of other amino acid-derived features on different datasets. This study reflects the efficiency of the DDE feature vector in enhancing linear B-cell epitope prediction performance compared to other feature representations. The proposed method is available as a free stand-alone tool for researchers, particularly those interested in vaccine design and novel molecular target development for systems therapeutics and diagnostics: https://github.com/brsaran/LBEEP.
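A sketch of a DDE-style descriptor is given below. The theoretical mean is taken from codon multiplicities, (Cₐ/61)(C_b/61), and the variance as Tm(1 − Tm)/(L − 1), which follows common descriptions of DDE; readers should verify the details against the authors' LBEEP tool before reuse.

```python
# DDE-style dipeptide descriptor (sketch; verify against the LBEEP tool).
from itertools import product
from math import sqrt

CODONS = {  # sense-codon counts per amino acid (standard genetic code; 61 total)
    'A': 4, 'C': 2, 'D': 2, 'E': 2, 'F': 2, 'G': 4, 'H': 2, 'I': 3, 'K': 2,
    'L': 6, 'M': 1, 'N': 2, 'P': 4, 'Q': 2, 'R': 6, 'S': 6, 'T': 4, 'V': 4,
    'W': 1, 'Y': 2}

def dde(seq):
    n = len(seq) - 1                                 # number of dipeptides
    counts = {}
    for i in range(n):
        dp = seq[i:i + 2]
        counts[dp] = counts.get(dp, 0) + 1
    vec = []
    for a, b in product(sorted(CODONS), repeat=2):   # 400-dimensional vector
        dc = counts.get(a + b, 0) / n                # observed composition
        tm = (CODONS[a] / 61) * (CODONS[b] / 61)     # theoretical mean (assumed form)
        tv = tm * (1 - tm) / n                       # theoretical variance (assumed form)
        vec.append((dc - tm) / sqrt(tv))
    return vec

print(len(dde("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")))   # 400
```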
Gellis, Allen C.; Myers, Michael; Noe, Gregory; Hupp, Cliff R.; Shenk, Edward; Myers, Luke
2017-01-01
Determining erosion and deposition rates in urban-suburban settings and how these processes are affected by large storms is important to understanding geomorphic processes in these landscapes. Sediment yields in the suburban and urban Upper Difficult Run are among the highest ever recorded in the Chesapeake Bay watershed, ranging from 161 to 376 Mg/km²/y. Erosion and deposition of streambanks, channel bed, and bars and deposition of floodplains were monitored between 1 March 2010 and 18 January 2013 in Upper Difficult Run, Virginia, USA. We documented the effects of two large storms, Tropical Storm Lee (September 2011), a 100-year event, and Super Storm Sandy (October 2012), a 5-year event, on channel erosion and deposition. The temporal and spatial variability in erosion and deposition rates for all geomorphic features is an important conclusion of this study. Tropical Storm Lee was an erosive event: erosion occurred on 82% of all streambanks, and 88% of streambanks that were aggrading before Tropical Storm Lee became erosional. Statistical analysis indicated that drainage area explains linear changes (cm/y) in eroding streambanks and that channel top width explains cross-sectional area changes (cm²/y) in eroding streambanks and floodplain deposition (mm/y). A quasi-sediment budget constructed for the study period using the streambank, channel bed, channel bar, and floodplain measurements underestimated the measured suspended-sediment load by 61% (2130 Mg/y). Underestimation of the sediment load may be caused by measurement errors and by contributions from upland sediment sources, which were not measured but were estimated at 36% of the gross input of sediment. Eroding streambanks contributed 42% of the gross input of sediment and accounted for 70% of the measured suspended-sediment load. Similar to other urban watersheds, the large percentage of impervious area in Difficult Run and direct runoff of precipitation lead to increased streamflow and streambank erosion. This study emphasizes the importance of streambanks in urban-suburban sediment budgets but also suggests that other sediment sources, such as upland sources not measured in this study, can be an important source of sediment.
Radar returns from ground clutter in vicinity of airports
NASA Technical Reports Server (NTRS)
Raemer, H. R.; Rahgavan, R.; Bhattacharya, A.
1988-01-01
The objective of this project is to develop a dynamic simulation of the received signals from natural and man-made ground features in the vicinity of airports. The simulation is run during landing and takeoff stages of a flight. Vugraphs of noteworthy features of the simulation, ground clutter data bases, the development of algorithms for terrain features, typical wave theory results, and a gravity wave height profile are given.
Guidi, Andrea; Salvi, Sergio; Ottaviano, Manuel; Gentili, Claudio; Bertschy, Gilles; de Rossi, Danilo; Scilingo, Enzo Pasquale; Vanello, Nicola
2015-11-06
Bipolar disorder is one of the most common mood disorders, characterized by large and invalidating mood swings. Several projects focus on the development of decision support systems that monitor and advise patients, as well as clinicians. Voice monitoring and speech signal analysis can be exploited to reach this goal. In this study, an Android application was designed for analyzing running speech using a smartphone device. The application can record audio samples and estimate the speech fundamental frequency, F0, and its changes. F0-related features are estimated locally on the smartphone, with some advantages with respect to remote processing approaches in terms of privacy protection and reduced upload costs. The raw features can be sent to a central server and further processed. The quality of the audio recordings, the reliability of the algorithm, and the performance of the overall system were evaluated in terms of voiced segment detection and feature estimation. The results demonstrate that mean F0 from each voiced segment can be reliably estimated, thus describing prosodic features across the speech sample. In contrast, features related to F0 variability within each voiced segment performed poorly. A case study performed on a bipolar patient is presented.
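A desktop approximation of the F0 step can be put together with librosa's pYIN tracker, as below; the on-device Android implementation in the paper necessarily differs, and the example audio is a placeholder.

```python
# Desktop prototype of per-voiced-segment F0 estimation using pYIN.
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))           # placeholder audio clip
f0, voiced, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)

# Mean F0 over voiced frames approximates the prosodic feature the study
# found reliable; within-segment variability features were less robust.
print(f"mean F0 over voiced frames: {np.nanmean(f0[voiced]):.1f} Hz")
```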
Adams, E J; Warrington, A P
2008-04-01
The simplicity of cobalt units gives them the advantage of reduced maintenance, running costs and downtime when compared with linear accelerators. However, treatments carried out on such units are typically limited to simple techniques. This study has explored the use of cobalt beams for conformal and intensity-modulated radiotherapy (IMRT). Six patients, covering a range of treatment sites, were planned using both X-ray photons (6/10 MV) and cobalt-60 gamma rays (1.17 and 1.33 MeV). A range of conformal and IMRT techniques were considered, as appropriate. Conformal plans created using cobalt beams for small breast, meningioma and parotid cases were found to compare well with those created using X-ray photons. By using additional fields, acceptable conformal plans were also created for oesophagus and prostate cases. IMRT plans were found to be of comparable quality for meningioma, parotid and thyroid cases on the basis of dose-volume histogram analysis. We conclude that it is possible to plan high-quality radical radiotherapy treatments for cobalt units. A well-designed beam blocking/compensation system would be required to enable a practical and efficient alternative to multileaf collimator (MLC)-based linac treatments to be offered. If cobalt units were to have such features incorporated into them, they could offer considerable benefits to the radiotherapy community.
Cai, Shuxian; Chen, Mei; Liu, Mengmeng; He, Wenhui; Liu, Zhijing; Wu, Dongzhi; Xia, Yaokun; Yang, Huanghao; Chen, Jinghua
2016-11-15
Herein, a signal magnification electrochemical aptasensor for the detection of breast cancer cells via a free-running DNA walker is constructed. Theoretically, just one DNA walker, released by a target cell-responsive reaction, can automatically cleave all the D-RNA (a chimeric DNA/RNA oligonucleotide with a cleavage point rArU) anchored on the electrode into shorter products, ultimately giving rise to a readily detectable signal. Under the optimal conditions, the electrochemical signal decreased linearly with the concentration of MCF-7 cells. The linear range is from 0 to 500 cells mL⁻¹ with a detection limit of 47 cells mL⁻¹. In short, this approach may have advantages over traditionally reported DNA machines for bioassay, particularly in terms of ease of operation, cost efficiency, and freedom from labeling and from complex track design, and may hold great potential for wide application. Copyright © 2016 Elsevier B.V. All rights reserved.
Database usage and performance for the Fermilab Run II experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonham, D.; Box, D.; Gallas, E.
2004-12-01
The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivering data to users and processing farms worldwide has represented major challenges to both experiments. The range of applications employing databases includes calibration (conditions), trigger information, run configuration, run quality, luminosity, data management, and others. Oracle is the primary database product being used for these applications at Fermilab and some of its advanced features have been employed, such as table partitioning and replication. There is also experience with open source database products such as MySQL for secondary databases used, for example, in monitoring. Tools employed for monitoring the operation and diagnosing problems are also described.
Burckhardt, Bjoern B; Ramusovic, Sergej; Tins, Jutta; Laeer, Stephanie
2013-04-01
The orally active direct renin inhibitor aliskiren is approved for the treatment of essential hypertension in adults. Analytical methods utilized in clinical studies on efficacy and safety have not been fully described in the literature but need a large sample volume, ranging from 200 to 700 μL, rendering them unsuitable particularly for pediatric applications. In the assay presented, only 100 μL of serum is needed for mixed-mode solid-phase extraction. The chromatographic separation was performed on XSelect™ C18 CSH columns with a mobile phase consisting of methanol-water-formic acid (75:25:0.005, v/v/v) and a flow rate of 0.4 mL/min. Running in positive electrospray ionization and multiple reaction monitoring mode, the mass spectrometer was set to analyze the precursor ion 552.2 m/z [M + H]⁺ to the product ion 436.2 m/z during a total run time of 5 min. The method covers a linear calibration range of 0.146-1200 ng/mL. Intra-run and inter-run precisions were 0.4-7.2% and 0.6-12.9%, respectively. Mean recovery was at least 89%. Selectivity, accuracy and stability results comply with current European Medicines Agency and Food and Drug Administration guidelines. This successfully validated LC-MS/MS method, with a wide linear calibration range and requiring small serum amounts, is suitable for pharmacokinetic investigations of aliskiren in pediatrics, adults and the elderly. Copyright © 2012 John Wiley & Sons, Ltd.
Linear signatures in nonlinear gyrokinetics: interpreting turbulence with pseudospectra
Hatch, D. R.; Jenko, F.; Navarro, A. Banon; ...
2016-07-26
A notable feature of plasma turbulence is its propensity to retain features of the underlying linear eigenmodes in a strongly turbulent state, a property that can be exploited to predict various aspects of the turbulence using only linear information. In this context, this work examines gradient-driven gyrokinetic plasma turbulence through three lenses: linear eigenvalue spectra, pseudospectra, and singular value decomposition (SVD). We study a reduced gyrokinetic model whose linear eigenvalue spectra include ion temperature gradient driven modes, stable drift waves, and kinetic modes representing Landau damping. The goal is to characterize in which ways, if any, these familiar ingredients are manifest in the nonlinear turbulent state. This pursuit is aided by the use of pseudospectra, which provide a more nuanced view of the linear operator by characterizing its response to perturbations. We introduce a new technique whereby the nonlinearly evolved phase space structures extracted with SVD are linked to the linear operator using concepts motivated by pseudospectra. Using this technique, we identify nonlinear structures that have connections to not only the most unstable eigenmode but also subdominant modes that are nonlinearly excited. The general picture that emerges is a system in which signatures of the linear physics persist in the turbulence, albeit in ways that cannot be fully explained by the linear eigenvalue approach; a non-modal treatment is necessary to understand key features of the turbulence.
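The pseudospectral lens can be illustrated concretely: the ε-pseudospectrum of an operator A is the region of the complex plane where σ_min(zI − A) ≤ ε, i.e., where small perturbations can move an eigenvalue. The sketch below evaluates this on a grid for a random non-normal matrix standing in for the gyrokinetic operator.

```python
# Epsilon-pseudospectrum sketch: contour sigma_min(zI - A) over a complex grid.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 30)) / np.sqrt(30) + np.diag(-0.1 * np.arange(30))

def sigma_min(z, A):
    """Smallest singular value of zI - A; small values flag sensitivity."""
    return np.linalg.svd(z * np.eye(A.shape[0]) - A, compute_uv=False)[-1]

xs = np.linspace(-2, 2, 41)
ys = np.linspace(-2, 2, 41)
grid = np.array([[sigma_min(x + 1j * yy, A) for x in xs] for yy in ys])
# A contour of `grid` at level eps traces the eps-pseudospectrum boundary.
print("min sigma_min on grid:", grid.min())
```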
Rocker shoe, minimalist shoe, and standard running shoe: a comparison of running economy.
Sobhani, Sobhan; Bredeweg, Steef; Dekker, Rienk; Kluitenberg, Bas; van den Heuvel, Edwin; Hijmans, Juha; Postema, Klaas
2014-05-01
Running with rocker shoes is believed to prevent lower limb injuries. However, it is not clear how running in these shoes affects the energy expenditure. The purpose of this study was, therefore, to assess the effects of rocker shoes on running economy in comparison with standard and minimalist running shoes. Cross-over design. Eighteen endurance female runners (age=23.6 ± 3 years), who were inexperienced in running with rocker shoes and with minimalist/barefoot running, participated in this study. Oxygen consumption, carbon dioxide production, heart rate and rate of perceived exertion were measured while participants completed a 6-min sub-maximal treadmill running test for each footwear condition. The data of the last 2 min of each shoe condition were averaged for analysis. A linear mixed model was used to compare differences among three footwear conditions. Oxygen consumption during running with rocker shoes was on average 4.5% higher than with the standard shoes (p<0.001) and 5.6% higher than with the minimalist shoe (p<0.001). No significant differences were found in heart rate and rate of perceived exertion across three shoe conditions. Female runners, who are not experienced in running with the rocker shoes and minimalist shoes, show more energy expenditure during running with the rocker shoes compared with the standard and minimalist shoes. As the studied shoes were of different masses, part of the effect of increased energy expenditure with the rocker shoe is likely to be due to its larger mass as compared with standard running shoes and minimalist shoes. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Multimodal Image Alignment via Linear Mapping between Feature Modalities.
Jiang, Yanyun; Zheng, Yuanjie; Hou, Sujuan; Chang, Yuchou; Gee, James
2017-01-01
We propose a novel landmark matching based method for aligning multimodal images, which is accomplished uniquely by resolving a linear mapping between different feature modalities. This linear mapping results in a new measurement on similarity of images captured from different modalities. In addition, our method simultaneously solves this linear mapping and the landmark correspondences by minimizing a convex quadratic function. Our method can estimate complex image relationship between different modalities and nonlinear nonrigid spatial transformations even in the presence of heavy noise, as shown in our experiments carried out by using a variety of image modalities.
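The core of the idea, learning a linear map W between feature modalities from corresponding landmarks, reduces to least squares; the sketch below shows only this step, with synthetic features, and omits the paper's joint convex optimization over correspondences.

```python
# Least-squares linear mapping between two feature modalities.
import numpy as np

rng = np.random.default_rng(4)
F1 = rng.normal(size=(100, 8))                         # landmark features, modality 1
W_true = rng.normal(size=(8, 5))
F2 = F1 @ W_true + 0.01 * rng.normal(size=(100, 5))    # corresponding modality-2 features

W, *_ = np.linalg.lstsq(F1, F2, rcond=None)            # minimize ||F1 W - F2||^2
residual = np.linalg.norm(F1 @ W - F2) / np.linalg.norm(F2)
print(f"relative mapping residual: {residual:.3f}")    # small -> modalities well related
```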
NASA Astrophysics Data System (ADS)
Masoud, Alaa; Koike, Katsuaki
2017-09-01
Detection and analysis of linear features related to surface and subsurface structures have been deemed necessary in natural resource exploration and earth surface instability assessment. Subjectivity in choosing control parameters required in conventional methods of lineament detection may cause unreliable results. To reduce this ambiguity, we developed LINDA (LINeament Detection and Analysis), an integrated tool with graphical user interface in Visual Basic. This tool automates processes of detection and analysis of linear features from grid data of topography (digital elevation model; DEM), gravity and magnetic surfaces, as well as data from remote sensing imagery. A simple interface with five display windows forms a user-friendly interactive environment. The interface facilitates grid data shading, detection and grouping of segments, lineament analyses for calculating strike and dip and estimating fault type, and interactive viewing of lineament geometry. Density maps of the center and intersection points of linear features (segments and lineaments) are also included. A systematic analysis of test DEMs and Landsat 7 ETM+ imagery datasets in the North and South Eastern Deserts of Egypt is implemented to demonstrate the capability of LINDA and correct use of its functions. Linear features from the DEM are superior to those from the imagery in terms of frequency, but both linear features agree with location and direction of V-shaped valleys and dykes and reference fault data. Through the case studies, LINDA applicability is demonstrated to highlight dominant structural trends, which can aid understanding of geodynamic frameworks in any region.
Retention time alignment of LC/MS data by a divide-and-conquer algorithm.
Zhang, Zhongqi
2012-04-01
Liquid chromatography-mass spectrometry (LC/MS) has become the method of choice for characterizing complex mixtures. These analyses often involve quantitative comparison of components in multiple samples. To achieve automated sample comparison, the components of interest must be detected and identified, their retention times aligned, and their peak areas calculated. This article describes a simple pairwise iterative retention time alignment algorithm, based on the divide-and-conquer approach, for alignment of ion features detected in LC/MS experiments. In this iterative algorithm, ion features in the sample run are first aligned with features in the reference run by applying a single constant shift of retention time. The sample chromatogram is then divided into two shorter chromatograms, which are aligned to the reference chromatogram in the same way. Each shorter chromatogram is further divided into even shorter chromatograms. This process continues until each chromatogram is sufficiently narrow that ion features within it have a similar retention time shift. In six pairwise LC/MS alignment examples containing a total of 6507 confirmed true corresponding feature pairs with retention time shifts up to five peak widths, the algorithm successfully aligned these features with an error rate of 0.2%. The alignment algorithm is demonstrated to be fast, robust, fully automatic, and superior to other algorithms. After alignment and gap-filling of detected ion features, their abundances can be tabulated for direct comparison between samples.
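A stripped-down version of the divide-and-conquer recursion might look as follows; the per-segment shift here is estimated from medians of synthetic feature retention times, whereas the article's feature matching and robustness machinery is omitted.

```python
# Divide-and-conquer retention-time alignment sketch (toy shift estimator).
import numpy as np

def align(sample_rt, reference_rt, lo, hi, min_span=2.0):
    """Recursively align sample feature RTs (modified in place) within [lo, hi)."""
    idx = (sample_rt >= lo) & (sample_rt < hi)
    ref = (reference_rt >= lo) & (reference_rt < hi)
    if idx.sum() == 0 or ref.sum() == 0:
        return
    shift = np.median(reference_rt[ref]) - np.median(sample_rt[idx])
    sample_rt[idx] += shift                    # one constant shift per segment
    if hi - lo > min_span:                     # split and recurse on halves
        mid = (lo + hi) / 2.0
        align(sample_rt, reference_rt, lo, mid, min_span)
        align(sample_rt, reference_rt, mid, hi, min_span)

rng = np.random.default_rng(5)
ref = np.sort(rng.uniform(0, 60, 200))
samp = ref + 0.3 + 0.02 * ref                  # drifting retention-time shift
align(samp, ref, 0.0, 60.0)
print("mean residual RT error:", np.abs(samp - ref).mean())
```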
ORNL Lightweighting Research Featured on MotorWeek
None
2018-06-06
PBS MotorWeek, television's longest running automotive series, featured ORNL lightweighting research for vehicle applications in an episode that aired in early April 2014. The crew captured footage of research including development of new metal alloys, additive manufacturing, carbon fiber production, advanced batteries, power electronics components, and neutron imaging applications for materials evaluation.
Utility of texture analysis for quantifying hepatic fibrosis on proton density MRI.
Yu, HeiShun; Buch, Karen; Li, Baojun; O'Brien, Michael; Soto, Jorge; Jara, Hernan; Anderson, Stephan W
2015-11-01
To evaluate the potential utility of texture analysis of proton density maps for quantifying hepatic fibrosis in a murine model of hepatic fibrosis. Following Institutional Animal Care and Use Committee (IACUC) approval, a dietary model of hepatic fibrosis was used and 15 ex vivo murine liver tissues were examined. All images were acquired using a 30 mm bore 11.7T magnetic resonance imaging (MRI) scanner with a multiecho spin-echo sequence. A texture analysis was employed extracting multiple texture features including histogram-based, gray-level co-occurrence matrix-based (GLCM), gray-level run-length-based (GLRL), gray-level gradient matrix (GLGM), and Laws' features. Texture features were correlated with histopathologic and digital image analysis of hepatic fibrosis. Histogram features demonstrated very weak to moderate correlations (r = -0.29 to 0.51) with hepatic fibrosis. The GLCM features correlation and contrast demonstrated moderate-to-strong correlations (r = -0.71 and 0.59, respectively) with hepatic fibrosis. Moderate correlations were seen between hepatic fibrosis and the GLRL feature short run low gray-level emphasis (SRLGE) (r = -0.51). GLGM features demonstrated very weak to weak correlations with hepatic fibrosis (r = -0.27 to 0.09). Moderate correlations were seen between hepatic fibrosis and Laws' features L6 and L7 (r = 0.58). This study demonstrates the utility of texture analysis applied to proton density MRI in a murine liver fibrosis model and validates the potential utility of texture-based features for the noninvasive, quantitative assessment of hepatic fibrosis. © 2015 Wiley Periodicals, Inc.
Analysis of separation test for automatic brake adjuster based on linear Radon transformation
NASA Astrophysics Data System (ADS)
Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi
2015-01-01
The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has strong anti-noise and anti-interference capability because it fits the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature-point extraction error of the gradient-maximum optimal method is approximately ±0.100, while that of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.
Barlough, J E; Jacobson, R H; Downing, D R; Lynch, T J; Scott, F W
1987-01-01
The computer-assisted, kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats was calibrated to the conventional indirect immunofluorescence assay by linear regression analysis and computerized interpolation (generation of "immunofluorescence assay-equivalent" titers). Procedures were developed for normalization and standardization of kinetics-based enzyme-linked immunosorbent assay results through incorporation of five different control sera of predetermined ("expected") titer in daily runs. When used with such sera and with computer assistance, the kinetics-based enzyme-linked immunosorbent assay minimized both within-run and between-run variability while allowing also for efficient data reduction and statistical analysis and reporting of results. PMID:3032390
How Biomechanical Improvements in Running Economy Could Break the 2-hour Marathon Barrier.
Hoogkamer, Wouter; Kram, Rodger; Arellano, Christopher J
2017-09-01
A sub-2-hour marathon requires an average velocity (5.86 m/s) that is 2.5% faster than the current world record of 02:02:57 (5.72 m/s) and could be accomplished with a 2.7% reduction in the metabolic cost of running. Although supporting body weight comprises the majority of the metabolic cost of running, targeting the costs of forward propulsion and leg swing are the most promising strategies for reducing the metabolic cost of running and thus improving marathon running performance. Here, we calculate how much time could be saved by taking advantage of unconventional drafting strategies, a consistent tailwind, a downhill course, and specific running shoe design features while staying within the current International Association of Athletic Federations regulations for record purposes. Specifically, running in shoes that are 100 g lighter along with second-half scenarios of four runners alternately leading and drafting, or a tailwind of 6.0 m/s, combined with a 42-m elevation drop could result in a time well below the 2-hour marathon barrier.
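The headline pace arithmetic is easy to verify (the metabolic-cost argument assumes, as the authors do, that a given percentage cost reduction yields a comparable percentage speed gain):

```python
# Worked check of the pace numbers quoted above.
MARATHON_M = 42195.0

record_s = 2 * 3600 + 2 * 60 + 57        # 02:02:57 world record
target_s = 2 * 3600 - 1                  # 01:59:59, just under two hours

v_record = MARATHON_M / record_s         # ~5.72 m/s
v_target = MARATHON_M / target_s         # ~5.86 m/s
print(f"record pace {v_record:.2f} m/s, target pace {v_target:.2f} m/s "
      f"(+{100 * (v_target / v_record - 1):.1f}%)")   # ~ +2.5%
```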
NASA Astrophysics Data System (ADS)
Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.
2016-03-01
Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods for the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists), it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first is a univariate non-parametric method using a box-whisker plot. We first categorized the automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either successes or failures. We then designed three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method, supervised classification, was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers, linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC), were trained using the designed features and evaluated using leave-one-out cross-validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
Factorization-based texture segmentation
Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.
2015-06-17
This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices: one consisting of the representative features and the other containing the weights of the representative features at each pixel used for linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. Finally, the experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.
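A compact rendition of the factorization step using scikit-learn's NMF as a stand-in (the paper's own scheme combines SVD and nonnegative factorization); the features, dimensions, and argmax labeling rule below are illustrative.

```python
# Factorization-based segmentation sketch: factor an M x N feature matrix into
# representative features times per-pixel weights, then label by largest weight.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
reps = rng.random((12, 3))                         # 3 representative features, M = 12
labels_true = np.repeat([0, 1, 2], 500)            # N = 1500 pixels in 3 regions
Y = reps[:, labels_true] + 0.05 * rng.random((12, 1500))   # M x N feature matrix

model = NMF(n_components=3, init="nndsvd", max_iter=500)
Z1 = model.fit_transform(Y)                        # M x 3 representative features
Z2 = model.components_                             # 3 x N combination weights
labels = Z2.argmax(axis=0)                         # per-pixel region assignment
print("segments found:", np.unique(labels).size)
```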
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuo, H; Tome, W; FOX, J
2014-06-15
Purpose: To study the feasibility of applying a cancer risk model established from treated patients to predict the risk of recurrence on follow-up mammography after radiation therapy, for both the ipsilateral and contralateral breast. Methods: An extensive set of textural feature functions was applied to a set of 196 mammograms from 50 patients. 56 mammograms from 28 patients were used as the training set, 44 mammograms from 22 patients were used as the test set, and the rest were used for prediction. Feature functions include Histogram, Gradient, Co-Occurrence Matrix, Run-Length Matrix and Wavelet Energy. An optimum subset of the feature functions was selected by Fisher Coefficient (FO) or Mutual Information (MI) (up to the top 10 features) or a method combining FO, MI and Principal Component (FMP) (up to the top 30 features). One-Nearest Neighbor (1-NN), Linear Discriminant Analysis (LDA) and Nonlinear Discriminant Analysis (NDA) were utilized to build a risk model of breast cancer from the training set of mammograms at the time of diagnosis. The risk model was then used to predict the risk of recurrence from mammograms taken one year and three years after RT. Results: FMP with NDA has the best classification power in classifying the training set of mammograms with lesions versus those without lesions. The model of FMP with NDA achieved a true positive (TP) rate of 82%, compared to 45.5% using FO with 1-NN. The best false positive (FP) rates were 0% and 3.6% in the contralateral breast at 1 year and 3 years after RT, and 10.9% in the ipsilateral breast at 3 years after RT. Conclusion: Texture analysis offers high dimensionality for differentiating breast tissue in mammograms. Using NDA to classify mammograms with lesions from mammograms without lesions, rather high TP and low FP rates can be achieved in mammographic surveillance of patients treated with conservative surgery combined with RT.
Current and Future Applications of Machine Learning for the US Army
2018-04-13
...designing from the unwieldy application of the first principles of flight controls, aerodynamics, blade propulsion, and so on, the designers turned... when the number of features runs into millions can become challenging. To overcome these issues, regularization techniques have been developed which... and compiled to run efficiently on either CPU or GPU architectures. 5) Keras [63] is a library that contains numerous implementations of commonly used...
A new synoptic scale resolving global climate simulation using the Community Earth System Model
NASA Astrophysics Data System (ADS)
Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana
2014-12-01
High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was at 0.25° grid spacing, and ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El-Niño Southern Oscillation variability were well simulated compared to standard resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and Tropical Cyclones. Associated single component runs and standard resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, costing 250 thousand processor-hours per simulated year and made about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."
Local kernel nonparametric discriminant analysis for adaptive extraction of complex structures
NASA Astrophysics Data System (ADS)
Li, Quanbao; Wei, Fajie; Zhou, Shenghan
2017-05-01
The linear discriminant analysis (LDA) is one of the most popular methods for linear feature extraction. It usually performs well when the global data structure is consistent with the local data structure. Other frequently used approaches to feature extraction usually require linearity, independence, or large-sample conditions. However, in real-world applications, these assumptions are not always satisfied or cannot be tested. In this paper, we introduce an adaptive method, local kernel nonparametric discriminant analysis (LKNDA), which integrates conventional discriminant analysis with nonparametric statistics. LKNDA is adept at identifying both complex nonlinear structures and ad hoc rules. Six simulation cases demonstrate that LKNDA has the advantages of both parametric and nonparametric algorithms and higher classification accuracy. A quartic unilateral kernel function may provide better robustness of prediction than other functions. LKNDA gives an alternative solution for complex nonlinear feature extraction or cases where the feature structure is unknown. Finally, the application of LKNDA to the complex feature extraction of financial market activities is proposed.
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.
1974-01-01
General classes of nonlinear and linear transformations were investigated for the reduction of the dimensionality of the classification (feature) space so that, for a prescribed dimension m of this space, the increase of the misclassification risk is minimized.
Analyzing linear spatial features in ecology.
Buettel, Jessie C; Cole, Andrew; Dickey, John M; Brook, Barry W
2018-06-01
The spatial analysis of dimensionless points (e.g., tree locations on a plot map) is common in ecology, for instance using point-process statistics to detect and compare patterns. However, the treatment of one-dimensional linear features (fiber processes) is rarely attempted. Here we appropriate the methods of vector sums and dot products, used regularly in fields like astrophysics, to analyze a data set of mapped linear features (logs) measured in 12 × 1-ha forest plots. For this demonstrative case study, we ask two deceptively simple questions: do trees tend to fall downhill, and if so, does slope gradient matter? Despite noisy data and many potential confounders, we show clearly that the topography (slope direction and steepness) of forest plots does matter to treefall. More generally, these results underscore the value of the mathematical methods of physics for problems in the spatial analysis of linear features, and the opportunities that interdisciplinary collaboration provides. This work provides scope for a variety of future ecological analyses of fiber processes in space. © 2018 by the Ecological Society of America.
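The vector-sum and dot-product machinery reduces to a few lines; the sketch below uses synthetic fall azimuths and an assumed downslope direction, with the mean resultant length R measuring how strongly falls align.

```python
# Vector-sum test of treefall direction against the downslope direction.
import numpy as np

rng = np.random.default_rng(7)
downslope = np.deg2rad(120.0)                          # assumed plot aspect (radians)
falls = rng.normal(loc=downslope, scale=0.8, size=200) # synthetic log fall azimuths

vectors = np.column_stack([np.cos(falls), np.sin(falls)])
resultant = vectors.sum(axis=0)
R = np.linalg.norm(resultant) / len(falls)             # 0 = uniform, 1 = perfectly aligned
alignment = resultant / np.linalg.norm(resultant) @ np.array(
    [np.cos(downslope), np.sin(downslope)])            # dot product with downslope unit vector
print(f"mean resultant length R = {R:.2f}, downslope alignment = {alignment:.2f}")
```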
Subsurface failure in spherical bodies. A formation scenario for linear troughs on Vesta’s surface
Stickle, Angela M.; Schultz, P. H.; Crawford, D. A.
2014-10-13
Many asteroids in the Solar System exhibit unusual, linear features on their surface. The Dawn mission recently observed two sets of linear features on the surface of the asteroid 4 Vesta. Geologic observations indicate that these features are related to the two large impact basins at the south pole of Vesta, though no specific mechanism of origin has been determined. Furthermore, the orientation of the features is offset from the center of the basins. Experimental and numerical results reveal that the offset angle is a natural consequence of oblique impacts into a spherical target. We demonstrate that a set of shear planes develops in the subsurface of the body opposite to the point of first contact. Moreover, these subsurface failure zones then propagate to the surface under combined tensile-shear stress fields after the impact to create sets of approximately linear faults on the surface. Comparison between the orientation of damage structures in the laboratory and failure regions within Vesta can be used to constrain impact parameters (e.g., the approximate impact point and likely impact trajectory).
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed, cross-platform computing environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
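Because each calibration run is independent and exchanges only a small parameter set and objective summary, a process pool parallelizes it directly; the sketch below uses a placeholder model function, not the authors' software.

```python
# Embarrassingly parallel calibration sweep: independent runs, no inter-process
# communication, small inputs and outputs per run.
from multiprocessing import Pool
import random

def run_model(params):
    """Hypothetical model run: returns a misfit for one parameter set."""
    a, b = params
    return (a - 1.3) ** 2 + (b + 0.7) ** 2    # placeholder objective

if __name__ == "__main__":
    random.seed(0)
    candidates = [(random.uniform(0, 3), random.uniform(-2, 1))
                  for _ in range(10000)]
    with Pool() as pool:                      # one worker per available core
        misfits = pool.map(run_model, candidates)
    best = min(zip(misfits, candidates))
    print("best parameters:", best[1], "misfit:", round(best[0], 4))
```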
Step width alters iliotibial band strain during running.
Meardon, Stacey A; Campbell, Samuel; Derrick, Timothy R
2012-11-01
This study assessed the effect of step width during running on factors related to iliotibial band (ITB) syndrome. Three-dimensional (3D) kinematics and kinetics were recorded from 15 healthy recreational runners during overground running under various step width conditions (preferred and at least ±5% of their leg length). Strain and strain rate were estimated from a musculoskeletal model of the lower extremity. Greater ITB strain and strain rate were found in the narrower step width condition (p < 0.001, p = 0.040). ITB strain was significantly (p < 0.001) greater in the narrow condition than in the preferred and wide conditions, and it was greater in the preferred condition than in the wide condition. ITB strain rate was significantly greater in the narrow condition than in the wide condition (p = 0.020). Polynomial contrasts revealed a linear increase in both ITB strain and strain rate with decreasing step width. We conclude that relatively small decreases in step width can substantially increase ITB strain as well as strain rates. Increasing step width during running, especially in persons whose running style is characterized by a narrow step width, may be beneficial in the treatment and prevention of running-related ITB syndrome.
Liver transplantation with piggyback anastomosis using a linear stapler: a case report.
Akbulut, S; Wojcicki, M; Kayaalp, C; Yilmaz, S
2013-04-01
The so-called piggyback technique of liver transplantation (PB-LT) preserves the recipient's caval vein, shortening the warm ischemic time. It can be reduced even further by using a linear stapler for the cavocaval anastomosis. Herein, we present a case of a patient undergoing a side-to-side, whole-organ PB-LT for cryptogenic cirrhosis. The upper and lower orifices of the donor caval vein were closed at the back table using a running 5-0 polypropylene suture. Three stay sutures were then placed on the caudal parts of both the recipient and donor caval veins with 5-mm venotomies. The endoscopic linear stapler was placed upward through the orifices and fired. A second stapler was placed more cranially and fired, resulting in an 8-9 cm long cavocavostomy. Some loose clips were flushed away from the caval lumen. The caval anastomosis was performed within 4 minutes; the time needed to close the caval vein stapler-insertion orifices (4-0 polypropylene running suture) before reperfusion was 1 minute. All other anastomoses were sutured in the typical fashion. The presented technique enables one to reduce the warm ischemic time, which can be of particular importance with marginal grafts. Copyright © 2013 Elsevier Inc. All rights reserved.
Quantum algorithms for Gibbs sampling and hitting-time estimation
Chowdhury, Anirban Narayan; Somma, Rolando D.
2017-02-01
In this paper, we present quantum algorithms for solving two problems regarding stochastic processes. The first algorithm prepares the thermal Gibbs state of a quantum system and runs in time almost linear in √(Nβ/Z) and polynomial in log(1/ϵ), where N is the Hilbert space dimension, β is the inverse temperature, Z is the partition function, and ϵ is the desired precision of the output state. Our quantum algorithm exponentially improves the dependence on 1/ϵ and quadratically improves the dependence on β of known quantum algorithms for this problem. The second algorithm estimates the hitting time of a Markov chain. For a sparse stochastic matrix P, it runs in time almost linear in 1/(ϵΔ^{3/2}), where ϵ is the absolute precision in the estimation and Δ is a parameter determined by P, whose inverse is an upper bound of the hitting time. Our quantum algorithm quadratically improves the dependence on 1/ϵ and 1/Δ of the analog classical algorithm for hitting-time estimation. Finally, both algorithms use tools recently developed in the context of Hamiltonian simulation, spectral gap amplification, and solving linear systems of equations.
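For contrast with the quantum speedup claimed here, the following is a minimal classical Monte Carlo sketch of hitting-time estimation for a small row-stochastic matrix; the 3-state chain and the target state are illustrative assumptions, not from the paper.

```python
# A classical Monte Carlo baseline for hitting-time estimation on a sparse
# stochastic matrix; the quantum algorithm improves on this kind of estimator.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])   # row-stochastic transition matrix

def hitting_time(start, target, max_steps=10_000):
    state, steps = start, 0
    while state != target and steps < max_steps:
        state = rng.choice(3, p=P[state])   # one step of the Markov chain
        steps += 1
    return steps

samples = [hitting_time(start=0, target=2) for _ in range(5_000)]
print("estimated mean hitting time:", np.mean(samples))
```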
Weidemann, Gabrielle; Tangen, Jason M; Lovibond, Peter F; Mitchell, Christopher J
2009-04-01
P. Perruchet (1985b) showed a double dissociation of conditioned responses (CRs) and expectancy for an airpuff unconditioned stimulus (US) in a 50% partial reinforcement schedule in human eyeblink conditioning. In the Perruchet effect, participants show an increase in CRs and a concurrent decrease in expectancy for the airpuff across runs of reinforced trials; conversely, participants show a decrease in CRs and a concurrent increase in expectancy for the airpuff across runs of nonreinforced trials. Three eyeblink conditioning experiments investigated whether the linear trend in eyeblink CRs in the Perruchet effect is a result of changes in associative strength of the conditioned stimulus (CS), US sensitization, or learning the precise timing of the US. Experiments 1 and 2 demonstrated that the linear trend in eyeblink CRs is not the result of US sensitization. Experiment 3 showed that the linear trend in eyeblink CRs is present with both a fixed and a variable CS-US interval and so is not the result of learning the precise timing of the US. The results are difficult to reconcile with a single learning process model of associative learning in which expectancy mediates CRs. Copyright (c) 2009 APA, all rights reserved.
Constraints on running vacuum model with H(z) and f σ8
NASA Astrophysics Data System (ADS)
Geng, Chao-Qiang; Lee, Chung-Chi; Yin, Lu
2017-08-01
We examine the running vacuum model with Λ(H) = 3νH² + Λ₀, where ν is the model parameter and Λ₀ is the cosmological constant. From the data of the cosmic microwave background radiation, weak lensing and baryon acoustic oscillation, along with the time-dependent Hubble parameter H(z) and weighted linear growth f(z)σ₈(z) measurements, we find that ν = (1.37^{+0.72}_{-0.95}) × 10⁻⁴, with the best-fit χ² value slightly smaller than that in the ΛCDM model.
Powerful Electromechanical Linear Actuator
NASA Technical Reports Server (NTRS)
Cowan, John R.; Myers, William N.
1994-01-01
Powerful electromechanical linear actuator designed to replace hydraulic actuator. Cleaner, simpler, and needs less maintenance. Features rotary-to-linear-motion converter with antibacklash gearing and position feedback via shaft-angle resolvers, which measure rotary motion.
Basso, Julia C; Morrell, Joan I
2017-10-01
Though voluntary wheel running (VWR) has been used extensively to induce changes in both behavior and biology, little attention has been given to the way in which different variables influence VWR. This lack of understanding has led to an inability to utilize this behavior to its full potential, possibly blunting its effects on the endpoints of interest. We tested how running experience, sex, gonadal hormones, and wheel apparatus influence VWR in a range of wheel access "doses". VWR increases over several weeks, with females eventually running 1.5 times farther and faster than males. Limiting wheel access can be used as a tool to motivate subjects to run but restricts maximal running speeds attained by the rodents. Additionally, circulating gonadal hormones regulate wheel running behavior, but are not the sole basis of sex differences in running. Limitations from previous studies include the predominant use of males, emphasis on distance run, variable amounts of wheel availability, variable light-dark cycles, and possible food and/or water deprivation. We designed a comprehensive set of experiments to address these inconsistencies, providing data regarding the "microfeatures" of running, including distance run, time spent running, running rate, bouting behavior, and daily running patterns. By systematically altering wheel access, VWR behavior can be finely tuned - a feature that we hypothesize is due to its positive incentive salience. We demonstrate how to maximize VWR, which will allow investigators to optimize exercise-induced changes in their behavioral and/or biological endpoints of interest. Published by Elsevier B.V.
Investigating Mars: Arsia Mons
2017-12-29
This image shows part of the southeastern flank of Arsia Mons, including the flat lying flows around the base of the volcano. These flows are located at the bottom of the image. Numerous small lava channels are visible aligned sub-parallel to the base of the volcano. Several narrow, lobate flows show the downslope direction from the top left of the image towards the bottom right. Running against this elevation change are large paired faults called graben. Graben form by faults that have allowed the material between them to "slide" down. The resultant topography is a linear depression. None of the lobate flows enter and then run along the fault valley, indicating that the faulting occurred after the lava flows. Arsia Mons is the southernmost of the Tharsis volcanoes. It is 270 miles (450 km) in diameter, almost 12 miles (20 km) high, and the summit caldera is 72 miles (120 km) wide. For comparison, the largest volcano on Earth is Mauna Loa. From its base on the sea floor, Mauna Loa measures only 6.3 miles high and 75 miles in diameter. A large volcanic crater known as a caldera is located at the summit of all of the Tharsis volcanoes. These calderas are produced by massive volcanic explosions and collapse. The Arsia Mons summit caldera is larger than many volcanoes on Earth. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times. It holds the record for longest working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 17691 Latitude: -11.2622 Longitude: 241 Instrument: VIS Captured: 2005-12-09 23:06 https://photojournal.jpl.nasa.gov/catalog/PIA22154
Wavelet-like bases for thin-wire integral equations in electromagnetics
NASA Astrophysics Data System (ADS)
Francomano, E.; Tortorici, A.; Toscano, E.; Ala, G.; Viola, F.
2005-03-01
In this paper, wavelets are used in solving, by the method of moments, a modified version of the thin-wire electric field integral equation in the frequency domain. The time domain electromagnetic quantities are obtained by using the inverse discrete fast Fourier transform. The retarded scalar electric and vector magnetic potentials are employed in order to obtain the integral formulation. The discretized model, generated by applying the direct method of moments via a point-matching procedure, results in a linear system with a dense matrix which has to be solved for each frequency of the Fourier spectrum of the time domain impressed source. Therefore, an orthogonal wavelet-like basis transform is used to sparsify the moment matrix. In particular, dyadic and M-band wavelet transforms have been adopted, so generating different sparse matrix structures. This leads to an efficient solution of the resulting sparse matrix equation. Moreover, a wavelet preconditioner is used to accelerate the convergence rate of the iterative solver employed. These numerical features are used in analyzing the transient behavior of a lightning protection system. In particular, the focus is on the transient performance, during operation, of the earth termination system of a lightning protection system or of the earth electrode of an electric power substation. The numerical results, obtained for a complex structure, are discussed and the features of the method are underlined.
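To make the sparsification step concrete, here is a minimal sketch, assuming the PyWavelets package and an illustrative smooth kernel in place of the actual thin-wire moment matrix: the dense operator is wavelet-transformed and thresholded, and the sparse coefficients are then used inside an iterative solver.

```python
# A minimal sketch of sparsifying a dense moment-method matrix with an
# orthogonal (dyadic, Daubechies-4) wavelet transform plus thresholding.
import numpy as np
import pywt

n = 256
x = np.linspace(0.0, 1.0, n)
Z = 1.0 / (1.0 + 50.0 * np.abs(x[:, None] - x[None, :]))  # stand-in kernel

# 2-D orthogonal wavelet transform of the matrix.
coeffs = pywt.wavedec2(Z, "db4", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

# Threshold small coefficients to obtain a sparse representation.
tau = 1e-3 * np.max(np.abs(arr))
arr[np.abs(arr) < tau] = 0.0
print("fraction of nonzeros kept:", np.count_nonzero(arr) / arr.size)

# Reconstruct the approximated operator; in practice the thresholded
# coefficients themselves are used as a sparse matrix inside the solver.
Z_approx = pywt.waverec2(
    pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "db4")
print("max reconstruction error:", np.max(np.abs(Z_approx - Z)))
```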
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Steve; Haji-Sheikh, Michael; Huntington, Andrew
The Voxtel VX-798 is a prototype X-ray pixel array detector (PAD) featuring a silicon sensor photodiode array of 48 × 48 pixels, each 130 µm × 130 µm × 520 µm thick, coupled to a CMOS readout application-specific integrated circuit (ASIC). The first synchrotron X-ray characterization of this detector is presented, and its ability to selectively count individual X-rays within two independent arrival time windows, a programmable energy range, and localized to a single pixel is demonstrated. During our first trial run at Argonne National Laboratory's Advanced Photon Source, the detector achieved a 60 ns gating time and 700 eV full width at half-maximum energy resolution, in agreement with design parameters. Each pixel of the PAD holds two independent digital counters, and the discriminator for X-ray energy features both an upper and lower threshold to window the energy of interest, discarding unwanted background. This smart-pixel technology allows energy and time resolution to be set and optimized in software. It is found that the detector linearity follows an isolated dead-time model, implying that megahertz count rates should be possible in each pixel. Measurement of the line and point spread functions showed negligible spatial blurring. When combined with the timing structure of the synchrotron storage ring, it is demonstrated that the area detector can perform both picosecond time-resolved X-ray diffraction and fluorescence spectroscopy measurements.
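The isolated (non-paralyzable) dead-time model mentioned here relates the measured rate m to the true rate n as m = n/(1 + nτ). A minimal sketch of the model and its inversion follows; the dead-time value is illustrative, not a measured property of the VX-798.

```python
# Isolated (non-paralyzable) dead-time model and its inversion.
import numpy as np

def measured_rate(n, tau):
    """Measured rate m for true rate n under an isolated dead time tau."""
    return n / (1.0 + n * tau)

def true_rate(m, tau):
    """Invert the model to correct a measured rate (requires m*tau < 1)."""
    return m / (1.0 - m * tau)

tau = 60e-9                             # illustrative dead time of 60 ns
n = np.array([1e5, 1e6, 5e6])           # true rates (counts/s)
m = measured_rate(n, tau)
print(np.allclose(true_rate(m, tau), n))  # True: correction recovers n
```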
Collins, Heather R; Zhu, Xun; Bhatt, Ramesh S; Clark, Jonathan D; Joseph, Jane E
2012-12-01
The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. This study parametrically varied demands on featural, first-order configural, or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing), or reflected generalized perceptual differentiation (i.e., differentiation that crosses category and processing type boundaries). ROIs were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories.
Algorithms for sorting unsigned linear genomes by the DCJ operations.
Jiang, Haitao; Zhu, Binhai; Zhu, Daming
2011-02-01
The double cut and join operation (abbreviated as DCJ) has been extensively used for genomic rearrangement. Although the DCJ distance between signed genomes with both linear and circular (uni- and multi-) chromosomes is well studied, the only known result for the NP-complete unsigned DCJ distance problem is an approximation algorithm for unsigned linear unichromosomal genomes. In this article, we study the problem of computing the DCJ distance on two unsigned linear multichromosomal genomes (abbreviated as UDCJ). We devise a 1.5-approximation algorithm for UDCJ by exploiting the distance formula for signed genomes. In addition, we show that UDCJ admits a weak kernel of size 2k and hence an FPT algorithm running in O(2^{2k}n) time.
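For readers unfamiliar with the operation itself, here is a minimal, hypothetical sketch of a single DCJ move on a genome encoded as adjacencies between gene extremities; the gene names and encoding are illustrative, not the paper's formalism.

```python
# A DCJ operation cuts two adjacencies and rejoins the four loose ends in one
# of two ways; adjacencies are modeled as tuples of gene extremities.
def dcj(adjacencies, cut1, cut2, rejoin_swap=True):
    """Cut adjacencies cut1=(a,b) and cut2=(c,d); rejoin as (a,d),(b,c)
    when rejoin_swap is True, else as (a,c),(b,d)."""
    adj = set(adjacencies) - {cut1, cut2}
    (a, b), (c, d) = cut1, cut2
    adj |= {(a, d), (b, c)} if rejoin_swap else {(a, c), (b, d)}
    return adj

# Two adjacencies on a linear chromosome: head of gene 1 to tail of gene 2, etc.
genome = {("1h", "2t"), ("2h", "3t")}
print(dcj(genome, ("1h", "2t"), ("2h", "3t")))
```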
Deformed Palmprint Matching Based on Stable Regions.
Wu, Xiangqian; Zhao, Qiushi
2015-12-01
Palmprint recognition (PR) is an effective technology for personal recognition. A main problem that deteriorates the performance of PR is the deformation of palmprint images. This problem becomes more severe on contactless occasions, in which images are acquired without any guiding mechanisms, and hence critically limits the applications of PR. To solve the deformation problem, in this paper, a model for non-linearly deformed palmprint matching is derived by approximating non-linearly deformed palmprint images with piecewise-linearly deformed stable regions. Based on this model, a novel approach for deformed palmprint matching, named key point-based block growing (KPBG), is proposed. In KPBG, an iterative M-estimator sample consensus algorithm based on scale invariant feature transform features is devised to compute piecewise-linear transformations that approximate the non-linear deformations of palmprints, and the stable regions complying with the linear transformations are then determined using a block growing algorithm. Palmprint feature extraction and matching are performed over these stable regions to compute matching scores for decision. Experiments on several public palmprint databases show that the proposed model and the KPBG approach can effectively solve the deformation problem in palmprint verification and outperform state-of-the-art methods.
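A minimal sketch in the spirit of KPBG's first stage follows, assuming OpenCV: SIFT keypoints are matched between two palmprint images and a robust sample-consensus estimate of a local linear (affine) transformation is computed. A single RANSAC affine fit stands in for the paper's iterative M-estimator consensus, and the file names are placeholders.

```python
# SIFT matching followed by a robust affine estimate (RANSAC-style consensus).
import cv2
import numpy as np

img1 = cv2.imread("palm_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder file names
img2 = cv2.imread("palm_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = matcher.match(des1, des2)

src = np.float32([kp1[m.queryIdx].pt for m in matches])
dst = np.float32([kp2[m.trainIdx].pt for m in matches])

# One robust affine fit with outlier rejection stands in for the paper's
# iterative M-estimator sample consensus over piecewise-linear regions.
A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                  ransacReprojThreshold=3.0)
print("affine transform:\n", A, "\ninlier ratio:", inliers.mean())
```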
NASA Technical Reports Server (NTRS)
Pelletier, R. E.
1984-01-01
A need exists for digitized information pertaining to linear features such as roads, streams, water bodies and agricultural field boundaries as component parts of a data base. For many areas where this data may not yet exist or is in need of updating, these features may be extracted from remotely sensed digital data. This paper examines two approaches for identifying linear features, one utilizing raw data and the other classified data. Each approach uses a series of data enhancement procedures including derivation of standard deviation values, principal component analysis and filtering procedures using a high-pass window matrix. Just as certain bands better classify different land covers, so too do these bands exhibit high spectral contrast by which boundaries between land covers can be delineated. A few applications for this kind of data are briefly discussed, including its potential in a Universal Soil Loss Equation Model.
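A minimal sketch of such an enhancement chain, assuming SciPy and synthetic data: a local standard-deviation image followed by a high-pass 3×3 window convolution to accentuate boundaries between land covers.

```python
# Local standard deviation plus a high-pass window matrix for edge enhancement.
import numpy as np
from scipy import ndimage

band = np.zeros((64, 64)); band[:, 32:] = 10.0   # two synthetic "land covers"
band += np.random.default_rng(0).normal(0, 0.5, band.shape)

# Local standard deviation over a 3x3 window highlights heterogeneous edges.
mean = ndimage.uniform_filter(band, size=3)
sqmean = ndimage.uniform_filter(band**2, size=3)
local_std = np.sqrt(np.maximum(sqmean - mean**2, 0.0))

# High-pass 3x3 window matrix, as used for linear-feature enhancement.
highpass = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=float)
edges = ndimage.convolve(band, highpass)
print("edge response peaks near column:",
      np.argmax(np.abs(edges).mean(axis=0)))   # near the boundary at col 32
```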
Interior car noise created by textured pavement surfaces : final report.
DOT National Transportation Integrated Search
1975-01-01
Because of widespread concern about the effect of textured pavement surfaces on interior car noise, sound pressure levels (SPL) were measured inside a test vehicle as it traversed 21 pavements with various textures. A linear regression analysis run o...
DOT National Transportation Integrated Search
2017-06-01
Performance analyses of newly constructed linear BMPs in retaining stormwater run-off from 1 in. precipitation in : post-construction highway applications and urban areas were conducted using numerical simulations and field : observation. A series of...
NASA Astrophysics Data System (ADS)
Sánchez, R.; Newman, D. E.; Mier, J. A.
2018-05-01
Fractional transport equations are used to build an effective model for transport across the running sandpile cellular automaton [Hwa et al., Phys. Rev. A 45, 7002 (1992), 10.1103/PhysRevA.45.7002]. It is shown that both temporal and spatial fractional derivatives must be considered to properly reproduce the sandpile transport features, which are governed by self-organized criticality, at least over sufficiently long or large scales. In contrast to previous applications of fractional transport equations to other systems, the specifics of sand motion require in this case that the spatial fractional derivatives used for the running sandpile must be of the completely asymmetrical Riesz-Feller type. Appropriate values for the fractional exponents that define these derivatives in the case of the running sandpile are obtained numerically.
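As a hedged illustration (the exact exponents, skewness and normalization for the sandpile are fitted numerically in the paper), the general space-time fractional transport equation with a Riesz-Feller spatial operator takes the form:

```latex
% Generic space-time fractional transport equation with a Riesz-Feller
% spatial operator; exponents (beta, alpha) and skewness theta are
% placeholders for the values fitted numerically for the running sandpile.
\[
  \frac{\partial^{\beta} n(x,t)}{\partial t^{\beta}}
    = D \; {}_{x}D^{\alpha}_{\theta}\, n(x,t),
  \qquad 0 < \beta \leq 1, \quad 0 < \alpha \leq 2,
\]
% The completely asymmetric case corresponds to the extremal skewness
% |theta| = min(alpha, 2 - alpha).
```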
Run-Reversal Equilibrium for Clinical Trial Randomization
Grant, William C.
2015-01-01
In this paper, we describe a new restricted randomization method called run-reversal equilibrium (RRE), which is a Nash equilibrium of a game where (1) the clinical trial statistician chooses a sequence of medical treatments, and (2) clinical investigators make treatment predictions. RRE randomization counteracts investigators' ability to observe treatment histories and forecast upcoming treatments. Computation of a run-reversal equilibrium reflects how the treatment history at a particular site is imperfectly correlated with the treatment imbalance for the overall trial. An attractive feature of RRE randomization is that treatment imbalance follows a random walk at each site, while treatment balance is tightly constrained and regularly restored for the overall trial. Less predictable and therefore more scientifically valid experiments can be facilitated by run-reversal equilibrium for multi-site clinical trials. PMID:26079608
Information Commons Features Cutting-Edge Conservation and Technology
ERIC Educational Resources Information Center
Gilroy, Marilyn
2011-01-01
This article features Richard J. Klarchek Information Commons (IC) at Loyola University Chicago, an all-glass library building on the shore of Chicago's Lake Michigan that is not only a state-of-the-art digital research library and study space--it also runs on cutting-edge energy technology. The building has attracted attention and visitors from…
Determination of the anaerobic threshold by a noninvasive field test in runners.
Conconi, F; Ferrari, M; Ziglio, P G; Droghetti, P; Codeca, L
1982-04-01
The relationship between running speed (RS) and heart rate (HR) was determined in 210 runners. On a 400-m track the athletes ran continuously from an initial velocity of 12-14 km/h to submaximal velocities varying according to the athlete's capability. The HRs were determined through ECG. In all athletes examined, a deflection from the expected linearity of the RS-HR relationship was observed at submaximal RS. The test-retest correlation for the velocities at which this deflection from linearity occurred (Vd) determined in 26 athletes was 0.99. The velocity at the anaerobic threshold (AT), established by means of blood lactate measurements, and Vd were coincident in 10 runners. The correlation between Vd and average running speed (mean RS) in competition was 0.93 in the 5,000 m (mean Vd = 19.13 +/- 1.08 km/h; mean RS = 20.25 +/- 1.15 km/h), 0.95 in the marathon (mean Vd = 18.85 +/- 1.15 km/h; mean RS = 17.40 +/- 1.14 km/h), and 0.99 in the 1-h race (mean Vd = 18.70 +/- 0.98 km/h; mean RS = 18.65 +/- 0.92 km/h), thus showing that AT is critical in determining the running pace in aerobic competitive events.
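A minimal sketch of locating such a deflection velocity (Vd) numerically, by fitting two linear segments to synthetic speed-heart rate data and picking the breakpoint with the smallest total squared error; this is an illustrative reconstruction, not the authors' protocol.

```python
# Two-segment piecewise-linear fit to find the deflection point of an RS-HR curve.
import numpy as np

speed = np.linspace(12, 20, 17)                       # km/h
hr = np.where(speed < 17, 110 + 5 * speed,
              110 + 5 * 17 + 2 * (speed - 17))        # deflection at 17 km/h
hr = hr + np.random.default_rng(1).normal(0, 0.5, speed.size)

def sse(x, y):
    coef = np.polyfit(x, y, 1)                        # one linear segment
    return float(np.sum((np.polyval(coef, x) - y) ** 2))

errors = {speed[k]: sse(speed[:k], hr[:k]) + sse(speed[k:], hr[k:])
          for k in range(3, speed.size - 2)}
vd = min(errors, key=errors.get)
print("estimated deflection velocity Vd ≈", round(vd, 1), "km/h")
```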
3D Wavelet-Based Filter and Method
Moss, William C.; Haase, Sebastian; Sedat, John W.
2008-08-12
A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.
LiDAR Point Cloud and Stereo Image Point Cloud Fusion
2013-09-01
LiDAR point cloud highlighting linear edge features ideal for automatic registration. (Fragmentary report text: the stereo point cloud with the least amount of automatic correlation errors was used, and Figure 12 shows the coverage of the WV1 stereo triplet.)
Sensory processing and world modeling for an active ranging device
NASA Technical Reports Server (NTRS)
Hong, Tsai-Hong; Wu, Angela Y.
1991-01-01
In this project, we studied world modeling and sensory processing for laser range data. World model data representation and operations were defined. Sensory processing algorithms for point processing and linear feature detection were designed and implemented. The interface between world modeling and sensory processing at the Servo and Primitive levels was investigated and implemented. At the Primitive level, linear feature detectors for edges were also implemented, analyzed and compared. The existing world model representations are surveyed. Also presented is the design and implementation of the Y-frame model, a hierarchical world model. The interfaces between the world model module and the sensory processing module are discussed, as well as the linear feature detectors that were designed and implemented.
The linear trend of headache prevalence and some headache features in school children.
Ozge, Aynur; Buğdayci, Resul; Saşmaz, Tayyar; Kaleağasi, Hakan; Kurt, Oner; Karakelle, Ali; Siva, Aksel
2007-04-01
The objective of this study was to determine the age- and sex-dependent linear trend of recurrent headache prevalence in schoolchildren in Mersin. A stratified sample comprised 5562 children; their detailed characteristics were previously published. The prevalence distribution of headache by age and sex showed a peak in the female population at the age of 11 (27.2%), with a plateau in the following years. The results from this large stratified random sample suggested that, in addition to socio-demographic features, detailed linear trend analysis shows that the headache features of children with headache have some specific characteristics dependent on age, gender and headache type. These results can constitute a basis for future epidemiologically based studies.
Topology Optimization for Reducing Additive Manufacturing Processing Distortions
2017-12-01
features that curl or warp under thermal load and are subsequently struck by the recoater blade/roller. Support structures act to wick heat away and...was run for 150 iterations. The material properties for all examples were Young's modulus E = 1 GPa, Poisson's ratio ν = 0.25, and thermal expansion...the element-birth model is significantly more computationally expensive for a full optimization run. Consider the computational complexity of a
Skeletal muscle architectural adaptations to marathon run training.
Murach, Kevin; Greever, Cory; Luden, Nicholas D
2015-01-01
We assessed lateral gastrocnemius (LG) and vastus lateralis (VL) architecture in 16 recreational runners before and after 12 weeks of marathon training. LG fascicle length decreased 10% while pennation angle increased 17% (p < 0.05). There was a significant correlation between diminished blood lactate levels and LG pennation angle change (r = 0.90). No changes were observed in VL. This is the first evidence that run training can modify skeletal muscle architectural features.
Universal relations for range corrections to Efimov features
Ji, Chen; Braaten, Eric; Phillips, Daniel R.; ...
2015-09-09
In a three-body system of identical bosons interacting through a large S-wave scattering length a, there are several sets of features related to the Efimov effect that are characterized by discrete scale invariance. Effective field theory was recently used to derive universal relations between these Efimov features that include the first-order correction due to a nonzero effective range r_s. We reveal a simple pattern in these range corrections that had not been previously identified. The pattern is explained by the renormalization group for the effective field theory, which implies that the Efimov three-body parameter runs logarithmically with the momentum scale at a rate proportional to r_s/a. The running Efimov parameter also explains the empirical observation that range corrections can be largely taken into account by shifting the Efimov parameter by an adjustable parameter divided by a. Furthermore, the accuracy of universal relations that include first-order range corrections is verified by comparing them with various theoretical calculations using models with nonzero range.
Zhang, Renqin; McEwen, Jean-Sabin
2018-05-22
Cu K-edge X-ray absorption near-edge spectra (XANES) have been widely used to study the properties of Cu-SSZ-13. In this Letter, the sensitivity of the XANES features to the local environment for a Cu+ cation with a linear configuration and a Cu2+ cation with a square-planar configuration in Cu-SSZ-13 is reported. When a Cu+ cation is bonded to H2O or NH3 in a linear configuration, the XANES has a strong peak at around 8983 eV. The intensity of this peak decreases as the linear configuration is broken. As for the Cu2+ cations in a square-planar configuration with a coordination number of 4, two peaks at around 8986 and 8993 eV are found. An intensity decrease for both peaks at around 8986 and 8993 eV is found in an (NH3)4_Z2Cu model as the N-Cu-N angle changes from 180 to 100°. We correlate these features to the variation of the 4p state by PDOS analysis. In addition, the feature peaks for both the Cu+ cation and the Cu2+ cation do not show a dependence on the Cu-N bond length. We further show that the feature peaks also change when the coordination number of the Cu cation is varied, while these feature peaks are independent of the zeolite topology. These findings help elucidate the experimental XANES features at an atomic and an electronic level.
Mo, Shiwei; Chow, Daniel H K
2018-05-19
Motor control, related to running performance and running-related injuries, is affected by the progression of fatigue during a prolonged run. Distance runners are usually recommended to train at or slightly above anaerobic threshold (AT) speed for improving performance. However, running at AT speed may result in accelerated fatigue. It is not clear how one adapts running gait pattern during a prolonged run at AT speed and whether there are differences between runners with different training experience. This study compared characteristics of stride-to-stride variability and complexity during a prolonged run at AT speed between novice runners (NR) and experienced runners (ER). Both NR (n = 17) and ER (n = 17) performed a treadmill run for 31 min at his/her AT speed. Stride interval dynamics were obtained throughout the run, with the middle 30 min equally divided into six time intervals (denoted as T1, T2, T3, T4, T5 and T6). The mean, coefficient of variation (CV) and scaling exponent alpha of stride intervals were calculated for each interval of each group. This study revealed that mean stride interval significantly increased with running time in a non-linear trend (p<0.001). The stride interval variability (CV) remained relatively constant for NR (p = 0.22) and changed nonlinearly for ER (p = 0.023) throughout the run. Alpha was significantly different between groups at T2, T5 and T6, and changed nonlinearly with running time for both groups, with slight differences. These findings provide insights into how the motor control system adapts to the progression of fatigue and evidence that long-term training enhances motor control. Although both ER and NR could regulate gait complexity to maintain AT speed throughout the prolonged run, ER also regulated stride interval variability to achieve the goal. Copyright © 2018. Published by Elsevier B.V.
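A minimal sketch of the two dynamics measures used above, computed on a synthetic stride-interval series: the coefficient of variation, and a detrended fluctuation analysis (DFA) estimate of the scaling exponent alpha.

```python
# CV and DFA scaling exponent alpha for a stride-interval time series.
import numpy as np

rng = np.random.default_rng(2)
stride = 1.0 + 0.03 * rng.standard_normal(512)   # stride intervals (seconds)

cv = stride.std() / stride.mean() * 100.0        # variability in percent

def dfa_alpha(x, scales=(8, 16, 32, 64)):
    y = np.cumsum(x - x.mean())                  # integrated profile
    fluct = []
    for s in scales:
        segments = y[: len(y) // s * s].reshape(-1, s)
        t = np.arange(s)
        # Detrend each window with a linear fit, then take RMS fluctuation.
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
               for seg in segments]
        fluct.append(np.mean(rms))
    # Alpha is the log-log slope of fluctuation versus scale.
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

print(f"CV = {cv:.2f}%  alpha = {dfa_alpha(stride):.2f}")  # white noise -> ~0.5
```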
Nonferromagnetic linear variable differential transformer
Ellis, James F.; Walstrom, Peter L.
1977-06-14
A nonferromagnetic linear variable differential transformer for accurately measuring mechanical displacements in the presence of high magnetic fields is provided. The device utilizes a movable primary coil inside a fixed secondary coil that consists of two series-opposed windings. Operation is such that the secondary output voltage is maintained in phase (depending on polarity) with the primary voltage. The transducer is well-suited to long cable runs and is useful for measuring small displacements in the presence of high or alternating magnetic fields.
NASA Astrophysics Data System (ADS)
Chowdhury, Aritra; Sevinsky, Christopher J.; Santamaria-Pang, Alberto; Yener, Bülent
2017-03-01
The cancer diagnostic workflow is typically performed by highly specialized and trained pathologists, for whom analysis is expensive both in terms of time and money. This work focuses on grade classification in colon cancer. The analysis is performed over 3 protein markers, namely E-cadherin, beta-actin and collagen IV. In addition, we also use a virtual Hematoxylin and Eosin (HE) stain. This study involves a comparison of various ways in which we can manipulate the information over the 4 different images of the tissue samples and come up with a coherent and unified response based on the data at our disposal. Pre-trained convolutional neural networks (CNNs) are the method of choice for feature extraction. The AlexNet architecture trained on the ImageNet database is used for this purpose. We extract a 4096-dimensional feature vector corresponding to the 6th layer in the network. A linear SVM is used to classify the data. The information from the 4 different images pertaining to a particular tissue sample is combined using the following techniques: soft voting, hard voting, multiplication, addition, linear combination, concatenation and multi-channel feature extraction. We observe that we generally obtain better results when we use a linear combination of the feature representations. We use 5-fold cross validation to perform the experiments. The best results are obtained when the various features are linearly combined together, resulting in a mean accuracy of 91.27%.
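A minimal sketch of this extraction-plus-linear-SVM pipeline, assuming torchvision's pretrained AlexNet and scikit-learn; the random tensors stand in for the stained tissue images, and taking the first fully connected layer's activation as the "fc6" feature is an assumption about the layer indexing.

```python
# Pretrained AlexNet fc6 features (4096-D) fed to a linear SVM.
import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

def fc6_features(images):
    with torch.no_grad():
        x = alexnet.avgpool(alexnet.features(images)).flatten(1)
        # First three classifier modules: Dropout, Linear(9216->4096), ReLU.
        for layer in list(alexnet.classifier)[:3]:
            x = layer(x)
    return x.numpy()                       # shape: (batch, 4096)

images = torch.randn(8, 3, 224, 224)       # stand-ins for tissue images
feats = fc6_features(images)
labels = [0, 1] * 4                        # illustrative grade labels
clf = LinearSVC().fit(feats, labels)       # linear SVM on CNN features
print("train accuracy:", clf.score(feats, labels))
```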
Roberts, Michael D; Toedebusch, Ryan G; Wells, Kevin D; Company, Joseph M; Brown, Jacob D; Cruthirds, Clayton L; Heese, Alexander J; Zhu, Conan; Rottinghaus, George E; Childs, Thomas E; Booth, Frank W
2014-01-01
We compared the nucleus accumbens (NAc) transcriptomes of generation 8 (G8), 34-day-old rats selectively bred for low (LVR) versus high voluntary running (HVR) behaviours in rats that never ran (LVRnon-run and HVRnon-run), as well as in rats after 6 days of voluntary wheel running (LVRrun and HVRrun). In addition, the NAc transcriptome of wild-type Wistar rats was compared. The purpose of this transcriptomics approach was to generate testable hypotheses as to possible NAc features that may be contributing to running motivation differences between lines. Ingenuity Pathway Analysis and Gene Ontology analyses suggested that ‘cell cycle’-related transcripts and the running-induced plasticity of dopamine-related transcripts were lower in LVR versus HVR rats. From these data, a hypothesis was generated that LVR rats might have less NAc neuron maturation than HVR rats. Follow-up immunohistochemistry in G9–10 LVRnon-run rats suggested that the LVR line inherently possessed fewer mature medium spiny (Darpp-32-positive) neurons (P < 0.001) and fewer immature (Dcx-positive) neurons (P < 0.001) than their G9–10 HVR counterparts. However, voluntary running wheel access in our G9–10 LVRs uniquely increased their Darpp-32-positive and Dcx-positive neuron densities. In summary, NAc cellularity differences and/or the lack of running-induced plasticity in dopamine signalling-related transcripts may contribute to low voluntary running motivation in LVR rats. PMID:24665095
NASA Astrophysics Data System (ADS)
Dingel, Benjamin
2017-01-01
In this invited paper, we summarize current developments in linear optical field modulators (LOFMs) for coherent multilevel optical transmitters. Our focus is the presentation of a new, novel LOFM design that provides beneficial and necessary features such as the lowest hardware component count, lowered insertion loss, smaller RF power consumption, smaller footprint, simple structure, and lowered cost. We refer to this modulator as the Double-Pass LOFM (DP-LOFM); it becomes the building block for a high-performance, linear Dual-Polarization, In-Phase-Quadrature-Phase (DP-IQ) modulator. We analyze its performance in terms of slope linearity, and present one of its unique features: a built-in compensation functionality that no other linear modulator has possessed until now.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
Thin Cloud Detection Method by Linear Combination Model of Cloud Image
NASA Astrophysics Data System (ADS)
Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.
2018-04-01
The existing cloud detection methods in photogrammetry often extract image features from remote sensing images directly and then use them to classify images into cloud or other things. But when the cloud is thin and small, these methods will be inaccurate. In this paper, a linear combination model of cloud images is proposed; by using this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. First, the automatic cloud detection program in this paper uses the linear combination model to split the cloud information and surface information in the transparent cloud images, then uses different image features to recognize the cloud parts. In consideration of computational efficiency, an AdaBoost classifier was introduced to combine the different features to establish a cloud classifier. The AdaBoost classifier can select the most effective features from many normal features, so the calculation time is largely reduced. Finally, we selected a cloud detection method based on tree structure and a multiple-feature detection method using an SVM classifier to compare with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
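A minimal sketch of the AdaBoost feature-combination step, assuming scikit-learn; the three per-pixel features and the synthetic cloud/surface data are illustrative stand-ins for those in the paper.

```python
# AdaBoost combining several per-pixel features for cloud/non-cloud labels.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 1000
brightness = np.concatenate([rng.normal(0.8, 0.1, n), rng.normal(0.4, 0.1, n)])
texture    = np.concatenate([rng.normal(0.2, 0.1, n), rng.normal(0.5, 0.1, n)])
ratio      = rng.normal(0.5, 0.2, 2 * n)          # a deliberately weak feature
X = np.column_stack([brightness, texture, ratio])
y = np.concatenate([np.ones(n), np.zeros(n)])     # 1 = cloud, 0 = surface

# Boosting weights weak learners so the most effective features dominate.
clf = AdaBoostClassifier(n_estimators=50)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```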
Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong
2018-01-01
Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot evaluate effectively the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted as the weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid underfitting or overfitting problem occurring in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods.
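A minimal sketch of the LWLR component, on synthetic one-dimensional data standing in for the constructed cross-domain features: each query point gets its own weighted least-squares fit, with Gaussian kernel weights.

```python
# Locally Weighted Linear Regression (LWLR): a per-query weighted fit.
import numpy as np

rng = np.random.default_rng(4)
X = np.linspace(0, 3, 60)
y = np.sin(2 * X) + 0.1 * rng.standard_normal(X.size)

def lwlr(x_query, X, y, bandwidth=0.3):
    A = np.column_stack([np.ones_like(X), X])       # design matrix [1, x]
    w = np.exp(-((X - x_query) ** 2) / (2 * bandwidth ** 2))
    W = np.diag(w)
    # Solve the weighted normal equations (A^T W A) beta = A^T W y.
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[0] + beta[1] * x_query

print("prediction at x=1.5:", lwlr(1.5, X, y), "truth:", np.sin(3.0))
```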
Han, Sungmin; Chu, Jun-Uk; Park, Jong Woong; Youn, Inchan
2018-05-15
Proprioceptive afferent activities recorded by a multichannel microelectrode have been used to decode limb movements to provide sensory feedback signals for closed-loop control in a functional electrical stimulation (FES) system. However, analyzing the high dimensionality of neural activity is one of the major challenges in real-time applications. This paper proposes a linear feature projection method for the real-time decoding of ankle and knee joint angles. Single-unit activity was extracted as a feature vector from proprioceptive afferent signals that were recorded from the L7 dorsal root ganglion during passive movements of ankle and knee joints. The dimensionality of this feature vector was then reduced using a linear feature projection composed of projection pursuit and negentropy maximization (PP/NEM). Finally, a time-delayed Kalman filter was used to estimate the ankle and knee joint angles. The PP/NEM approach had a better decoding performance than did other feature projection methods, and all processes were completed within the real-time constraints. These results suggested that the proposed method could be a useful decoding method to provide real-time feedback signals in closed-loop FES systems.
EEG-based mild depressive detection using feature selection methods and classifiers.
Li, Xiaowei; Hu, Bin; Sun, Shuting; Cai, Hanshu
2016-11-01
Depression has become a major health burden worldwide, and effective detection of such a disorder is a great challenge that requires the latest technological tools, such as electroencephalography (EEG). This EEG-based research seeks to find the prominent frequency band and brain regions that are most related to mild depression, as well as an optimal combination of classification algorithms and feature selection methods which can be used in future mild depression detection. An experiment based on a facial expression viewing task (Emo_block and Neu_block) was conducted, and EEG data of 37 university students were collected using a 128-channel HydroCel Geodesic Sensor Net (HCGSN). For discriminating mild depressive patients and normal controls, BayesNet (BN), Support Vector Machine (SVM), Logistic Regression (LR), k-nearest neighbor (KNN) and RandomForest (RF) classifiers were used. And BestFirst (BF), GreedyStepwise (GSW), GeneticSearch (GS), LinearForwardSelection (LFS) and RankSearch (RS) based on Correlation Feature Selection (CFS) were applied for linear and non-linear EEG feature selection. An Independent Samples T-test with Bonferroni correction was used to find the significantly discriminant electrodes and features. Data mining results indicate that optimal performance is achieved using a combination of the feature selection method GSW based on CFS and the classifier KNN for the beta frequency band. Accuracies achieved 92.00% and 98.00%, and AUC achieved 0.957 and 0.997, for Emo_block and Neu_block beta band data respectively. T-test results validate the effectiveness of the features selected by the search method GSW. A simplified EEG system with only FP1, FP2, F3, O2 and T3 electrodes was also explored with linear features, which yielded accuracies of 91.70% and 96.00%, and AUC of 0.952 and 0.972, for Emo_block and Neu_block respectively. Classification results obtained by GSW + KNN are encouraging and better than previously published results. In the spatial distribution of features, we find that the left parietotemporal lobe in the beta EEG frequency band has a greater effect on mild depression detection. And fewer EEG channels (FP1, FP2, F3, O2 and T3) combined with linear features may be good candidates for use in portable systems for mild depression detection. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
The global structure of hot star winds: Constraints from spectropolarimetry
NASA Astrophysics Data System (ADS)
Eversberg, Thomas
2000-11-01
Chapter 1. We present time-series of ultra-high S/N, high resolution spectra of the He II λ 4686 Å emission line in the O4I(n)f supergiant ζ Puppis, the brightest early-type O-star in the sky. These reveal stochastic, variable substructures in the line, which tend to move away from the line-center with time. Similar scaled-up features are well established in the strong winds of Wolf-Rayet stars (the presumed descendants of O stars), where they are explained by outward moving inhomogeneities (e.g., blobs, clumps, shocks) in the winds. If all hot-star winds are clumped like that of ζ Pup, as is plausible, then mass-loss rates based on recombination-line intensities will have to be revised downwards. Using a standard 'β' velocity law we deduce a value of β = 1.0-1.2 to account for the kinematics of these structures in the wind of ζ Pup. In addition to the small-scale stochastic variations we also find a slow systematic variation of the mean central absorption reversal. Chapter 2. We introduce a new polarimeter unit which, mounted at the Cassegrain focus of any telescope and fiber-connected to a fixed CCD spectrograph, is able to measure all Stokes parameters I, Q, U and V across spectral lines of bright stellar targets and other point sources in a quasi-simultaneous manner. Applying standard reduction techniques for linearly and circularly polarized light we are able to obtain photon-noise limited line polarization. We briefly outline the technical design of the polarimeter unit and the linear algebraic Mueller calculus for obtaining polarization parameters of any point source. In addition, practical limitations of the optical elements are outlined. We present first results obtained with our spectropolarimeter for four bright, hot-star targets: We confirm previous results for Hα in the bright Be star γ Cas and find linear depolarization features across the emission line complex C III/C IV (λ 5696/λ 5808 Å) of the WR+O binary γ2 Vel. We also find circular line polarization in the strongly magnetic Ap star 53 Cam across its Hα absorption line. No obvious line polarization features are seen across Hα in the variable O star θ1 Ori C above the σ ~ 0.2% instrumental level. Chapter 3. We present low resolution (~6 Å), high signal-to-noise spectropolarimetric observations obtained with the new William-Wehlau spectropolarimeter for the apparently brightest Wolf-Rayet star in the sky, the 78.5d WR+O binary γ2 Velorum. Quasi-simultaneous monitoring of all four Stokes parameters I(λ), q(λ), u(λ) and v(λ) was carried out over an interval of 31 nights centered on periastron. All emission lines in our observed wavelength interval (5200-6000 Å) show highly stochastic variations over the whole run. The phase-dependent behavior of the excess emission in the C III λ 5696 line can be related to the wind-wind collision phenomenon. Varying features of Stokes q and u are seen across the strong lines, probably as a result of variable electron scattering of mainly continuum light. The spherical symmetry of the WR wind is thus broken by the presence of the O companion and clumping in the WR wind. Similar features in the extended red wing of the C III λ 5696 emission line remain unexplained. No obvious circular line polarization features are seen across any emission line above the 3σ ~ 0.03% instrumental level.
Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y
2018-03-08
Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
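A minimal sketch of the classification idea, using plain PCA eigenvalues of a local covariance as a stand-in for the paper's robust PCA procedure; the neighbourhoods and the threshold are illustrative.

```python
# Classify a point-cloud neighbourhood as planar, linear or volumetric from
# the sorted eigenvalues of its covariance matrix.
import numpy as np

def classify_neighbourhood(points, ratio=0.05):
    """points: (n, 3) array of one local neighbourhood."""
    centred = points - points.mean(axis=0)
    evals = np.sort(np.linalg.eigvalsh(np.cov(centred.T)))[::-1]  # l1>=l2>=l3
    if evals[2] < ratio * evals[1] and evals[1] >= ratio * evals[0]:
        return "planar"       # two dominant directions (coplanar points)
    if evals[1] < ratio * evals[0]:
        return "linear"       # one dominant direction (collinear points)
    return "volumetric"

rng = np.random.default_rng(5)
plane = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                         1e-3 * rng.standard_normal(200)])
line = np.column_stack([rng.uniform(-1, 1, 200),
                        np.zeros(200), np.zeros(200)])
line += 1e-3 * rng.standard_normal((200, 3))
print(classify_neighbourhood(plane), classify_neighbourhood(line))
```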
Janakiraman, Kamal; Shenoy, Shweta; Sandhu, Jaspal Singh
2011-09-01
Surface features such as uneven playing surfaces, low impact absorption capacity and inappropriate friction/traction characteristics are associated with injury prevalence, whereas force impact during foot strike has been suggested to be an important mechanism of intravascular haemolysis during running. We aimed to evaluate intravascular haemolysis during running and to compare the effect of running on two different types of surfaces on haemolysis. We selected two surfaces (asphalt and grass) on which these athletes usually run. Participants were randomly assigned to group A (asphalt) or group B (grass), with 10 athletes in each group. Each athlete completed one hour of running at the calculated target heart rate (60-70%). Venous blood samples were collected before and immediately after running. We measured unconjugated bilirubin (UBR) (mg · dl(-1)), lactate dehydrogenase (LDH) (μ · ml(-1)), haemoglobin (g · l(-1)) and serum ferritin (ng · ml(-1)) as indicators of haemolysis. Athletes who ran on grass demonstrated a greater increase in the haematological parameters (UBR: P < 0.01, LDH: P < 0.05) than athletes who ran on asphalt (UBR: P < 0.05, LDH: P = 0.241). Our findings indicate that significant intravascular haemolysis occurs after prolonged running. Furthermore, we conclude that an uneven grass surface results in greater haemolysis than an asphalt road.
A novel method for calculating the energy cost of turning during running
Hatamoto, Yoichi; Yamada, Yosuke; Fujii, Tatsuya; Higaki, Yasuki; Kiyonaga, Akira; Tanaka, Hiroaki
2013-01-01
Although changes of direction are one of the essential locomotor patterns in ball sports, the physiological demand of turning during running has not been previously investigated. We proposed a novel approach by which to evaluate the physiological demand of turning. The purposes of this study were to establish a method of measuring the energy expenditure (EE) of a 180° turn during running and to investigate the effect of two different running speeds on the EE of a 180° turn. Eleven young, male participants performed measurement sessions at two different running speeds (4.3 and 5.4 km/hour). Each measurement session consisted of five trials, and each trial had a different frequency of turns. At both running speeds, as the turn frequency increased the gross oxygen consumption (V̇O2) also increased linearly (4.3 km/hour, r = 0.973; 5.4 km/hour, r = 0.996). The V̇O2 of a turn at 5.4 km/hour (0.55 [SD 0.09] mL/kg) was higher than at 4.3 km/hour (0.34 [SD 0.13] mL/kg) (P < 0.001). We conclude that the gross V̇O2 of running at a fixed speed with turns is proportional to turn frequency and that the EE of a turn is different at different running speeds. The Different Frequency Accumulation Method is a useful tool for assessing the physiological demands of complex locomotor activity. PMID:24379716
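A minimal sketch of the method's core regression step, on synthetic numbers: gross V̇O2 is regressed on turn frequency across trials, so the slope estimates the oxygen cost of a single turn.

```python
# Slope of VO2 vs turn frequency = estimated energy cost of one turn.
import numpy as np

turns_per_min = np.array([0, 5, 10, 15, 20])          # turn frequencies
vo2 = (12.0 + 0.34 * turns_per_min
       + np.random.default_rng(6).normal(0, 0.1, 5))  # gross VO2, mL/kg/min

slope, intercept = np.polyfit(turns_per_min, vo2, 1)
print(f"estimated cost per turn: {slope:.2f} mL/kg "
      f"(baseline running VO2 {intercept:.1f} mL/kg/min)")
```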
[Determination of trace gallium by graphite furnace atomic absorption spectrometry in urine].
Zhou, L Z; Fu, S; Gao, S Q; He, G W
2016-06-20
To establish a method for the determination of trace gallium in urine by graphite furnace atomic absorption spectrometry (GFAAS). Ammonium dihydrogen phosphate was used as the matrix modifier. The pyrolysis (Tpyr) and atomization temperatures were optimized for the determination of trace gallium, and the within-run and between-run precision and standard recoveries were evaluated. The method showed a linear relationship within the range of 0.20-80.00 μg/L (r=0.998). The within-run and between-run relative standard deviations (RSD) of repeated measurements at the 5.0, 10.0 and 20.0 μg/L concentration levels were 2.1%-5.5% and 2.3%-3.0%, respectively. The detection limit was 0.06 μg/L. The recoveries of gallium were 98.2%-101.1%. This method is simple, accurate, reliable and reproducible, with a low detection limit. It has been applied to the determination of trace gallium in urine samples from those requiring occupational health examination or poisoning diagnosis.
Wang, Runxiao; Zhao, Wentao; Li, Shujun; Zhang, Shunqi
2016-01-01
Both the linear leg spring model and the two-segment leg model with constant spring stiffness have been broadly used as template models to investigate bouncing gaits for legged robots with compliant legs. In addition to these two models, other stiffness leg spring models developed using inspiration from biological characteristics have the potential to improve the high-speed running capacity of spring-legged robots. In this paper, we investigate the effects of "J"-curve spring stiffness inspired by biological materials on running speeds of segmented legs during high-speed locomotion. A mathematical formulation of the relationship between the virtual leg force and the virtual leg compression is established. When the SLIP model and the two-segment leg model with constant spring stiffness and with "J"-curve spring stiffness have the same dimensionless reference stiffness, the two-segment leg model with "J"-curve spring stiffness reveals that (1) both the largest tolerated range of running speeds and the tolerated maximum running speed are found and (2) at fast running speeds from 25 to 40/92 m s⁻¹ both the tolerated range of landing angle and the stability region are the largest. It is suggested that the two-segment leg model with "J"-curve spring stiffness is more advantageous for high-speed running compared with the SLIP model and with constant spring stiffness.
Biswas, Dwaipayan; Cranny, Andy; Gupta, Nayaab; Maharatna, Koushik; Achner, Josy; Klemke, Jasmin; Jöbges, Michael; Ortmann, Steffen
2015-04-01
In this paper we present a methodology for recognizing three fundamental movements of the human forearm (extension, flexion and rotation) using pattern recognition applied to the data from a single wrist-worn inertial sensor. We propose that this technique could be used as a clinical tool to assess rehabilitation progress in neurodegenerative pathologies such as stroke or cerebral palsy by tracking the number of times a patient performs specific arm movements (e.g. prescribed exercises) with their paretic arm throughout the day. We demonstrate this with healthy subjects and stroke patients in a simple proof-of-concept study in which these arm movements are detected during an archetypal activity of daily living (ADL): 'making a cup of tea'. Data are collected from a tri-axial accelerometer and a tri-axial gyroscope located proximal to the wrist. In a training phase, movements are performed in a controlled environment and represented by a ranked set of 30 time-domain features. Using a sequential forward selection technique, for each set of feature combinations three clusters are formed using k-means clustering, followed by 10 runs of 10-fold cross-validation on the training data to determine the best feature combinations. For the testing phase, movements performed during the ADL are associated with each cluster label using a minimum distance classifier in a multi-dimensional feature space comprised of the best-ranked features, using Euclidean or Mahalanobis distance as the metric. Experiments were performed with four healthy subjects and four stroke survivors, and the results show that the proposed methodology can detect the three movements performed during the ADL with an overall average accuracy of 88% using the accelerometer data and 83% using the gyroscope data across all healthy subjects and arm movement types. The average accuracy across all stroke survivors was 70% using accelerometer data and 66% using gyroscope data. We also use a Linear Discriminant Analysis (LDA) classifier and a Support Vector Machine (SVM) classifier in association with the same set of features to detect the three arm movements and compare the results to demonstrate the effectiveness of our proposed methodology. Copyright © 2014 Elsevier B.V. All rights reserved.
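The testing-phase decision described above reduces to nearest-centroid classification in the selected feature space. A minimal sketch of a minimum distance classifier with the Mahalanobis metric; the feature dimensionality, synthetic cluster data and threshold-free assignment are illustrative stand-ins, not the paper's exact pipeline:

import numpy as np

def mahalanobis_min_distance(x, centroids, covariances):
    # Assign feature vector x to the nearest cluster centroid under the
    # Mahalanobis distance (Euclidean would use the identity covariance).
    dists = []
    for mu, cov in zip(centroids, covariances):
        diff = x - mu
        dists.append(float(diff @ np.linalg.inv(cov) @ diff))
    return int(np.argmin(dists))

# Hypothetical training phase: k-means (k = 3) on the selected time-domain
# features yields one centroid (and covariance) per movement cluster.
rng = np.random.default_rng(0)
train = [rng.normal(loc=m, scale=0.5, size=(50, 4)) for m in (0.0, 2.0, 4.0)]
centroids = [c.mean(axis=0) for c in train]
covs = [np.cov(c, rowvar=False) for c in train]

# Testing phase: label an unseen feature vector from the ADL recording.
x_new = rng.normal(loc=2.1, scale=0.5, size=4)
print("assigned movement cluster:", mahalanobis_min_distance(x_new, centroids, covs))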
ACON: a multipurpose production controller for plasma physics codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snell, C.
1983-01-01
ACON is a BCON controller designed to run large production codes on the CTSS Cray-1 or the LTSS 7600 computers. ACON can also be operated interactively, with input from the user's terminal. The controller can run one code or a sequence of up to ten codes during the same job. Options are available to get and save mass storage files, to perform Historian file updating operations, to compile and load source files, and to send out print and film files. Special features include the ability to retry after mass storage failures, backup options for saving files, startup messages for the various codes, and the ability to reserve specified amounts of computer time after successive code runs. ACON's flexibility and power make it useful for running a number of different production codes.
SPIRE Data Evaluation and Nuclear IR Fluorescence Processes.
1982-11-30
so that all isotopes can be dealt with in a single run rather than a number of separate runs. At lower altitudes the radiance calculation needs to be...approximation can be inferred from the work of Neuendorffer (1982) on developing an analytic expression for the absorption of a single non-overlapping line...personnel by using prominent atmospheric infrared features such as the OH maximum, the HNO3 maximum, the CO2 4.3 μm knee, etc. The azimuth however
NASA Technical Reports Server (NTRS)
Macdonald, H.; Waite, W.; Elachi, C.; Babcock, R.; Konig, R.; Gattis, J.; Borengasser, M.; Tolman, D.
1980-01-01
Imaging radar was evaluated as an adjunct to conventional petroleum exploration techniques, especially linear mapping. Linear features were mapped from several remote sensor data sources including stereo photography, enhanced LANDSAT imagery, SLAR radar imagery, enhanced SAR radar imagery, and SAR radar/LANDSAT combinations. Linear feature maps were compared with surface joint data, subsurface and geophysical data, and gas production in the Arkansas part of the Arkoma basin. The best LANDSAT enhanced product for linear detection was found to be a winter scene, band 7, uniform distribution stretch. Of the individual SAR data products, the VH (cross polarized) SAR radar mosaic provides for detection of most linears; however, none of the SAR enhancements is significantly better than the others. Radar/LANDSAT merges may provide better linear detection than a single sensor mapping mode, but because of operator variability, the results are inconclusive. Radar/LANDSAT combinations appear promising as an optimum linear mapping technique, if the advantages and disadvantages of each remote sensor are considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riyahi, S; Choi, W; Bhooshan, N
2016-06-15
Purpose: To compare linear and deformable registration methods for evaluation of tumor response to chemoradiation therapy (CRT) in patients with esophageal cancer. Methods: Linear and multi-resolution BSpline deformable registration were performed on pre- and post-CRT CT/PET images of 20 patients with esophageal cancer. For both registration methods, CT was registered using the Mean Square Error (MSE) metric; because PET is a different modality, it was warped with the transformation obtained from the same CT using the Mutual Information (MI) metric. Similarity of the warped CT/PET was quantitatively evaluated using normalized mutual information (NMI), and plausibility of the deformation field was assessed using the inverse consistency error. To evaluate tumor response, four groups of tumor features were examined: (1) conventional PET/CT features, e.g. SUV, diameter; (2) clinical parameters, e.g. TNM stage, histology; (3) spatial-temporal PET features that describe intensity, texture and geometry of the tumor; and (4) all features combined. Dominant features were identified using 10-fold cross-validation, and a Support Vector Machine (SVM) was deployed for tumor response prediction, with accuracy evaluated by the ROC Area Under the Curve (AUC). Results: The mean and standard deviation of NMI for deformable registration using the MSE metric were 0.2±0.054, versus 0.1±0.026 for linear registration, showing higher NMI for deformable registration. Likewise, for the MI metric, deformable registration achieved 0.13±0.035 compared to 0.12±0.037 for its linear counterpart. The inverse consistency error for the MSE metric was 4.65±2.49 for deformable and 1.32±2.3 for linear registration, i.e. smaller for linear registration; the same conclusion was obtained for MI. AUC for both linear and deformable registration was 1, showing no difference in terms of response evaluation. Conclusion: Deformable registration showed better NMI than linear registration, although the inverse consistency of the transformation was better for linear registration. We do not expect to see a significant difference when warping PET images using deformable or linear registration. This work was supported in part by the National Cancer Institute Grant R01CA172638.
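Registration quality above is scored with normalized mutual information; one common definition is NMI = (H(A) + H(B)) / H(A, B), computed from a joint intensity histogram. A minimal sketch (the bin count and synthetic volumes are assumptions, and reported NMI values depend on the exact normalization a study uses):

import numpy as np

def normalized_mutual_information(a, b, bins=32):
    # NMI = (H(A) + H(B)) / H(A, B); higher means better correspondence.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_x + h_y) / h_xy

# Hypothetical volumes: a fixed post-CRT CT and an imperfectly warped pre-CRT CT.
rng = np.random.default_rng(1)
fixed = rng.normal(size=(64, 64, 16))
warped = fixed + 0.3 * rng.normal(size=fixed.shape)
print(f"NMI: {normalized_mutual_information(fixed, warped):.3f}")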
2018-01-01
The energy-growth nexus has important policy implications for economic development. The results from many past studies that investigated the causality direction of this nexus can lead to misleading policy guidance. Using data on China from 1953 to 2013, this study shows that an application of causality tests to the raw time series of energy consumption and national output has masked a lot of information. The Toda-Yamamoto test with bootstrapped critical values and the newly proposed non-linear causality test reveal no causal relationship. However, a further application of these tests using series in different time-frequency domains obtained from wavelet decomposition indicates that while energy consumption Granger causes economic growth in the short run, the reverse is true in the medium term. A bidirectional causal relationship is found for the long run. This approach has proven to be superior in unveiling information on the energy-growth nexus that is useful for policy planning over different time horizons. PMID:29782534
NASA Astrophysics Data System (ADS)
Arendt, V.; Shalchi, A.
2018-06-01
We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
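A running diffusion coefficient of the kind computed by such test-particle codes is d_xx(t) = <(Δx)^2> / (2t), evaluated over an ensemble of orbits. A minimal sketch, with a toy random walk standing in for trajectories integrated in the turbulent magnetic field:

import numpy as np

rng = np.random.default_rng(42)
n_particles, n_steps, dt = 5000, 2000, 0.01
steps = rng.normal(scale=np.sqrt(dt), size=(n_particles, n_steps))
x = np.cumsum(steps, axis=1)                  # ensemble of trajectories x_i(t)
t = dt * np.arange(1, n_steps + 1)
d_xx = np.mean(x**2, axis=0) / (2.0 * t)      # running diffusion coefficient

# For pure diffusion d_xx(t) approaches a constant; Gaussianity of the
# distribution at fixed t can be checked with np.histogram of x[:, -1].
print(f"late-time d_xx = {d_xx[-1]:.3f}")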
Regional Climate Sensitivity- and Historical-Based Projections to 2100
NASA Astrophysics Data System (ADS)
Hébert, Raphaël.; Lovejoy, Shaun
2018-05-01
Reliable climate projections at the regional scale are needed in order to evaluate climate change impacts and inform policy. We develop an alternative method for projections based on the transient climate sensitivity (TCS), which relies on a linear relationship between the forced temperature response and the strongly increasing anthropogenic forcing. The TCS is evaluated at the regional scale (5° by 5°), and projections to 2100 are made accordingly using the high and low Representative Concentration Pathway emission scenarios. We find large spatial discrepancies between the regional TCS from 5 historical data sets and 32 global climate model (GCM) historical runs, and furthermore that the global mean GCM TCS is about 15% too high. Given that the GCM Representative Concentration Pathway scenario runs are mostly linear with respect to their (inadequate) TCS, we conclude that historical methods of regional projection are better suited, given that they are directly calibrated on the real-world (historical) climate.
Da Costa, M J; Colson, G; Frost, T J; Halley, J; Pesti, G M
2017-07-01
The objective of this analysis was to evaluate the effects of raising broilers under sex-separate and straight-run conditions for 2 broiler genetic lines. One-day-old Ross 308 and Ross 708 chicks (n = 1,344) were sex separated and placed in 48 pens according to rearing type: sex separate (28 males or 28 females) or straight-run (14 males + 14 females). There were 3 dietary phases: starter (zero to 17 d), grower (17 to 32 d), and finisher (32 to 48 d). Individual bird BW and group feed intakes were measured at 12, 17, 25, 32, 42, and 48 d to evaluate performance. At 33, 43, and 49 d, 4 birds per pen (straight-run pens: 2 males + 2 females) were sampled for carcass yield evaluation. Data were analyzed using linear and non-linear regression in order to estimate feed intake and cut-up weights at 3 separate market weights (1,700, 2,700, and 3,700 g). Returns over feed cost were estimated for a 1.8 million broiler complex for each rearing system and under 9 feed/meat price scenarios. Overall, rearing birds that were sex separated resulted in extra income that ranged from $48,824 to $330,300 per week, depending on the market targeted and the feed and meat price scenarios. Sex separation was shown to be especially important in disadvantageous scenarios in which feed prices were high. Gains from sex separation were markedly higher for the Ross 708 than for the Ross 308 broilers. Bird variability also was evaluated at the 3 separate market ages under the narrow ranges of BW that were targeted. Straight-run rearing decreased the number of birds present in the desired range. Depending on market weight, straight-run rearing resulted in 9.1 to 16.6% fewer birds than sex-separate rearing to meet marketing goals. It was concluded that sex separation can result in increased company profitability and have possible beneficial effects at the processing plant due to increased bird uniformity. © 2017 Poultry Science Association Inc.
On the zero-bias anomaly and Kondo physics in quantum point contacts near pinch-off.
Xiang, S; Xiao, S; Fuji, K; Shibuya, K; Endo, T; Yumoto, N; Morimoto, T; Aoki, N; Bird, J P; Ochiai, Y
2014-03-26
We investigate the linear and non-linear conductance of quantum point contacts (QPCs), in the region near pinch-off where Kondo physics has previously been connected to the appearance of the 0.7 feature. In studies of seven different QPCs, fabricated in the same high-mobility GaAs/AlGaAs heterojunction, the linear conductance is widely found to show the presence of the 0.7 feature. The differential conductance, on the other hand, does not generally exhibit the zero-bias anomaly (ZBA) that has been proposed to indicate the Kondo effect. Indeed, even in the small subset of QPCs found to exhibit such an anomaly, the linear conductance does not always follow the universal temperature-dependent scaling behavior expected for the Kondo effect. Taken collectively, our observations demonstrate that, unlike the 0.7 feature, the ZBA is not a generic feature of low-temperature QPC conduction. We furthermore conclude that the mere observation of the ZBA alone is insufficient evidence for concluding that Kondo physics is active. While we do not rule out the possibility that the Kondo effect may occur in QPCs, our results appear to indicate that its observation requires a very strict set of conditions to be satisfied. This should be contrasted with the case of the 0.7 feature, which has been apparent since the earliest experimental investigations of QPC transport.
Multimodal Deep Autoencoder for Human Pose Recovery.
Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng
2015-12-01
Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieving process, the mapping between 2D images and 3D poses is assumed to be linear in most of the traditional methods. However, their relationships are inherently non-linear, which limits recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. The experimental results on three data sets show that the recovery error has been reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.
Flexible cue combination in the guidance of attention in visual search
Brand, John; Oriet, Chris; Johnson, Aaron P.; Wolfe, Jeremy M.
2014-01-01
Hodsoll and Humphreys (2001) have assessed the relative contributions of stimulus-driven and user-driven knowledge on linearly- and nonlinearly-separable search. However, the target feature used to determine linear separability in their task (i.e., target size) was required to locate the target. In the present work, we investigated the contributions of stimulus-driven and user-driven knowledge when a linearly- or nonlinearly-separable feature is available but not required for target identification. We asked observers to complete a series of standard color × orientation conjunction searches in which target size was either linearly- or nonlinearly-separable from the size of the distractors. When guidance by color × orientation and by size information are both available, observers rely on whichever information results in the best search efficiency. This is the case irrespective of whether we provide target foreknowledge by blocking stimulus conditions, suggesting that feature information is used in both a stimulus-driven and user-driven fashion. PMID:25463553
Solution to the Problem of Calibration of Low-Cost Air Quality Measurement Sensors in Networks.
Miskell, Georgia; Salmond, Jennifer A; Williams, David E
2018-04-27
We provide a simple, remote, continuous calibration technique suitable for application in a hierarchical network featuring a few well-maintained, high-quality instruments ("proxies") and a larger number of low-cost devices. The ideas are grounded in a clear definition of the purpose of a low-cost network, defined here as providing reliable information on air quality at small spatiotemporal scales. The technique assumes linearity of the sensor signal. It derives running slope and offset estimates by matching mean and standard deviations of the sensor data to values derived from proxies over the same time. The idea is extremely simple: choose an appropriate proxy and an averaging-time that is sufficiently long to remove the influence of short-term fluctuations but sufficiently short that it preserves the regular diurnal variations. The use of running statistical measures rather than cross-correlation of sites means that the method is robust against periods of missing data. Ideas are first developed using simulated data and then demonstrated using field data, at hourly and 1 min time-scales, from a real network of low-cost semiconductor-based sensors. Despite the almost naïve simplicity of the method, it was robust for both drift detection and calibration correction applications. We discuss the use of generally available geographic and environmental data as well as microscale land-use regression as means to enhance the proxy estimates and to generalize the ideas to other pollutants with high spatial variability, such as nitrogen dioxide and particulates. These improvements can also be used to minimize the required number of proxy sites.
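A minimal sketch of the running slope-and-offset estimation described above: over a sliding window, the sensor gain is taken as the ratio of proxy to sensor standard deviations and the offset then matches the means. The window length, synthetic diurnal signal and drifting-sensor parameters are illustrative assumptions:

import numpy as np

def running_slope_offset(sensor, proxy, window):
    # Per-sample (slope, offset) so that slope*sensor + offset ~ proxy.
    slopes = np.full(len(sensor), np.nan)
    offsets = np.full(len(sensor), np.nan)
    for i in range(window, len(sensor)):
        s, p = sensor[i - window:i], proxy[i - window:i]
        slope = np.std(p) / np.std(s)             # match standard deviations
        offset = np.mean(p) - slope * np.mean(s)  # match means
        slopes[i], offsets[i] = slope, offset
    return slopes, offsets

# Hypothetical hourly data: a miscalibrated low-cost sensor against a proxy site.
rng = np.random.default_rng(3)
truth = 20 + 10 * np.sin(np.arange(24 * 60) * 2 * np.pi / 24)  # diurnal cycle
proxy = truth + rng.normal(0, 1, truth.size)
sensor = 0.7 * truth + 5 + rng.normal(0, 1, truth.size)        # gain 0.7, offset 5
slope, offset = running_slope_offset(sensor, proxy, window=24 * 7)
print(f"recovered sensor gain: {1 / np.nanmean(slope[-100:]):.2f} (true 0.7)")

The one-week window here follows the stated rule of thumb: long enough to average out short-term fluctuations, short enough to preserve the regular diurnal variation that carries the calibration information.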
Gender difference and age-related changes in performance at the long-distance duathlon.
Rüst, Christoph A; Knechtle, Beat; Knechtle, Patrizia; Pfeifer, Susanne; Rosemann, Thomas; Lepers, Romuald; Senn, Oliver
2013-02-01
The differences in gender- and the age-related changes in triathlon (i.e., swimming, cycling, and running) performances have been previously investigated, but data are missing for duathlon (i.e., running, cycling, and running). We investigated the participation and performance trends and the gender difference and the age-related decline in performance, at the "Powerman Zofingen" long-distance duathlon (10-km run, 150-km cycle, and 30-km run) from 2002 to 2011. During this period, there were 2,236 finishers (272 women and 1,964 men, respectively). Linear regression analyses for the 3 split times, and the total event time, demonstrated that running and cycling times were fairly stable during the last decade for both male and female elite duathletes. The top 10 overall gender differences in times were 16 ± 2, 17 ± 3, 15 ± 3, and 16 ± 5%, for the 10-km run, 150-km cycle, 30-km run and the overall race time, respectively. There was a significant (p < 0.001) age effect for each discipline and for the total race time. The fastest overall race times were achieved between the 25- and 39-year-olds. Female gender and increasing age were associated with increased performance times when additionally controlled for environmental temperatures and race year. There was only a marginal time period effect ranging between 1.3% (first run) and 9.8% (bike split) with 3.3% for overall race time. In accordance with previous observations in triathlons, the age-related decline in the duathlon performance was more pronounced in running than in cycling. Athletes and coaches can use these findings to plan the career in long-distance duathletes with the age of peak performance between 25 and 39 years for both women and men.
The valid measurement of running economy in runners.
Shaw, Andrew J; Ingham, Stephen A; Folland, Jonathan P
2014-10-01
Oxygen cost (OC) is commonly used to assess an athlete's running economy, although the validity of this measure is often overlooked. This study evaluated the validity of OC as a measure of running economy by comparison with the underlying energy cost (EC). In addition, the most appropriate method of removing the influence of body mass was determined to elucidate a measure of running economy that enables valid interindividual comparisons. One hundred and seventy-two highly trained endurance runners (males, n = 101; females, n = 71) performed a discontinuous submaximal running assessment, consisting of approximately seven 3-min stages (1 km·h−1 increments), to determine the absolute OC (L·km−1) and EC (kcal·km−1) for the four speeds below lactate turn point. Comparisons between models revealed linear ratio scaling to be a more suitable method than power function scaling for removing the influence of body mass for both EC (males, R = 0.589 vs 0.588; females, R = 0.498 vs 0.482) and OC (males, R = 0.657 vs 0.652; females, R = 0.532 vs 0.531). There were stepwise increases in EC and RER with increments in running speed (both, P < 0.001). However, no differences were observed for OC across the four monitored speeds (P = 0.54). Although EC increased with running speed, OC was insensitive to changes in running speed and, therefore, does not appear to provide a valid index of the underlying EC of running, likely due to the inability of OC to account for variations in substrate use. Therefore, EC should be used as the primary measure of running economy, and for runners, an appropriate scaling with body mass is recommended.
Warm-up with a weighted vest improves running performance via leg stiffness and running economy.
Barnes, K R; Hopkins, W G; McGuigan, M R; Kilding, A E
2015-01-01
To determine the effects of "strides" with a weighted-vest during a warm-up on endurance performance and its potential neuromuscular and metabolic mediators. A bout of resistance exercise can enhance subsequent high-intensity performance, but little is known about such priming exercise for endurance performance. A crossover with 5-7 days between an experimental and control trial was performed by 11 well-trained distance runners. Each trial was preceded by a warm-up consisting of a 10-min self-paced jog, a 5-min submaximal run to determine running economy, and six 10-s strides with or without a weighted-vest (20% of body mass). After a 10-min recovery period, runners performed a series of jumps to determine leg stiffness and other neuromuscular characteristics, another 5-min submaximal run, and an incremental treadmill test to determine peak running speed. Clinical and non-clinical forms of magnitude-based inference were used to assess outcomes. Correlations and linear regression were used to assess relationships between performance and underlying measures. The weighted-vest condition resulted in a very-large enhancement of peak running speed (2.9%; 90% confidence limits ±0.8%), a moderate increase in leg stiffness (20.4%; ±4.2%) and a large improvement in running economy (6.0%; ±1.6%); there were also small-moderate clear reductions in cardiorespiratory measures. Relationships between change scores showed that changes in leg stiffness could explain all the improvements in performance and economy. Strides with a weighted-vest have a priming effect on leg stiffness and running economy. It is postulated the associated major effect on peak treadmill running speed will translate into enhancement of competitive endurance performance. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Fast discovery and visualization of conserved regions in DNA sequences using quasi-alignment
Nagar, Anurag; Hahsler, Michael
2013-01-01
Background Next Generation Sequencing techniques are producing enormous amounts of biological sequence data and analysis becomes a major computational problem. Currently, most analysis, especially the identification of conserved regions, relies heavily on Multiple Sequence Alignment and its various heuristics such as progressive alignment, whose run time grows with the square of the number and the length of the aligned sequences and requires significant computational resources. In this work, we present a method to efficiently discover regions of high similarity across multiple sequences without performing expensive sequence alignment. The method is based on approximating edit distance between segments of sequences using p-mer frequency counts. Then, efficient high-throughput data stream clustering is used to group highly similar segments into so called quasi-alignments. Quasi-alignments have numerous applications such as identifying species and their taxonomic class from sequences, comparing sequences for similarities, and, as in this paper, discovering conserved regions across related sequences. Results In this paper, we show that quasi-alignments can be used to discover highly similar segments across multiple sequences from related or different genomes efficiently and accurately. Experiments on a large number of unaligned 16S rRNA sequences obtained from the Greengenes database show that the method is able to identify conserved regions which agree with known hypervariable regions in 16S rRNA. Furthermore, the experiments show that the proposed method scales well for large data sets with a run time that grows only linearly with the number and length of sequences, whereas for existing multiple sequence alignment heuristics the run time grows super-linearly. Conclusion Quasi-alignment-based algorithms can detect highly similar regions and conserved areas across multiple sequences. Since the run time is linear and the sequences are converted into a compact clustering model, we are able to identify conserved regions fast or even interactively using a standard PC. Our method has many potential applications such as finding characteristic signature sequences for families of organisms and studying conserved and variable regions in, for example, 16S rRNA. PMID:24564200
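The alignment-free distance at the heart of quasi-alignment compares p-mer frequency profiles of sequence segments. A minimal sketch (p = 3 and the toy segments are illustrative; the published method feeds such profiles into a high-throughput data stream clustering algorithm):

import numpy as np
from itertools import product

def pmer_profile(seq, p=3):
    # Normalized p-mer frequency counts of a DNA segment; segments with a
    # small profile distance approximate segments with a small edit distance.
    kmers = ["".join(t) for t in product("ACGT", repeat=p)]
    index = {k: i for i, k in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(seq) - p + 1):
        j = index.get(seq[i:i + p])
        if j is not None:            # skip p-mers with ambiguous bases
            counts[j] += 1
    return counts / max(counts.sum(), 1)

# Two hypothetical 16S rRNA segments: the distance is cheap to compute and
# needs no alignment.
seg_a = "ACGTACGTTGCAACGTAGCTAGCTA"
seg_b = "ACGTACGATGCAACGTAGCTAGGTA"   # two substitutions
dist = np.linalg.norm(pmer_profile(seg_a) - pmer_profile(seg_b))
print(f"profile distance: {dist:.4f}")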
Increase in Leg Stiffness Reduces Joint Work During Backpack Carriage Running at Slow Velocities.
Liew, Bernard; Netto, Kevin; Morris, Susan
2017-10-01
Optimal tuning of leg stiffness has been associated with better running economy. Running with a load is energetically expensive, which could have a significant impact on athletic performance where backpack carriage is involved. The purpose of this study was to investigate the impact of load magnitude and velocity on leg stiffness. We also explored the relationship between leg stiffness and running joint work. Thirty-one healthy participants ran overground at 3 velocities (3.0, 4.0, 5.0 m·s−1), whilst carrying 3 load magnitudes (0%, 10%, 20% of body weight). Leg stiffness was derived using the direct kinetic-kinematic method. Joint work data were previously reported in a separate study. Linear models were used to establish relationships between leg stiffness and load magnitude, velocity, and joint work. Our results found that leg stiffness did not increase with load magnitude. Increased leg stiffness was associated with reduced total joint work at 3.0 m·s−1, but not at faster velocities. The association between leg stiffness and joint work at slower velocities could be due to an optimal covariation between the skeletal and muscular components of leg stiffness, and limb attack angle. When running at a relatively comfortable velocity, greater leg stiffness may reflect a more energy-efficient running pattern.
Correlated Observations, the Law of Small Numbers and Bank Runs
Horváth, Gergely; Kiss, Hubert János
2016-01-01
Empirical descriptions and studies suggest that depositors generally observe a sample of previous decisions before deciding whether to keep their funds deposited or to withdraw them. These observed decisions may exhibit different degrees of correlation across depositors. In our model depositors decide sequentially and are assumed to follow the law of small numbers, in the sense that they believe a bank run is underway if the number of observed withdrawals in their sample is large. Theoretically, with highly correlated samples and infinitely many depositors, runs occur with certainty, while with random samples this need not be the case, as for many parameter settings the likelihood of bank runs is zero. We investigate the intermediate cases and find that (i) decreasing the correlation and (ii) increasing the sample size reduces the likelihood of bank runs, ceteris paribus. Interestingly, the multiplicity of equilibria, a feature of the canonical Diamond-Dybvig model that we also use, disappears almost completely in our setup. Our results have relevant policy implications. PMID:27035435
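A Monte Carlo sketch of the sampling mechanism described above; the sample size, withdrawal threshold, share of impatient depositors and run criterion are all illustrative assumptions, not the paper's calibration:

import numpy as np

def run_probability(n_depositors=1000, n_impatient=100, sample_size=10,
                    threshold=0.2, n_sims=500, seed=0):
    # Depositors decide in sequence. Impatient types always withdraw; a
    # patient depositor withdraws if the share of withdrawals in a random
    # sample of earlier decisions exceeds the threshold (law of small
    # numbers). Sampling only the most recent decisions instead would model
    # the highly correlated case.
    rng = np.random.default_rng(seed)
    runs = 0
    for _ in range(n_sims):
        impatient = rng.permutation([1] * n_impatient
                                    + [0] * (n_depositors - n_impatient))
        decisions = []
        for i in range(n_depositors):
            if impatient[i]:
                decisions.append(1)                 # 1 = withdraw
            elif len(decisions) >= sample_size:
                sample = rng.choice(decisions, size=sample_size, replace=False)
                decisions.append(int(sample.mean() > threshold))
            else:
                decisions.append(0)                 # 0 = keep deposited
        # Call it a run when patient depositors also withdrew en masse.
        if np.mean(decisions) > n_impatient / n_depositors + 0.1:
            runs += 1
    return runs / n_sims

print(f"estimated run probability: {run_probability():.3f}")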
Automata-Based Verification of Temporal Properties on Running Programs
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus; Lan, Sonie (Technical Monitor)
2001-01-01
This paper presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
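For finite traces, the observer automaton simply tracks open obligations. A minimal hand-coded sketch for the single property G(request -> F grant); a tool like the one described would generate such an observer from an arbitrary LTL formula rather than hard-coding it:

def monitor(trace):
    # Observer for G(request -> F grant) on a finite trace: track whether
    # an obligation (a 'request' awaiting a 'grant') is still open.
    pending = False
    for event in trace:      # events emitted by the instrumented program
        if event == "request":
            pending = True
        elif event == "grant":
            pending = False
    return not pending       # accept iff no obligation remains open

print(monitor(["request", "work", "grant", "request", "grant"]))  # True
print(monitor(["request", "work"]))                               # False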
Standard random number generation for MBASIC
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1976-01-01
A machine-independent algorithm is presented and analyzed for generating pseudorandom numbers suitable for the standard MBASIC system. The algorithm used is the polynomial congruential or linear recurrence modulo 2 method. Numbers, formed as nonoverlapping adjacent 28-bit words taken from the bit stream produced by the recurrence a_{m+532} = a_{m+37} + a_m (modulo 2), do not repeat within the projected age of the solar system, show no ensemble correlation, exhibit uniform distribution of adjacent numbers up to 19 dimensions, and do not deviate from random runs-up and runs-down behavior.
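A minimal sketch of the generator: the recurrence produces a bit stream and non-overlapping adjacent 28-bit words are read off it. The seed below is an arbitrary illustrative choice; the production algorithm specifies its own seeding:

def tausworthe_bits(seed_bits, n_bits):
    # Bit stream from a[m+532] = a[m+37] XOR a[m] (addition modulo 2).
    bits = list(seed_bits)                      # requires 532 seed bits
    for m in range(n_bits):
        bits.append(bits[m + 37] ^ bits[m])
    return bits[532:]

def random_28bit_words(seed_bits, n_words):
    # Pack the stream into non-overlapping adjacent 28-bit words.
    stream = tausworthe_bits(seed_bits, 28 * n_words)
    return [int("".join(map(str, stream[28 * i:28 * (i + 1)])), 2)
            for i in range(n_words)]

seed = [1 if (i % 7) < 3 else 0 for i in range(532)]   # hypothetical nonzero seed
print(random_28bit_words(seed, 3))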
NASA Astrophysics Data System (ADS)
Plummer, M.; Armour, E. A. G.; Todd, A. C.; Franklin, C. P.; Cooper, J. N.
2009-12-01
We present a program used to calculate intricate three-particle integrals for variational calculations of solutions to the leptonic Schrödinger equation with two nuclear centres in which inter-leptonic distances (electron-electron and positron-electron) are included directly in the trial functions. The program has been used so far in calculations of He-H¯ interactions and positron-H2 scattering; however, the precisely defined integrals are applicable to other situations. We include a summary discussion of how the program has been optimized from a 'legacy'-type code to a more modern high-performance code with a performance improvement factor of up to 1000. Program summary. Program title: tripleint.cc. Catalogue identifier: AEEV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 12 829. No. of bytes in distributed program, including test data, etc.: 91 798. Distribution format: tar.gz. Programming language: Fortran 95 (fixed format). Computer: Modern PC (tested on AMD processor) [1], IBM Power5 [2], Cray XT4 [3], similar. Operating system: Red Hat Linux [1], IBM AIX [2], UNICOS [3]. Has the code been vectorized or parallelized?: Serial (multi-core shared memory may be needed for some large jobs). RAM: Dependent on parameter sizes and option to use intermediate I/O. Estimates for practical use: 0.5-2 GBytes (with intermediate I/O); 1-4 GBytes (all-memory: the preferred option). Classification: 2.4, 2.6, 2.7, 2.9, 16.5, 16.10, 20. Nature of problem: The 'tripleint.cc' code evaluates three-particle integrals needed in certain variational (in particular: Rayleigh-Ritz and generalized-Kohn) matrix elements for solution of the Schrödinger equation with two fixed centres (the solutions may then be used in subsequent dynamic nuclear calculations). Specifically, the integrals are defined by Eq. (16) in the main text and contain terms proportional to r_ij × r_ik / r_jk, i≠j, i≠k, j≠k, with r_ij the distance between leptons i and j. The article also briefly describes the performance optimizations used to increase the speed of evaluation of the integrals enough to allow detailed testing and mapping of the effect of varying non-linear parameters in the variational trial functions. Solution method: Each integral is solved using prolate spheroidal coordinates and series expansions (with cut-offs) of the many-lepton expressions. 1-d integrals and sub-integrals are solved analytically by various means (the program automatically chooses the most accurate of the available methods for each set of parameters and function arguments), while two of the three integrations over the prolate spheroidal coordinates 'λ' are carried out numerically. Many similar integrals with identical non-linear variational parameters may be calculated with one call of the code. Restrictions: There are limits to the number of points for the numerical integrations, to the cut-off variable itaumax for the many-lepton series expansions, and to the maximum powers of Slater-like input functions. For runs near the limit of the cut-off variable and with certain small-magnitude values of variational non-linear parameters, the code can require large amounts of memory (an option using some intermediate I/O is included to offset this).
Unusual features: In addition to the program, we also present a summary description of the techniques and ideology used to optimize the code, together with accuracy tests and indications of performance improvement. Running time: The test runs take 1-15 minutes on HPCx [2], as indicated in Section 5 of the main text. A practical run with 729 integrals, 40 quadrature points per dimension and itaumax = 8 took 150 minutes on a PC (e.g., [1]); a similar run with 'medium' accuracy, e.g. for parameter optimization (see Section 2 of the main text), with 30 points per dimension and itaumax = 6 took 35 minutes. References: [1] PC: Memory: 2.72 GB, CPU: AMD Opteron 246 dual-core, 2×2 GHz, OS: GNU/Linux, kernel: Linux 2.6.9-34.0.2.ELsmp. [2] HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/ (visited May 2009). [3] HECToR, CRAY XT4 running UNICOS/lc, http://www.hector.ac.uk/ (visited May 2009).
Cascade Classification with Adaptive Feature Extraction for Arrhythmia Detection.
Park, Juyoung; Kang, Mingon; Gao, Jean; Kim, Younghoon; Kang, Kyungtae
2017-01-01
Detecting arrhythmia from ECG data is now feasible on mobile devices, but in this environment it is necessary to trade computational efficiency against accuracy. We propose an adaptive strategy for feature extraction that only considers normalized beat morphology features when running in a resource-constrained environment, but in a high-performance environment takes account of a wider range of ECG features. This process is augmented by a cascaded random forest classifier. Experiments on data from the MIT-BIH Arrhythmia Database showed classification accuracies from 96.59% to 98.51%, which are comparable to state-of-the-art methods.
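A minimal sketch of the adaptive-plus-cascade idea under stated assumptions: the feature dimensions, confidence threshold and two-forest cascade below are illustrative, not the paper's exact pipeline, and a real evaluation would score held-out data:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 600
morph = rng.normal(size=(n, 8))    # stand-in for beat-morphology features
extra = rng.normal(size=(n, 12))   # stand-in for the wider ECG features
y = (morph[:, 0] + 0.5 * extra[:, 0] > 0).astype(int)  # toy beat labels

def features(resource_constrained):
    # Constrained devices use morphology only; fast hosts append the rest.
    return morph if resource_constrained else np.hstack([morph, extra])

X = features(resource_constrained=False)
stage1 = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
stage2 = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# Cascade: accept confident stage-1 decisions, pass the rest to stage 2.
proba = stage1.predict_proba(X)
confident = proba.max(axis=1) >= 0.8
final = np.where(confident, proba.argmax(axis=1), stage2.predict(X))
print(f"stage 1 handled {confident.mean():.0%} of beats")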
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leff, N.H.; Sato, K.
1980-05-01
This paper presents an aggregate model for analyzing macroeconomic adjustment and short-run growth in less-developed countries. The model is built on standard theoretical lines, but an important finding of the paper is that, empirically, macroeconomic adjustment in the real sector of some of these economies differs from the professional expectations that may be prevalent in more-developed countries. This observation leads us to a reconsideration of the macroeconomics of the developing economies, and particularly of some short-term features that affect the long-run expansion path. The analysis also shows why these economies are often subject to chronic inflation and macroeconomic dependence on foreign-capital inflows. 13 references, 4 tables.
Alcator C-Mod Digital Plasma Control System
NASA Astrophysics Data System (ADS)
Wolfe, S. M.
2005-10-01
A new digital plasma control system (DPCS) has been implemented for Alcator C-Mod. The new system was put into service at the start of the 2005 run campaign and has been in routine operation since. The system consists of two 64-input, 16-output cPCI digitizers attached to a rack-mounted single-CPU Linux server, which performs both the I/O and the computation. During initial operation, the system was set up to directly emulate the original C-Mod ``Hybrid'' MIMO linear control system. Compatibility with the previous control system allows the existing user interface software and data structures to be used with the new hardware. The control program is written in IDL and runs under standard Linux. Interrupts are disabled during the plasma pulses to achieve real-time operation. A synchronous loop is executed with a nominal cycle rate of 10 kHz. Emulation of the original linear control algorithms requires 50 μsec per iteration, with the time evenly split between I/O and computation, so rates of about 20 kHz are achievable. Reliable vertical position control has been demonstrated with cycle rates as low as 5 kHz. Additional computations, including non-linear algorithms and adaptive response, are implemented as optional procedure calls within the main real-time loop.
NASA Astrophysics Data System (ADS)
Wei, Liu; Wei, Li; Peng, Ren; Qinglong, Lin; Shengdong, Zhang; Yangyuan, Wang
2009-09-01
A time-domain digitally controlled oscillator (DCO) is proposed. The DCO is composed of a free-running ring oscillator (FRO) and a flying-adder (FA) integrating two lap-selectors. With a coiled cell array that allows uniform loading capacitances of the delay cells, the FRO produces 32 outputs with consistent tap spacing for the FA as reference clocks. The FA uses the outputs from the FRO to generate the output of the DCO according to the control number, resulting in a linear dependence of the output period, rather than the frequency, on the digital control word. Thus the proposed DCO ensures good conversion linearity in the time domain, and is suitable for time-domain all-digital phase-locked loop applications. The DCO was implemented in a standard 0.13 μm digital logic CMOS process. The measurement results show that the DCO has a linear and monotonic tuning curve with a gain variation of less than 10%, and a very low root mean square period jitter of 9.3 ps in the output clocks. The DCO works well at supply voltages ranging from 0.6 to 1.2 V, and consumes 4 mW of power with a 500 MHz frequency output at a 1.2 V supply voltage.
The Search for Missing Baryons with Linearly Polarized Photons at Jefferson Lab
NASA Astrophysics Data System (ADS)
Cole, Philip
2006-05-01
The set of experiments forming the g8 run took place in Hall B of Jefferson Lab during the summers of 2001 and 2005. These experiments made use of a beam of linearly-polarized photons produced through coherent bremsstrahlung and represent the first time such a probe has been employed at Jefferson Lab. The scientific purpose of g8 is to improve the understanding of the underlying symmetry of the quark degrees of freedom in the nucleon, the nature of the parity exchange between the incident photon and the target nucleon, and the mechanism of associated strangeness production in electromagnetic reactions. With the high-quality beam of tagged and collimated linearly-polarized photons and the nearly complete angular coverage of the Hall-B spectrometer, we seek to extract the differential cross sections and attendant polarization observables for the photoproduction of vector mesons and kaons at photon energies ranging between 1.3 and 2.2 GeV. We achieved polarizations exceeding 90% and collected over six billion events, which, after our data cuts and analysis, should give us well over 100 times the world's data set. I shall report on the experimental details of establishing the Coherent Bremsstrahlung Facility and present some preliminary results from our first run.
Fruehwald-Pallamar, J; Hesselink, J R; Mafee, M F; Holzer-Fruehwald, L; Czerny, C; Mayerhoefer, M E
2016-02-01
To evaluate whether texture-based analysis of standard MRI sequences can help in the discrimination between benign and malignant head and neck tumors. The MR images of 100 patients with a histologically clarified head or neck mass, from two different institutions, were analyzed. Texture-based analysis was performed using texture analysis software, with region-of-interest measurements for 2D and 3D evaluation independently for all axial sequences. COC, RUN, GRA, ARM, and WAV features were calculated for all ROIs. Ten texture feature subsets were used for a linear discriminant analysis, in combination with k-nearest-neighbor classification. Benign and malignant tumors were compared with regard to texture-based values. There were differences in the images from different field-strength scanners, as well as from different vendors. For the differentiation of benign and malignant tumors, we found differences on STIR and T2-weighted images for 2D evaluation, and on contrast-enhanced T1-TSE with fat saturation for 3D evaluation. In a separate analysis of the 1.5 and 3 Tesla subgroups, more discriminating features were found. Texture-based analysis is a useful tool in the discrimination of benign and malignant tumors when performed on one scanner with the same protocol. We cannot recommend this technique for use in multicenter studies with clinical data. 2D/3D texture-based analysis can be performed in head and neck tumors. Texture-based analysis can differentiate between benign and malignant masses. Analyzed MR images should originate from one scanner with an identical protocol. © Georg Thieme Verlag KG Stuttgart · New York.
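A minimal sketch of the classification step, pairing a linear discriminant projection with k-nearest-neighbor classification; the feature matrix, labels and hyperparameters are synthetic placeholders for the COC/RUN/GRA/ARM/WAV features:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in: rows are ROIs, columns are texture features,
# labels are benign (0) / malignant (1).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))
y = rng.integers(0, 2, size=100)
X[y == 1, :5] += 1.0     # give malignant cases a texture shift

# LDA projects onto a discriminant axis; k-NN classifies in that space.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                    KNeighborsClassifier(n_neighbors=5))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())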
Primordial power spectrum features and consequences
NASA Astrophysics Data System (ADS)
Goswami, G.
2014-03-01
The present Cosmic Microwave Background (CMB) temperature and polarization anisotropy data is consistent with not only a power law scalar primordial power spectrum (PPS) with a small running but also with the scalar PPS having very sharp features. This has motivated inflationary models with such sharp features. Recently, even the possibility of having nulls in the power spectrum (at certain scales) has been considered. The existence of these nulls has been shown in linear perturbation theory. What shall be the effect of higher order corrections on such nulls? Inspired by this question, we have attempted to calculate quantum radiative corrections to the Fourier transform of the 2-point function in a toy field theory and address the issue of how these corrections to the power spectrum behave in models in which the tree-level power spectrum has a sharp dip (but not a null). In particular, we have considered the possibility of the relative enhancement of radiative corrections in a model in which the tree-level spectrum goes through a dip in power at a certain scale. The mode functions of the field (whose power spectrum is to be evaluated) are chosen such that they undergo the kind of dynamics that leads to a sharp dip in the tree level power spectrum. Next, we have considered the situation in which this field has quartic self interactions, and found one loop correction in a suitably chosen renormalization scheme. Thus, we have attempted to answer the following key question in the context of this toy model (which is as important in the realistic case): In the chosen renormalization scheme, can quantum radiative corrections be enhanced relative to tree-level power spectrum at scales, at which sharp dips appear in the tree-level spectrum?
Linear decomposition approach for a class of nonconvex programming problems.
Shen, Peiping; Wang, Chunfeng
2017-01-01
This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from existing methods, giving an interesting approach to solving the problem with a reduced running time.
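A minimal sketch of the grid-decomposition idea, assuming a toy problem whose nonconvexity enters through a single scalar variable t: gridding t leaves one ordinary linear program per grid point, and the best subproblem solution approximates the global optimum. The specific objective and constraints are illustrative:

import numpy as np
from scipy.optimize import linprog

# Toy problem: minimize sin(3t) + c.x subject to A x <= b - t d, x >= 0,
# 0 <= t <= 2. For fixed t this is an ordinary LP.
c = np.array([-1.0, 2.0])
A = np.array([[1.0, 1.0], [-1.0, 2.0]])
b = np.array([4.0, 3.0])
d = np.array([0.5, 0.2])

best = (np.inf, None, None)
for t in np.linspace(0.0, 2.0, 201):          # polynomially many grid points
    res = linprog(c, A_ub=A, b_ub=b - t * d, bounds=[(0, None)] * 2)
    if res.success and np.sin(3 * t) + res.fun < best[0]:
        best = (np.sin(3 * t) + res.fun, res.x, t)

print(f"approximate optimum {best[0]:.4f} at x = {best[1]}, t = {best[2]:.2f}")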
Biomotor structures in elite female handball players.
Katić, Ratko; Cavala, Marijana; Srhoj, Vatromir
2007-09-01
In order to identify biomotor structures in elite female handball players, factor structures of morphological characteristics and basic motor abilities of elite female handball players (N = 53) were determined first, followed by determination of relations between the morphological-motor factors obtained and a set of criterion variables evaluating situational motor abilities in handball. Factor analysis of 14 morphological measures produced three morphological factors: a factor of absolute voluminosity (mesoendomorph), a factor of longitudinal skeleton dimensionality, and a factor of transverse hand dimensionality. Factor analysis of 15 motor variables yielded five basic motor dimensions: a factor of agility, a factor of jumping explosive strength, a factor of throwing explosive strength, a factor of movement frequency rate, and a factor of running explosive strength (sprint). Four significant canonical correlations, i.e. linear combinations, explained the correlation between the set of eight latent variables of the morphological and basic motor space and the five variables of situational motoricity. The first canonical linear combination is based on the positive effect of the agility/coordination factors on the ability of fast movement without the ball. The second linear combination is based on the effect of jumping explosive strength and transverse hand dimensionality on ball manipulation, throwing precision, and speed of movement with the ball. The third linear combination is based on the determination of running explosive strength by the speed of movement with the ball, whereas the fourth combination is determined by throwing and jumping explosive strength, and agility on ball passing. The results obtained were consistent with the proposed model of selection in female handball (Srhoj et al., 2006), showing the speed of movement without the ball and the ability of ball manipulation to be the predominant specific abilities, as indicated by the first and second linear combinations.
Muthusamy, Hariharan; Polat, Kemal; Yaacob, Sazali
2015-01-01
In recent years, many research works have been published using speech-related features for speech emotion recognition; however, recent studies show that there is a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone filter outputs, timbral texture features, stationary wavelet transform based timbral texture features, and relative wavelet packet energy and entropy features were extracted from the emotional speech (ES) signals and their glottal waveforms (GW). Particle swarm optimization based clustering (PSOC) and wrapper based particle swarm optimization (WPSO) were proposed to enhance the discerning ability of the features and to select the discriminating features, respectively. Three different emotional speech databases were utilized to gauge the proposed method. An extreme learning machine (ELM) was employed to classify the different types of emotions. Different experiments were conducted, and the results show that the proposed method significantly improves speech emotion recognition performance compared to previous works published in the literature. PMID:25799141
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2017-04-01
Physically-based modeling is a wide-spread tool in understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time-step necessary for non-linear models while retaining the speed of the reduced model. This makes POD-DEIM applicable for groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage for variable Dirichlet boundaries compared to the original POD method. We have developed another extension for POD that targets these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using POD and DEIM. Advances in Water Resources, 97:130-143.
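A minimal sketch of the basic POD step discussed above, assuming a generic linear system as a stand-in for the discretized groundwater equations: snapshots of the full model are decomposed by SVD, and the leading modes project the operator onto a low-dimensional subspace. The sizes and the synthetic low-rank dynamics are illustrative:

import numpy as np

rng = np.random.default_rng(0)
n, n_snap, r = 500, 60, 10
A = -np.eye(n) + 0.01 * rng.normal(size=(n, n))   # toy full-model operator

# Synthetic snapshot matrix: heads evolving on a few decaying modes.
t = np.linspace(0.0, 1.0, n_snap)
modes = rng.normal(size=(n, 5))
snapshots = modes @ np.exp(-np.outer(np.arange(1, 6), t))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)           # information content per mode
phi = U[:, :r]                                    # POD basis (n x r)

A_r = phi.T @ A @ phi                             # reduced r x r operator
print(f"retained snapshot energy with {r} modes: {energy[r - 1]:.4f}")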
Hafen, G M; Hurst, C; Yearwood, J; Smith, J; Dzalilov, Z; Robinson, P J
2008-10-05
Cystic fibrosis is the most common fatal genetic disorder in the Caucasian population. Scoring systems for assessment of cystic fibrosis disease severity have been used for almost 50 years, without being adapted to the milder phenotype of the disease in the 21st century. The aim of the current project is to develop a new scoring system using a database and employing various statistical tools. This study protocol reports the development of the statistical tools needed to create such a scoring system. The evaluation is based on the cystic fibrosis database of the cohort at the Royal Children's Hospital in Melbourne. Initially, unsupervised clustering of all data records was performed using a range of clustering algorithms; in particular, incremental clustering algorithms were used. The clusters obtained were characterised using rules from decision trees, and the results were examined by clinicians. In order to obtain a clearer definition of classes, expert opinion of each individual's clinical severity was sought. After data preparation, including expert opinion of an individual's clinical severity on a 3-point scale (mild, moderate and severe disease), two multivariate techniques were used throughout the analysis to establish a method with better success in feature selection and model derivation: Canonical Analysis of Principal Coordinates (CAP) and Linear Discriminant Analysis (DA). A 3-step procedure was performed: (1) selection of features, (2) extraction of 5 severity classes from the 3 severity classes defined by expert opinion, and (3) establishment of calibration datasets. (1) Feature selection: CAP has a more effective "modelling" focus than DA. (2) Extraction of 5 severity classes: after variables were identified as important in discriminating contiguous CF severity groups on the 3-point scale (mild/moderate and moderate/severe), Discriminant Function (DF) analysis was used to determine the new groups: mild, intermediate moderate, moderate, intermediate severe and severe disease. (3) Generated confusion tables showed a misclassification rate of 19.1% for males and 16.5% for females, with a majority of misallocations into adjacent severity classes, particularly for males. Our preliminary data show that using CAP for feature selection and linear DA to derive the actual model in a CF database might be helpful in developing a scoring system. However, there are several limitations; in particular, more data entry points are needed to finalize a score, and the statistical tools need to be further refined and validated by re-running the statistical methods on the larger dataset.
Multiuser receiver for DS-CDMA signals in multipath channels: an enhanced multisurface method.
Mahendra, Chetan; Puthusserypady, Sadasivan
2006-11-01
This paper deals with the problem of multiuser detection in direct-sequence code-division multiple-access (DS-CDMA) systems in multipath environments. The existing multiuser detectors can be divided into two categories: (1) low-complexity poor-performance linear detectors and (2) high-complexity good-performance nonlinear detectors. In particular, in channels where the orthogonality of the code sequences is destroyed by multipath, detectors with linear complexity perform much worse than the nonlinear detectors. In this paper, we propose an enhanced multisurface method (EMSM) for multiuser detection in multipath channels. EMSM is an intermediate piecewise linear detection scheme with a run-time complexity linear in the number of users. Its bit error rate performance is compared with existing linear detectors, a nonlinear radial basis function detector trained by the new support vector learning algorithm, and Verdu's optimal detector. Simulations in multipath channels, for both synchronous and asynchronous cases, indicate that it always outperforms all other linear detectors, performing nearly as well as nonlinear detectors.
Errors in Tsunami Source Estimation from Tide Gauges
NASA Astrophysics Data System (ADS)
Arcas, D.
2012-12-01
Linearity of tsunami waves in deep water can be assessed by comparing the flow speed u to the wave propagation speed √(gh). In real tsunami scenarios this evaluation becomes impractical due to the absence of observational data on tsunami flow velocities in shallow water. Consequently, the extent of validity of the linear regime in the ocean is unclear. Linearity is the fundamental assumption behind tsunami source inversion processes based on linear combinations of unit propagation runs from a deep-water propagation database (Gica et al., 2008). The primary tsunami elevation data for such inversion are usually provided by National Oceanic and Atmospheric Administration (NOAA) deep-water tsunami detection systems known as DART. The use of tide gauge data for such inversions is more controversial due to the uncertainty of wave linearity at the depth of the tide gauge site. This study demonstrates the inaccuracies incurred in source estimation when tide gauge data are used in conjunction with a linear combination procedure for tsunami source estimation.
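As a back-of-the-envelope illustration, the linearity criterion u ≪ √(gh) can be evaluated directly; the depths, flow speeds, and the 0.05 cut-off below are illustrative assumptions only, not values from the study:

```python
import numpy as np

# Linearity check for a tsunami wave: compare flow speed u with the
# shallow-water celerity sqrt(g*h). All values are illustrative.
g = 9.81                       # gravitational acceleration, m/s^2
for h, u in [(4000.0, 0.05), (50.0, 1.5), (10.0, 3.0)]:
    c = np.sqrt(g * h)         # wave propagation speed at depth h
    ratio = u / c
    print(f"depth {h:7.1f} m: u/c = {ratio:.4f} "
          f"({'linear' if ratio < 0.05 else 'check non-linearity'})")
```

The ratio grows rapidly as depth decreases, which is why linearity at a shallow tide gauge site is in doubt while the deep-water DART assumption is safe.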
YADCLAN: yet another digitally-controlled linear artificial neuron.
Frenger, Paul
2003-01-01
This paper updates the author's 1999 RMBS presentation on digitally controlled linear artificial neuron design. Each neuron is based on a standard operational amplifier having excitatory and inhibitory inputs, variable gain, an amplified linear analog output and an adjustable threshold comparator for digital output. This design employs a 1-wire serial network of digitally controlled potentiometers and resistors whose resistance values are set and read back under microprocessor supervision. This system embodies several unique and useful features, including enhanced neuronal stability, dynamic reconfigurability and network extensibility. This artificial neuron is being employed for feature extraction and pattern recognition in an advanced robotic application.
AFTER: Batch jobs on the Apollo ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hofstadler, P.
1987-07-01
This document describes AFTER, a system that allows users of an Apollo ring to submit batch jobs to run without leaving themselves logged in to the ring. Jobs may be submitted to run at a later time or on a different node. Results from the batch job are mailed to the user through a designated mail system. AFTER features an understandable user interface, good online help, and site customization. This manual serves primarily as a user's guide to AFTER, although administration and installation are covered for completeness.
Li, Jin-ming; Zheng, Huai-jing; Wang, Lu-nan; Deng, Wei
2003-04-01
To establish a model for choosing controls of suitable concentration for internal quality control (IQC) in qualitative ELISA detection, and a method for consecutive plotting on the Levey-Jennings control chart when the reagent kit lot is changed. First, a series of control sera with 0.2, 0.5, 1.0, 2.0 and 5.0 ng/ml HBsAg, respectively, were assessed for within-run and between-run precision according to the NCCLS EP5 document. Then, a linear regression equation (y = bx + a) with best correlation coefficient (r > 0.99) was established based on the S/CO values of the series of control sera. Finally, controls could be chosen for IQC use such that the S/CO value calculated from the equation (y = bx + a), minus three times the between-run CV multiplied by that S/CO value, remained greater than 1.0. For consecutive plotting on the Levey-Jennings control chart when the ELISA kit lot was changed, the new lot kits were used to detect the same series of HBsAg control sera as above. Then, a new linear regression equation (y2 = b2x2 + a2) with best correlation coefficient was obtained. The old one (y1 = b1x1 + a1) could be obtained from the mean values of the precision assessment above. The S/CO value of a control serum detected by the new kit lot could be converted to that detected by the old kit lot based on the factor y2/y1. Therefore, plotting on the primary Levey-Jennings control chart could be continued. The within-run coefficients of variation (CV) of the ELISA method for control sera with 0.2, 0.5, 1.0, 2.0 and 5.0 ng/ml HBsAg were 11.08%, 9.49%, 9.83%, 9.18% and 7.25%, respectively, and the between-run CV were 13.25%, 14.03%, 15.11%, 13.29% and 9.92%. The linear regression equation with best correlation coefficient from a random test was y = 3.509x + 0.180. The suitable concentration of control serum for IQC could be 0.5 ng/ml or 1.0 ng/ml. The linear regression equations from the old lot and two other new lots of the ELISA kits were y1 = 3.550(x1) + 0.226, y2 = 3.238(x2) + 0.388, and y3 = 3.428(x3) + 0.148, respectively. The transfer factors of 0.960 (y2/y1) and 0.908 (y3/y1) were then obtained. These results show that the established model for selecting IQC control serum concentration and for consecutive plotting on the control chart when the reagent lot is changed is effective and practical.
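A minimal numerical sketch of this lot-change procedure; the S/CO values below are invented to be consistent with the fitted lines reported in the abstract, and evaluating the factor at the 1.0 ng/ml control is an assumption:

```python
import numpy as np

# Sketch of the lot-change adjustment: fit S/CO vs HBsAg concentration for the
# old and new kit lots on the same control sera, then rescale new-lot readings
# so plotting can continue on the existing Levey-Jennings chart.
conc = np.array([0.2, 0.5, 1.0, 2.0, 5.0])          # ng/ml control panel
sco_old = np.array([0.93, 2.00, 3.78, 7.33, 17.9])  # illustrative S/CO values
sco_new = np.array([1.04, 2.01, 3.63, 6.86, 16.6])

b1, a1 = np.polyfit(conc, sco_old, 1)               # y1 = b1*x + a1
b2, a2 = np.polyfit(conc, sco_new, 1)               # y2 = b2*x + a2

x_ctrl = 1.0                                        # chosen IQC control, ng/ml
factor = (b2 * x_ctrl + a2) / (b1 * x_ctrl + a1)    # transfer factor y2/y1
print(f"transfer factor: {factor:.3f}")             # ~0.96 with these values
# A new-lot S/CO reading is divided by this factor before plotting on the chart.
```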
NASA Technical Reports Server (NTRS)
Benowitz, E. G.; Niessner, A. F.
2003-01-01
We have successfully demonstrated a portion of the spacecraft attitude control and fault protection, running on a standard Java platform, and are currently in the process of taking advantage of the features provided by the RTSJ.
BASINS enables users to efficiently access nationwide environmental databases and local user-specified datasets, apply assessment and planning tools, and run a variety of proven nonpoint loading and water quality models within a single GIS format.
Real-time acquisition and tracking system with multiple Kalman filters
NASA Astrophysics Data System (ADS)
Beard, Gary C.; McCarter, Timothy G.; Spodeck, Walter; Fletcher, James E.
1994-07-01
The design of a real-time, ground-based, infrared tracking system with proven field success in tracking boost vehicles through burnout is presented with emphasis on the software design. The system was originally developed to deliver relative angular positions during boost, and thrust termination time to a sensor fusion station in real-time. Autonomous target acquisition and angle-only tracking features were developed to ensure success under stressing conditions. A unique feature of the system is the incorporation of multiple copies of a Kalman filter tracking algorithm running in parallel in order to minimize run-time. The system is capable of updating the state vector for an object at measurement rates approaching 90 Hz. This paper will address the top-level software design, details of the algorithms employed, system performance history in the field, and possible future upgrades.
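The parallel-filter design rests on a standard predict/update cycle; below is a minimal single-filter sketch with an illustrative constant-velocity angle model (the state vector and noise parameters of the fielded system are not given in the abstract and are assumptions here):

```python
import numpy as np

# Minimal Kalman filter step, the building block replicated across the
# parallel copies described above (one filter per tracked object).
def kalman_step(x, P, z, F, H, Q, R):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0 / 90.0                                   # ~90 Hz measurement rate
F = np.array([[1.0, dt], [0.0, 1.0]])             # state: angle, angular rate
H = np.array([[1.0, 0.0]])                        # angle-only measurement
Q = 1e-6 * np.eye(2)                              # illustrative process noise
R = np.array([[1e-4]])                            # illustrative measurement noise
x, P = np.zeros(2), np.eye(2)
for z in np.deg2rad([0.10, 0.12, 0.15, 0.19]):    # illustrative angle track
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```

Running many such independent filters in parallel is straightforward precisely because each step only touches its own small state and covariance.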
Modeling stock return distributions with a quantum harmonic oscillator
NASA Astrophysics Data System (ADS)
Ahn, K.; Choi, M. Y.; Dai, B.; Sohn, S.; Yang, B.
2017-11-01
We propose a quantum harmonic oscillator as a model for the market force which draws a stock return from short-run fluctuations to the long-run equilibrium. The stochastic equation governing our model is transformed into a Schrödinger equation, the solution of which features “quantized” eigenfunctions. Consequently, stock returns follow a mixed χ distribution, which describes Gaussian and non-Gaussian features. Analyzing the Financial Times Stock Exchange (FTSE) All Share Index, we demonstrate that our model outperforms traditional stochastic process models, e.g., the geometric Brownian motion and the Heston model, with smaller fitting errors and better goodness-of-fit statistics. In addition, making use of analogy, we provide an economic rationale of the physics concepts such as the eigenstate, eigenenergy, and angular frequency, which sheds light on the relationship between finance and econophysics literature.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-12-01
Search processes play key roles in various scientific fields. A widespread and effective search-process scheme, which we term Restart Search, is based on the following restart algorithm: i) set a timer and initiate a search task; ii) if the task was completed before the timer expired, then stop; iii) if the timer expired before the task was completed, then go back to the first step and restart the search process anew. In this paper, a branching feature is added to the restart algorithm: at every transition from the algorithm's third step to its first step, branching takes place, thus multiplying the search effort. This branching feature yields a search-process scheme which we term Branching Search. The running time of Branching Search is analyzed, closed-form results are established, and these results are compared to the corresponding running-time results of Restart Search.
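The restart algorithm itself is easy to simulate; the following sketch estimates the mean running time of Restart Search under an assumed heavy-tailed task-time distribution (the branching variant would additionally spawn parallel copies at each restart):

```python
import random

# Monte Carlo sketch of Restart Search: draw a task-completion time, restart
# whenever the timer expires, and accumulate the total running time.
def restart_search(timer, draw_task_time):
    total = 0.0
    while True:
        t = draw_task_time()
        if t <= timer:            # task finished before the timer expired
            return total + t
        total += timer            # timer expired: restart from scratch

# Illustrative heavy-tailed task time (Pareto), where restarting is known to help
draw = lambda: random.paretovariate(1.5)
runs = [restart_search(timer=2.0, draw_task_time=draw) for _ in range(10_000)]
print(sum(runs) / len(runs))      # mean running time under restarts
```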
A fast and high performance multiple data integration algorithm for identifying human disease genes
2015-01-01
Background Integrating multiple data sources is indispensable for improving disease gene identification. This is not only because disease genes associated with similar genetic diseases tend to lie close to each other in various biological networks, but also because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time should still be improved. Results In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene being associated with individual diseases is calculated by using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors. The average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and the average running time for each leave-one-out experiment is only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620
NASA Astrophysics Data System (ADS)
Pindsoo, Katri; Soomere, Tarmo
2016-04-01
The water level time series, and particularly temporal variations in water level extremes, usually do not follow any simple rule. Still, the analysis of linear trends in extreme values of surge levels is a convenient tool for obtaining a first approximation of future projections of the risks associated with coastal flooding. We demonstrate how this tool can be used to extract essential information about concealed changes in the forcing factors of seas and oceans. A specific feature of the Baltic Sea is that sequences of even moderate storms may raise the average sea level by almost 1 m for a few weeks. Such events occur once in a few years. They substantially contribute to the extreme water levels in the eastern Baltic Sea: the most devastating coastal flooding occurs when a strong storm from an unfortunate direction arrives during such an event. We focus on the separation of subtidal (weekly-scale) processes from those caused by a single storm, and on establishing how much these two kinds of events have contributed to the increase in extreme water levels in the eastern Baltic Sea. The analysis relies on numerically reconstructed sea levels produced by the RCO (Rossby Centre, Swedish Meteorological and Hydrological Institute) ocean model for 1961-2005. The reaction of the sea surface to single storm events is isolated from the local water level time series using a running average over a fixed interval. The distribution of average water levels has an almost Gaussian shape for averaging lengths from a few days to a few months. The residual (total water level minus the average) can be interpreted as a proxy of the local storm surges. Interestingly, for the 8-day average this residual almost exactly follows the exponential distribution. Therefore, for this averaging length the heights of local storm surges reflect an underlying Poisson process. This feature is universal for the entire eastern Baltic Sea coast. The slopes of the exponential distribution for low and high water levels are different, vary markedly along the coast and provide a useful quantification of the vulnerability of individual coastal segments with respect to coastal flooding. The formal linear trends in the extreme values of these water level components exhibit radically different spatial variations. The slopes of the trends in the weekly average are almost constant (~4 cm/decade for the 8-day running average) along the entire eastern Baltic Sea coast. This first of all indicates that the duration of storm sequences has increased. The trends for maxima of local storm surge heights represent almost the entire spatial variability in the water level extremes. Their slopes are almost zero at the open Baltic Proper coasts of the Western Estonian archipelago. Therefore, an increase in wind speed in strong storms is unlikely in this area. In contrast, the slopes in question reach 5-7 cm/decade in the eastern Gulf of Finland and Gulf of Riga. This feature suggests that wind direction in the strongest storms may have rotated in the northern Baltic Sea.
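A compact sketch of the decomposition described (running average plus residual surge proxy), with a random placeholder series standing in for the RCO sea-level reconstruction:

```python
import numpy as np

# Separate weekly-scale background from single-storm response: an 8-day
# running mean captures the background, the residual proxies local surges.
def running_mean(x, window):
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

hourly = np.random.randn(24 * 365)                # placeholder sea-level series
background = running_mean(hourly, window=24 * 8)  # 8-day running average
surge_proxy = hourly - background                 # residual, surge proxy

# Quick check of the exponential tail: the log of the survival function of
# the positive residuals should be close to a straight line if the surge
# heights reflect an underlying Poisson process, as the abstract reports.
pos = np.sort(surge_proxy[surge_proxy > 0])
survival = 1.0 - np.arange(1, len(pos) + 1) / len(pos)
```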
Jupyter Notebooks for Earth Sciences: An Interactive Training Platform for Seismology
NASA Astrophysics Data System (ADS)
Igel, H.; Chow, B.; Donner, S.; Krischer, L.; van Driel, M.; Tape, C.
2017-12-01
We have initiated a community platform (http://www.seismo-live.org) where Python-based Jupyter notebooks (https://jupyter.org) can be accessed and run without downloads or local software installations. The increasingly popular Jupyter notebooks allow the combination of markup language, graphics, and equations with interactive, executable Python code examples. Jupyter notebooks are a powerful and easy-to-grasp tool for students to develop entire projects, for scientists to collaborate and efficiently interchange evolving workflows, and for trainers to develop practical material. Utilizing the tmpnb project (https://github.com/jupyter/tmpnb), we link the power of Jupyter notebooks with an underlying server, such that notebooks can be run from anywhere, even on smart phones. We demonstrate the potential with notebooks for 1) learning the programming language Python, 2) basic signal processing, 3) an introduction to the ObsPy library (https://obspy.org) for seismology, 4) seismic noise analysis, 5) an entire suite of notebooks for computational seismology (the finite-difference method, pseudospectral methods, finite/spectral element methods, the finite-volume and discontinuous Galerkin methods, Instaseis), 6) rotational seismology, 7) making results in papers fully reproducible, 8) a rate-and-state friction toolkit, and 9) glacial seismology. The platform is run as a community project using GitHub. Submission of complementary Jupyter notebooks is encouraged. Extensions in the near future include linear(-ized) and nonlinear inverse problems.
Reliability of heart rate measures during walking before and after running maximal efforts.
Boullosa, D A; Barros, E S; del Rosso, S; Nakamura, F Y; Leicht, A S
2014-11-01
Previous studies on HR recovery (HRR) measures have utilized the supine and seated postures. However, the most common recovery mode in sport and clinical settings after running exercise is active walking. The aim of the current study was to examine the reliability of HR measures during walking (4 km/h) before and following a maximal test. Twelve endurance athletes performed an incremental running test on 2 days separated by 48 h. Absolute (coefficient of variation, CV, %) and relative (intraclass correlation coefficient, ICC) reliability of time domain and non-linear measures of HR variability (HRV) from 3 min recordings, and of HRR parameters over 5 min, were assessed. Moderate to very high reliability was identified for most HRV indices, with short-term components of time domain and non-linear HRV measures demonstrating the greatest reliability before (CV: 12-22%; ICC: 0.73-0.92) and after exercise (CV: 14-32%; ICC: 0.78-0.91). Most HRR indices and parameters of HRR kinetics demonstrated high to very high reliability, with HR values at a given point and the asymptotic value of HR being the most reliable (CV: 2.5-10.6%; ICC: 0.81-0.97). These findings demonstrate that these measures are reliable tools for the assessment of autonomic control of HR during walking before and after maximal efforts. © Georg Thieme Verlag KG Stuttgart · New York.
Landsat analysis for uranium exploration in Northeast Turkey
Lee, Keenan
1983-01-01
No uranium deposits are known in the Trabzon, Turkey region, and consequently, exploration criteria have not been defined. Nonetheless, by analogy with uranium deposits studied elsewhere, exploration guides are suggested to include dense concentrations of linear features, lineaments -- especially with northwest trend, acidic plutonic rocks, and alteration indicated by limonite. A suite of digitally processed images of a single Landsat scene served as the image base for mapping 3,376 linear features. Analysis of the linear feature data yielded two statistically significant trends, which in turn defined two sets of strong lineaments. Color composite images were used to map acidic plutonic rocks and areas of surficial limonitic materials. The Landsat interpretation yielded a map of these exploration guides that may be used to evaluate relative uranium potential. One area in particular shows a high coincidence of favorable indicators.
Wada, Yoshiro; Nishiike, Suetaka; Kitahara, Tadashi; Yamanaka, Toshiaki; Imai, Takao; Ito, Taeko; Sato, Go; Matsuda, Kazunori; Kitamura, Yoshiaki; Takeda, Noriaki
2016-11-01
After repeated snowboard exercises in the virtual reality (VR) world with increasing time lags in trials 3-8, the results suggest that adaptation to repeated visual-vestibulosomatosensory conflict in the VR world improved dynamic posture control and motor performance in the real world without the development of motion sickness. VR technology was used to examine the effects, in healthy subjects, of repeated snowboard exercise in the VR world with time lags between the visual scene and body rotation on head stability and slalom run performance. Forty-two healthy young subjects participated in the study. After trials 1 and 2 of snowboard exercise in the VR world without time lag, trials 3-8 were conducted with the computer-generated visual scene lagging board rotation by 0.1, 0.2, 0.3, 0.4, 0.5, and 0.6 s, respectively. Finally, trial 9 was conducted without time lag. Head linear accelerations and subjective slalom run performance were evaluated. The standard deviations of head linear acceleration in the inter-aural direction were significantly increased in trial 8, with a time lag of 0.6 s, but significantly decreased in trial 9 without a time lag, compared with those in trial 2 without a time lag. The subjective scores of slalom run performance were significantly decreased in trial 8, with a time lag of 0.6 s, but significantly increased in trial 9 without a time lag, compared with those in trial 2 without a time lag. Motion sickness was not induced in any subject.
Daniels, Robert D.; Bertke, Stephen; Dahm, Matthew M.; Yiin, James H.; Kubale, Travis L.; Hales, Thomas R.; Baris, Dalsu; Zahm, Shelia H.; Beaumont, James J.; Waters, Kathleen M.; Pinkerton, Lynne E.
2015-01-01
Objectives To examine exposure–response relationships between surrogates of firefighting exposure and select outcomes among previously studied US career firefighters. Methods Eight cancer and four non-cancer outcomes were examined using conditional logistic regression. Incidence density sampling was used to match each case to 200 controls on attained age. Days accrued in firefighting assignments (exposed-days), run totals (fire-runs) and run times (fire-hours) were used as exposure surrogates. HRs comparing 75th and 25th centiles of lagged cumulative exposures were calculated using loglinear, linear, log-quadratic, power and restricted cubic spline general relative risk models. Piecewise constant models were used to examine risk differences by time since exposure, age at exposure and calendar period. Results Among 19 309 male firefighters eligible for the study, there were 1333 cancer deaths and 2609 cancer incidence cases. Significant positive associations between fire-hours and lung cancer mortality and incidence were evident. A similar relation between leukaemia mortality and fire-runs was also found. The lung cancer associations were nearly linear in cumulative exposure, while the association with leukaemia mortality was attenuated at higher exposure levels and greater for recent exposures. Significant negative associations were evident for the exposure surrogates and colorectal and prostate cancers, suggesting a healthy worker survivor effect possibly enhanced by medical screening. Conclusions Lung cancer and leukaemia mortality risks were modestly increasing with firefighter exposures. These findings add to evidence of a causal association between firefighting and cancer. Nevertheless, small effects merit cautious interpretation. We plan to continue to follow the occurrence of disease and injury in this cohort. PMID:25673342
Burckhardt, Bjoern B; Tins, Jutta; Laeer, Stephanie
2014-08-05
Although serum and plasma are the biological fluids of choice for pharmacokinetic determination of drugs in adults, it is desirable to elucidate noninvasive methods which can be used for investigations in vulnerable groups such as children. If the drug properties permit sufficient penetration of the drug from blood into saliva, the latter is a useful matrix for noninvasive investigations. Based on its known physicochemical properties, the direct renin inhibitor aliskiren is one of the substances for which saliva concentrations could substitute for blood concentrations in pharmacokinetic investigations in children. Therefore, a reliable bioanalytical method was successfully developed and validated according to the criteria of current international bioanalytical guidelines to enable the comparison of blood and saliva concentrations of aliskiren. After purification of the fluid by solid-phase extraction, chromatographic separation was conducted using Xselect™ C18 CSH columns. Applying a mobile phase gradient of acidified methanol and acidified water at a flow rate of 0.4 ml/min, the column effluent was monitored during a total run time of 7.5 min by tandem mass spectrometry with electrospray ionization. Running in positive mode, the following transitions were monitored: 552.2 → 436.2 m/z for aliskiren and 425.3 → 351.2 m/z for benazepril (internal standard). Calibration curves were constructed in the range of 0.586-1200 ng/ml and were analyzed utilizing 1/x² weighted linear regression. Intra-run and inter-run precision were 3.8-8.1% and 3.4-8.9%, respectively. The method provides selectivity, linearity and accuracy. The validated method was then applied to determine aliskiren concentrations in saliva and blood of three healthy volunteers after oral administration of 300 mg aliskiren. Copyright © 2014 Elsevier B.V. All rights reserved.
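A minimal sketch of a 1/x²-weighted linear calibration of the kind described; all responses and intermediate concentration levels below are invented for illustration (note that numpy's polyfit weights multiply the unsquared residuals, hence the square root):

```python
import numpy as np

# 1/x^2-weighted linear calibration: points at low concentration get the
# largest weights, stabilizing back-calculation near the LLOQ.
conc = np.array([0.586, 2.34, 9.38, 37.5, 150.0, 600.0, 1200.0])  # ng/ml, illustrative levels
resp = 0.004 * conc + 0.001 + np.random.normal(0, 0.0005, conc.size)  # illustrative response

w = 1.0 / conc**2                                           # weights on squared residuals
slope, intercept = np.polyfit(conc, resp, 1, w=np.sqrt(w))  # polyfit weights the residuals
back_calc = (resp - intercept) / slope                      # back-calculated concentrations
```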
Smoliga, James M; Wirfel, Leah Anne; Paul, Danielle; Doarnberger, Mary; Ford, Kevin R
2015-07-16
The purpose of this study was to determine how unweighted running on a lower body positive pressure treadmill (LBPPT) modifies in-shoe regional loading. Ten experienced runners were fitted with pressure distribution measurement insoles and ran at 100%, 120%, and 140% of self-reported easy training pace on a LBPPT at 20%, 40%, 60%, 80%, and 100% body weight percentage settings (BWSet). Speeds and BWSet were presented in random order. A linear mixed effect model (p<0.05 significance level) was used to compare differences in whole foot and regional maximum in-shoe plantar force (FMAX), impulse, and relative load distribution across speeds and BWSet. There were significant main effects (p<0.001) of running speed and BWSet on whole foot FMAX and impulse. The model revealed 1.4% and 0.24% increases in whole foot FMAX (times body weight) and impulse, respectively, for every unit increase in body weight percentage. There was a significant main effect of BWSet on FMAX and relative load (p<0.05) for each of the nine foot regions examined, though four regions were not different between 80% and 100% BWSet. There was a significant (p<0.001) main effect of BWSet on forefoot to rearfoot relative load. Linear relationships were found between increases in BWSet and increases in in-shoe FMAX and impulse, resulting from regional changes in foot pressure that represent a shift towards forefoot loading, most evident below 80% BWSet. Estimating in-shoe regional loading parameters may be useful during rehabilitation and training to appropriately prescribe specific speed and body weight levels without exceeding certain critical peak force levels while running. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Grenfell, J. Lee; Shindell, D. T.; Koch, D.; Rind, D.; Hansen, James E. (Technical Monitor)
2002-01-01
We investigate the chemical (hydroxyl and ozone) and dynamical response to changing from present day to pre-industrial conditions in the Goddard Institute for Space Studies General Circulation Model (GISS GCM). We identify three main improvements not included in many other works. Firstly, our model includes interactive cloud calculations. Secondly, we reduce sulfate aerosol, which impacts NOx partitioning and hence Ox distributions. Thirdly, we reduce sea surface temperatures and increase ocean ice coverage, which impact water vapor and ground albedo, respectively. Changing the ocean data (hence water vapor and ozone) produces a potentially important feedback between the Hadley circulation and convective cloud cover. Our present day run (run 1, control run) global mean OH value was 9.8 x 10^5 molecules/cm3. For our best estimate of pre-industrial conditions (run 2), which featured modified chemical emissions, sulfate aerosol and sea surface temperatures/ocean ice, this value changed to 10.2 x 10^5 molecules/cm3. Reducing only the chemical emissions to pre-industrial levels in run 1 (run 3) resulted in this value increasing to 10.6 x 10^5 molecules/cm3. Reducing the sulfate in run 3 to pre-industrial levels (run 4) resulted in a small increase in global mean OH (10.7 x 10^5 molecules/cm3). Changing the ocean data in run 4 to pre-industrial levels (run 5) led to a reduction in this value to 10.3 x 10^5 molecules/cm3. Mean tropospheric ozone burdens were 262, 181, 180, 180, and 182 Tg for runs 1-5, respectively.
NASA Technical Reports Server (NTRS)
Gerstle, Walter
1989-01-01
Engineering problems sometimes involve the numerical solution of boundary value problems over domains containing geometric features with widely varying scales. Often, a detailed solution is required at one or more of these features. Small details in large structures may have profound effects upon global performance. Conversely, large-scale conditions may affect local performance. Many man-hours and CPU-hours are currently spent in modeling such problems. With the structural zooming technique, it is now possible to design an integrated program which allows the analyst to interactively focus upon a small region of interest, to modify the local geometry, and then to obtain highly accurate responses in that region which reflect both the properties of the overall structure and the local detail. A boundary integral equation analysis program, called BOAST, was recently developed for the stress analysis of cracks. This program can accurately analyze two-dimensional linear elastic fracture mechanics problems with far less computational effort than existing finite element codes. An interactive computer graphical interface to BOAST was written. The graphical interface had several requirements: it would be menu-driven, with mouse input; all aspects of input would be entered graphically; the results of a BOAST analysis would be displayed pictorially, but the user would also be able to probe interactively to get numerical values of displacement and stress at desired locations within the analysis domain; the entire procedure would be integrated into a single, easy-to-use package; and it would be written using calls to the graphics package called HOOPS. The program is nearing completion. All of the preprocessing features are working satisfactorily and have been debugged. The postprocessing features are under development, and rudimentary postprocessing should be available by the end of the summer. The program was developed and run on a VAX workstation, and must be ported to the SUN workstation. This activity is currently underway.
Multiview Locally Linear Embedding for Effective Medical Image Retrieval
Shen, Hualei; Tao, Dacheng; Ma, Dianfu
2013-01-01
Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval systems, among which visual features such as SIFT, LBP, and intensity histograms play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and thus traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or Laplacian eigenmaps (LE) can be employed to reduce the “curse of dimensionality”. Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from the different features. To validate the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods. PMID:24349277
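For context, the concatenate-then-reduce baseline criticized here looks roughly as follows with scikit-learn; feature names and dimensions are illustrative assumptions:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Baseline the paper improves upon: concatenate per-image feature vectors
# (e.g. SIFT histogram + LBP + intensity histogram) and reduce with plain LLE.
n_images = 300
sift = np.random.rand(n_images, 128)             # placeholder feature blocks
lbp = np.random.rand(n_images, 59)
intensity = np.random.rand(n_images, 32)

X = np.hstack([sift, lbp, intensity])            # long concatenated vector
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=16)
X_low = lle.fit_transform(X)                     # low-dimensional embedding
# Retrieval then ranks images by distance in this embedded space; MLLE instead
# keeps each feature space separate and learns weights per local patch.
```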
Computer-based testing of the modified essay question: the Singapore experience.
Lim, Erle Chuen-Hian; Seet, Raymond Chee-Seong; Oh, Vernon M S; Chia, Boon-Lock; Aw, Marion; Quak, Seng-Hock; Ong, Benjamin K C
2007-11-01
The modified essay question (MEQ), featuring an evolving case scenario, tests a candidate's problem-solving and reasoning ability, rather than mere factual recall. Although it is traditionally conducted as a pen-and-paper examination, our university has run the MEQ using computer-based testing (CBT) since 2003. We describe our experience with running the MEQ examination using the IVLE, or integrated virtual learning environment (https://ivle.nus.edu.sg), provide a blueprint for universities intending to conduct computer-based testing of the MEQ, and detail how our MEQ examination has evolved since its inception. An MEQ committee, comprising specialists in key disciplines from the departments of Medicine and Paediatrics, was formed. We utilized the IVLE, developed for our university in 1998, as the online platform on which we ran the MEQ. We calculated the number of man-hours (academic and support staff) required to run the MEQ examination, using either a computer-based or pen-and-paper format. With the support of our university's information technology (IT) specialists, we have successfully run the MEQ examination online, twice a year, since 2003. Initially, we conducted the examination with short-answer questions only, but have since expanded the MEQ examination to include multiple-choice and extended matching questions. A total of 1268 man-hours was spent in preparing for, and running, the MEQ examination using CBT, compared to 236.5 man-hours to run it using a pen-and-paper format. Despite being more labour-intensive, our students and staff prefer CBT to the pen-and-paper format. The MEQ can be conducted using a computer-based testing scenario, which offers several advantages over a pen-and-paper format. We hope to increase the number of questions and incorporate audio and video files, featuring clinical vignettes, to the MEQ examination in the near future.
Abundance of adult saugers across the Wind River watershed, Wyoming
Amadio, C.J.; Hubert, W.A.; Johnson, K.; Oberlie, D.; Dufek, D.
2006-01-01
The abundance of adult saugers Sander canadensis was estimated over 179 km of continuous lotic habitat across a watershed on the western periphery of their natural distribution in Wyoming. Three-pass depletions with raft-mounted electrofishing gear were conducted in 283 pools and runs among 19 representative reaches totaling 51 km during the late summer and fall of 2002. From 2 to 239 saugers were estimated to occur among the 19 reaches of 1.6-3.8 km in length. The estimates were extrapolated to a total population estimate (mean ± 95% confidence interval) of 4,115 ± 308 adult saugers over 179 km of lotic habitat. Substantial variation in mean density (range = 1.0-32.5 fish/ha) and mean biomass (range = 0.5-16.8 kg/ha) of adult saugers in pools and runs was observed among the study reaches. Mean density and biomass were highest in river reaches with pools and runs that had maximum depths of more than 1 m, mean daily summer water temperatures exceeding 20°C, and alkalinity exceeding 130 mg/L. No saugers were captured in the 39 pools or runs with maximum water depths of 0.6 m or less. Multiple-regression analysis and the information-theoretic approach were used to identify watershed-scale and instream habitat features accounting for the variation in biomass among the 244 pools and runs across the watershed with maximum depths greater than 0.6 m. Sauger biomass was greater in pools than in runs and increased as mean daily summer water temperature, maximum depth, and mean summer alkalinity increased and as dominant substrate size decreased. This study provides an estimate of adult sauger abundance and identifies habitat features associated with variation in their density and biomass across a watershed, factors important to the management of both populations and habitat. © Copyright by the American Fisheries Society 2006.
Capturing system level activities and impacts of mental health consumer-run organizations.
Janzen, Rich; Nelson, Geoffrey; Hausfather, Nadia; Ochocka, Joanna
2007-06-01
Since the 1970s, mental health consumer-run organizations have come to offer not only mutual support but also agendas for broader social change. Despite an awareness of the need for system level efforts that create supportive environments for their members, there has been limited research demonstrating how their system level activities can be documented or their impacts evaluated. The purpose of this paper is to present a method of evaluating systems change activities and impacts. The paper is based on a longitudinal study evaluating four mental health consumer-run organizations in Ontario, Canada. The study tracked system level activities and impacts using both qualitative and quantitative methodologies. The article begins by describing the development and implementation of these methods. Next it offers a critical analysis of the methods used. It concludes by reflecting on three lessons learned about capturing system level activities and impacts of mental health consumer-run organizations.
Parkes full polarization spectra of OH masers - II. Galactic longitudes 240° to 350°
NASA Astrophysics Data System (ADS)
Caswell, J. L.; Green, J. A.; Phillips, C. J.
2014-04-01
Full polarization measurements of 1665 and 1667 MHz OH masers at 261 sites of massive star formation have been made with the Parkes radio telescope. Here, we present the resulting spectra for 157 southern sources, complementing our previously published 104 northerly sources. For most sites, these are the first measurements of linear polarization, with good spectral resolution and complete velocity coverage. Our spectra exhibit the well-known predominance of highly circularly polarized features, interpreted as σ components of Zeeman patterns. Focusing on the generally weaker and rarer linear polarization, we found three examples of likely full Zeeman triplets (a linearly polarized π component, straddled in velocity by σ components), adding to the solitary example previously reported. We also identify 40 examples of likely isolated π components, contradicting past beliefs that π components might be extremely rare. These were recognized at 20 sites where a feature with high linear polarization on one transition is accompanied on the other transition by a matching feature, at the same velocity and also with significant linear polarization. Large velocity ranges are rare, but we find eight exceeding 25 km s-1, some of them indicating high-velocity blue-shifted outflows. Variability was investigated on time-scales of one year and over several decades. More than 20 sites (of 200) show high variability (intensity changes by factors of 4 or more) in some prominent features. Highly stable sites are extremely rare.
Does Accumulated Knowledge Impact Academic Performance in Cost Accounting?
ERIC Educational Resources Information Center
Alanzi, Khalid A.; Alfraih, Mishari M.
2017-01-01
Purpose: This quantitative study aims to examine the impact of accumulated knowledge of accounting on the academic performance of Cost Accounting students. Design/methodology/approach: The sample consisted of 89 students enrolled in the Accounting program run by a business college in Kuwait during 2015. Correlation and linear least squares…
Recent Performance Results of VPIC on Trinity
NASA Astrophysics Data System (ADS)
Nystrom, W. D.; Bergen, B.; Bird, R. F.; Bowers, K. J.; Daughton, W. S.; Guo, F.; Le, A.; Li, H.; Nam, H.; Pang, X.; Stark, D. J.; Rust, W. N., III; Yin, L.; Albright, B. J.
2017-10-01
Trinity is a new DOE compute resource now in production at Los Alamos National Laboratory. Trinity has several new and unique features including two compute partitions, one with dual socket Intel Haswell Xeon compute nodes and one with Intel Knights Landing (KNL) Xeon Phi compute nodes, use of on package high bandwidth memory (HBM) for KNL nodes, ability to configure KNL nodes with respect to HBM model and on die network topology in a variety of operational modes at run time, and use of solid state storage via burst buffer technology to reduce time required to perform I/O. An effort is in progress to optimize VPIC on Trinity by taking advantage of these new architectural features. Results of work will be presented on performance of VPIC on Haswell and KNL partitions for single node runs and runs at scale. Results include use of burst buffers at scale to optimize I/O, comparison of strategies for using MPI and threads, performance benefits using HBM and effectiveness of using intrinsics for vectorization. Work performed under auspices of U.S. Dept. of Energy by Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by LANL LDRD program.
Webster, K N; Dawson, T J
2003-09-01
The locomotory characteristics of kangaroos and wallabies are unusual, with both energetic costs and gait parameters differing from those of quadrupedal running mammals. The kangaroos and wallabies have an evolutionary history of only around 5 million years; their closest relatives, the rat-kangaroos, have a fossil record of more than 26 million years. We examined the locomotory characteristics of a rat-kangaroo, Bettongia penicillata. Locomotory energetics and gait parameters were obtained from animals exercising on a motorised treadmill at speeds from 0.6 m/s to 6.2 m/s. Aerobic metabolic costs increased as hopping speed increased, but were significantly different from the costs for a running quadruped; at the fastest speed, the cost of hopping was 50% of the cost of running. Therefore B. penicillata can travel much faster than quadrupedal runners at similar levels of aerobic output. The maximum aerobic output of B. penicillata was 17 times its basal metabolism. Increases in speed during hopping were achieved through increases in stride length, with stride frequency remaining constant. We suggest that these unusual locomotory characteristics are a conservative feature among the hopping marsupials, with an evolutionary history of 20-30 million years.
NASA Technical Reports Server (NTRS)
Caldwell, E. C.; Cowley, M. S.; Scott-Pandorf, M. M.
2010-01-01
Develop a model that simulates a human running in 0 G using the European Space Agency's (ESA) Subject Loading System (SLS). The model provides ground reaction forces (GRF) based on speed and pull-down forces (PDF). DESIGN: The theoretical basis for the Running Model was a simple spring-mass model. The dynamic properties of the spring-mass model express theoretical vertical GRF (GRFv) and shear GRF in the posterior-anterior direction (GRFsh) during running gait. ADAMS View software was used to build the model, which has a pelvis, thigh segment, shank segment, and a spring foot (see Figure 1). The model's movement simulates the joint kinematics of a human running at Earth gravity, with the aim of generating GRF data. DEVELOPMENT & VERIFICATION: ESA provided parabolic flight data of subjects running while using the SLS, for further characterization of the model's GRF. Peak GRF data were fit to a linear regression line dependent on PDF and speed. Interpolation and extrapolation of the regression equation provided a theoretical data matrix, which is used to drive the model's motion equations. Verification of the model was conducted by running the model at 4 different speeds, with each speed accounting for 3 different PDF. The model's GRF data fell within a 1-standard-deviation boundary derived from the empirical ESA data. CONCLUSION: The Running Model aids in conducting various simulations (potential scenarios include a fatigued runner or a powerful runner generating high loads at a fast cadence) to determine limitations for the T2 vibration isolation system (VIS) aboard the International Space Station. This model can predict how running with the ESA SLS affects the T2 VIS and may be used for other exercise analyses in the future.
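As an illustration of the spring-mass basis of the model, a minimal vertical-bounce stance simulation can generate a GRFv profile; the mass, leg stiffness, and touchdown velocity below are generic assumptions, not the ESA/SLS values:

```python
import numpy as np

# Minimal vertical spring-mass sketch of stance-phase GRFv.
m, k, g = 75.0, 20e3, 9.81            # runner mass (kg), leg stiffness (N/m), gravity
y, v, dt = 0.0, -1.0, 1e-4            # touchdown: zero compression, downward velocity
t, grf = 0.0, []

while True:
    f_spring = -k * y                  # compression (y < 0) pushes the body up
    a = f_spring / m - g
    v += a * dt
    y += v * dt
    t += dt
    grf.append(max(f_spring, 0.0))     # vertical GRF during stance
    if y >= 0 and v > 0:               # spring back to rest length: toe-off
        break

print(f"contact time {t*1e3:.0f} ms, peak GRFv {max(grf)/(m*g):.2f} body weights")
```

With these placeholder parameters the sketch yields a contact time near 200 ms and a peak GRFv near 3 body weights, in the range typical of level running; reducing g mimics the unweighting the SLS is meant to restore.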
Intensity related changes of running economy in recreational level distance runners.
Engeroff, Tobias; Bernardi, Andreas; Niederer, Daniel; Wilke, Jan; Vogt, Lutz; Banzer, Winfried
2017-09-01
Running economy (RE) is often described as a key determinant of running performance. The variety of currently used assessment methods, with different running intensities and outcomes, restricts interindividual comparability of RE in recreational level runners. The purpose of this study was to compare the influence of RE, assessed as oxygen cost (OC) and caloric unit cost (CUC), on running speed at individual physiological thresholds. Eighteen recreational runners performed: 1) a graded exercise test to estimate the first ventilatory threshold (VT1), respiratory compensation point (RCP) and maximal oxygen uptake (VO2max); 2) a discontinuous RE assessment to determine relative OC in milliliters per kilogram per kilometer (mL/kg/km) and CUC in kilocalories per kilogram per kilometer (kcal/kg/km) at three different running intensities: VT1, RCP and a third standardized reference point (TP) in between. OC (mL/kg/km; at VT1: 235.4±26.2; at TP: 227.8±23.4; at RCP: 224.9±21.9) and CUC (kcal/kg/km; at VT1: 1.18±0.13; at TP: 1.14±0.12; at RCP: 1.13±0.11) decreased with increasing intensity (P≤0.01). Controlling for the influence of sex, OC and CUC linearly correlated with running speed at RCP and VO2max (P≤0.01). RE, even assessed at low intensity, is strongly related to running performance in recreational athletes. Both calculation methods used (OC and CUC) are sensitive for monitoring intensity-related changes in substrate utilization. RE values decreased with higher running intensity, indicating an increase in anaerobic and a subsequent decrease in aerobic substrate utilization.
Characterizing the Mechanical Properties of Running-Specific Prostheses
Beck, Owen N.; Taboga, Paolo; Grabowski, Alena M.
2016-01-01
The mechanical stiffness of running-specific prostheses likely affects the functional abilities of athletes with leg amputations. However, each prosthetic manufacturer recommends prostheses based on subjective stiffness categories rather than performance based metrics. The actual mechanical stiffness values of running-specific prostheses (i.e. kN/m) are unknown. Consequently, we sought to characterize and disseminate the stiffness values of running-specific prostheses so that researchers, clinicians, and athletes can objectively evaluate prosthetic function. We characterized the stiffness values of 55 running-specific prostheses across various models, stiffness categories, and heights using forces and angles representative of those measured from athletes with transtibial amputations during running. Characterizing prosthetic force-displacement profiles with a 2nd degree polynomial explained 4.4% more of the variance than a linear function (p<0.001). The prosthetic stiffness values of manufacturer recommended stiffness categories varied between prosthetic models (p<0.001). Also, prosthetic stiffness was 10% to 39% less at angles typical of running 3 m/s and 6 m/s (10°-25°) compared to neutral (0°) (p<0.001). Furthermore, prosthetic stiffness was inversely related to height in J-shaped (p<0.001), but not C-shaped, prostheses. Running-specific prostheses should be tested under the demands of the respective activity in order to derive relevant characterizations of stiffness and function. In all, our results indicate that when athletes with leg amputations alter prosthetic model, height, and/or sagittal plane alignment, their prosthetic stiffness profiles also change; therefore variations in comfort, performance, etc. may be indirectly due to altered stiffness. PMID:27973573
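The linear-versus-quadratic comparison reported above can be illustrated with a synthetic force-displacement profile; all numbers below are placeholders, not measured prosthesis data:

```python
import numpy as np

# Compare linear vs. 2nd-degree polynomial fits of a (synthetic) prosthetic
# force-displacement profile, mirroring the variance-explained comparison.
disp = np.linspace(0.0, 0.06, 25)                      # displacement (m)
force = 16e3 * disp + 90e3 * disp**2                   # slightly stiffening spring (N)
force += np.random.normal(0, 20.0, disp.size)          # measurement noise

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

lin = np.polyval(np.polyfit(disp, force, 1), disp)
quad = np.polyval(np.polyfit(disp, force, 2), disp)
print(f"R^2 linear: {r_squared(force, lin):.4f}, quadratic: {r_squared(force, quad):.4f}")
```

The slope of the fitted curve at a given displacement is the local stiffness, which is why the quadratic fit captures the angle- and load-dependent stiffness changes the study reports.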
Iron oxide bands in the visible and near-infrared reflectance spectra of primitive asteroids
NASA Technical Reports Server (NTRS)
Jarvis, Kandy S.; Vilas, Faith; Gaffey, Michael J.
1993-01-01
High resolution reflectance spectra of primitive asteroids (C, P, and D class and associated subclasses) have commonly revealed an absorption feature centered at 0.7 microns attributed to an Fe(2+)-Fe(3+) charge transfer transition in iron oxides and/or oxidized iron in phyllosilicates. A smaller feature identified at 0.43 microns has been attributed to an Fe(3+) spin-forbidden transition in iron oxides. In the spectra of the two main-belt primitive asteroids 368 Haidea (D) and 877 Walkure (F), weak absorption features which were centered near the location of 0.60-0.65 microns and 0.80-0.90 microns prompted a search for features at these wavelengths and an attempt to identify their origin(s). The CCD reflectance spectra obtained between 1982-1992 were reviewed for similar absorption features located near these wavelengths. The spectra of asteroids in which these absorption features have been identified are shown. These spectra are plotted in order of increasing heliocentric distance. No division of the asteroids by class has been attempted here (although the absence of these features in the anhydrous S-class asteroids, many of which have presumably undergone full heating and differentiation should be noted). For this study, each spectrum was treated as a continuum with discrete absorption features superimposed on it. For each object, a linear least squares fit to the data points defined a simple linear continuum. The linear continuum was then divided into each spectrum, thus removing the sloped continuum and permitting the intercomparison of residual spectral features.
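The linear continuum-removal step described here is straightforward to reproduce; in this sketch the spectrum is synthetic, with a Gaussian band planted at 0.7 microns:

```python
import numpy as np

# Linear continuum removal: fit a straight line to the reflectance spectrum
# and divide it out, leaving residual absorption features near 1.0.
wavelength = np.linspace(0.4, 1.0, 120)                    # microns, illustrative grid
spectrum = 0.9 + 0.2 * wavelength                          # sloped continuum
spectrum -= 0.03 * np.exp(-0.5 * ((wavelength - 0.7) / 0.02) ** 2)  # 0.7-micron band

slope, intercept = np.polyfit(wavelength, spectrum, 1)     # least-squares continuum
residual = spectrum / (slope * wavelength + intercept)     # features relative to 1.0
band_center = wavelength[np.argmin(residual)]              # recovers ~0.7 microns
```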
Tsatsishvili, Valeri; Burunat, Iballa; Cong, Fengyu; Toiviainen, Petri; Alluri, Vinoo; Ristaniemi, Tapani
2018-06-01
There has been growing interest in naturalistic neuroimaging experiments, which deepen our understanding of how the human brain processes and integrates incoming streams of multifaceted sensory information, as commonly occurs in the real world. Music is a good example of such a complex continuous phenomenon. In a few recent fMRI studies examining neural correlates of music in continuous listening settings, multiple perceptual attributes of the music stimulus were represented by a set of high-level features, produced as linear combinations of the acoustic descriptors computationally extracted from the stimulus audio. New method: fMRI data from a naturalistic music listening experiment were employed here. Kernel principal component analysis (KPCA) was applied to acoustic descriptors extracted from the stimulus audio to generate a set of nonlinear stimulus features. Subsequently, perceptual and neural correlates of the generated high-level features were examined. The generated features captured musical percepts that were hidden from the linear PCA features, namely Rhythmic Complexity and Event Synchronicity. Neural correlates of the new features revealed activations associated with the processing of complex rhythms, including auditory, motor, and frontal areas. Results were compared with the findings of a previously published study, which analyzed the same fMRI data but applied linear PCA for generating stimulus features. To enable comparison of the results, the methodology for finding stimulus-driven functional maps was adopted from the previous study. Exploiting nonlinear relationships among acoustic descriptors can lead to novel high-level stimulus features, which can in turn reveal new brain structures involved in music processing. Copyright © 2018 Elsevier B.V. All rights reserved.
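A sketch of the methodological contrast, assuming scikit-learn and random placeholders standing in for the acoustic descriptor matrix (the kernel choice and its parameter are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

# Linear PCA (the previous study) vs. kernel PCA (this study) applied to a
# matrix of acoustic descriptors (time frames x descriptors).
X = np.random.randn(500, 25)                     # placeholder descriptor matrix

linear_feats = PCA(n_components=5).fit_transform(X)
kpca_feats = KernelPCA(n_components=5, kernel="rbf", gamma=0.1).fit_transform(X)
# kpca_feats play the role of the nonlinear stimulus features that would then
# be correlated with the fMRI time series to find stimulus-driven maps.
```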
NASA Astrophysics Data System (ADS)
Pasqua, Claudio; Verdoya, Massimo
2014-05-01
The use of remote sensing techniques in the initial phase of geothermal surveys represents a very cost-effective tool, which can contribute to a successful exploration program. Remote sensing allows the analysis of large surfaces and can lead to a significant improvement of the identification of surface thermal anomalies, through the use of thermal infra red data (TIR), as well as of zones of widespread and recent faulting, which can reflect larger permeability of geological formations. Generally, the fractures analysis from remote sensing can be fundamental to clarify the structural setting of an area. In a regional volcanic framework, it can also help in defining the spatial and time evolution of the different volcanic apparatuses. This paper describes the main results of a remote sensing study, conducted in the Blawan-Ijen volcanic area (East Java), which is at present subject of geothermal exploration. This area is characterized by the presence of a 15 km wide caldera originated by a collapsed strato volcano. This event was followed by the emplacement of several peri-calderic and intra-calderic volcanoes, among which G. Raung, as testified by the frequent occurrence of shallow earthquakes and by H2S emission and sulfur deposition, and G. Kawah Ijen, occurring at the eastern rim of the caldera, are still active. The summit of G. Kawah Ijen volcano consists of two interlocking craters forming an E-W elongated depression filled up by a hyperacidic lake. Along the southern shore of the lake, a small rhyolitic dome occurs, which exhibits strong fumarolic activity with temperature of as much as 600 °C. We performed an analysis based on the combined interpretation of Landsat ETM+7, Aster and Synthetic Aperture Radar (SAR) images, focused on the identification of subsurface high permeability zones. The main trends of the linear features as derived from the fractures analysis, as well as their relation with the distribution of volcanic centres, were identified, singling out the variations of these trends as a function of the geographic location and age of volcanism. Moreover, the density of weighted linear features and nodal points were elaborated, in order to locate the zones where the effects of the fractures crossing could be more important. Two major belts of anomalously high density of linear fractures were identified: the first running E-W along the neo-volcanic axis and the second N-S in correspondence of the main structural features. The findings of this study, combined with the field observations about the position of thermal springs, allowed us to outline a zone that could be characterized by larger permeability and consequently could have hydrogeological and structural conditions suitable for the formation of an exploitable geothermal system.
Two-layer contractive encodings for learning stable nonlinear features.
Schulz, Hannes; Cho, Kyunghyun; Raiko, Tapani; Behnke, Sven
2015-04-01
Unsupervised learning of feature hierarchies is often a good strategy for initializing deep architectures for supervised learning. Most existing deep learning methods build these feature hierarchies layer by layer in a greedy fashion, using either auto-encoders or restricted Boltzmann machines. Both yield encoders that compute a linear projection of the input followed by a smooth thresholding function. In this work, we demonstrate that these encoders fail to find stable features when the required computation is in the exclusive-or class. To overcome this limitation, we propose a two-layer encoder that is less restricted in the type of features it can learn. The proposed encoder is regularized by an extension of previous work on contractive regularization. The two-layer contractive encoder potentially poses a more difficult optimization problem, so we further propose to linearly transform the hidden neurons of the encoder to make learning easier. We demonstrate the advantages of the two-layer encoders qualitatively on artificially constructed datasets as well as on commonly used benchmark datasets. We also conduct experiments on a semi-supervised learning task and show the benefits of the proposed two-layer encoders trained with the linear transformation of perceptrons. Copyright © 2014 Elsevier Ltd. All rights reserved.
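For intuition, the contractive regularizer in question penalizes the Frobenius norm of the encoder's Jacobian with respect to the input. A minimal NumPy sketch for a two-layer sigmoid encoder, following the chain rule (weights, shapes, and scaling are placeholders, not the authors' setup):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(x, W1, b1, W2, b2):
    """Squared Frobenius norm of the Jacobian d h2 / d x for one sample.

    Two-layer encoder: h1 = sigmoid(W1 x + b1), h2 = sigmoid(W2 h1 + b2).
    Chain rule: J = diag(h2 * (1 - h2)) W2 diag(h1 * (1 - h1)) W1.
    """
    h1 = sigmoid(W1 @ x + b1)
    h2 = sigmoid(W2 @ h1 + b2)
    J = ((h2 * (1 - h2))[:, None] * W2) @ ((h1 * (1 - h1))[:, None] * W1)
    return np.sum(J ** 2)

rng = np.random.default_rng(0)
d, n1, n2 = 10, 8, 4
pen = contractive_penalty(rng.normal(size=d),
                          rng.normal(size=(n1, d)), np.zeros(n1),
                          rng.normal(size=(n2, n1)), np.zeros(n2))
print(pen)  # added (with some weight) to the reconstruction loss in training
```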
NASA Astrophysics Data System (ADS)
Hosseini-Golgoo, S. M.; Bozorgi, H.; Saberkari, A.
2015-06-01
The performances of three neural networks (a multi-layer perceptron, a radial basis function network, and a neuro-fuzzy network trained with the local linear model tree algorithm) in modeling and extracting discriminative features from the response patterns of a temperature-modulated resistive gas sensor are quantitatively compared. For response pattern recording, a voltage staircase containing five steps, each with a 20 s plateau, is applied to the micro-heater of the sensor in the presence of each of 12 target gases at 11 concentration levels. In each test, the hidden layer neuron weights are taken as the discriminatory feature vector of the target gas. These vectors are then mapped to a 3D feature space using linear discriminant analysis. The discriminative information content of the feature vectors is determined by calculating Fisher's discriminant ratio, affording a quantitative comparison among the success rates achieved by the different neural network structures. The results demonstrate superior discrimination for features extracted from the local linear neuro-fuzzy and radial basis function networks, with recognition rates of 96.27% and 90.74%, respectively.
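Fisher's discriminant ratio used for the quantitative comparison can be computed per feature as the squared mean difference over the summed variances. A small sketch for the two-class case (multi-class gas data would need the generalized scatter-matrix form; the data here are stand-ins):

```python
import numpy as np

def fisher_discriminant_ratio(class_a, class_b):
    """Per-feature two-class FDR: (mu_a - mu_b)^2 / (var_a + var_b)."""
    return ((class_a.mean(axis=0) - class_b.mean(axis=0)) ** 2
            / (class_a.var(axis=0) + class_b.var(axis=0)))

rng = np.random.default_rng(1)
gas_a = rng.normal(0.0, 1.0, size=(50, 3))   # stand-in feature vectors
gas_b = rng.normal(1.5, 1.0, size=(50, 3))
print(fisher_discriminant_ratio(gas_a, gas_b))  # larger = more separable
```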
Optimal number of features as a function of sample size for various classification rules.
Hua, Jianping; Xiong, Zixiang; Lowey, James; Suh, Edward; Dougherty, Edward R
2005-04-15
Given the joint feature-label distribution, increasing the number of features always results in decreased classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically (but not always), for a fixed sample size, the error of a designed classifier decreases and then increases as the number of features grows. The potential downside of using too many features is most critical for small samples, which are commonplace for gene-expression-based classifiers for phenotype discrimination. For a fixed sample size and feature-label distribution, the issue is to find an optimal number of features. Since only in rare cases is there a known distribution of the error as a function of the number of features and sample size, this study employs simulation for various feature-label distributions and classification rules, across a wide range of sample and feature-set sizes. To achieve the desired end, finding the optimal number of features as a function of sample size, it employs massively parallel computation. Seven classifiers are treated: 3-nearest-neighbor, Gaussian kernel, linear support vector machine, polynomial support vector machine, perceptron, regular histogram, and linear discriminant analysis. Three Gaussian-based models are considered: linear, nonlinear, and bimodal. In addition, real patient data from a large breast-cancer study are considered. To mitigate the combinatorial search for optimal feature sets, and to model the situation in which subsets of genes are co-regulated with correlation internal to these subsets, we assume that the covariance matrix of the features is blocked, with each block corresponding to a group of correlated features. Altogether there are a large number of error surfaces for the many cases. These are provided in full on a companion website, which is meant to serve as a resource for those working with small-sample classification. For the companion website, please visit http://public.tgen.org/tamu/ofs/. Contact: e-dougherty@ee.tamu.edu
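The peaking behavior described above is easy to reproduce in miniature. A toy simulation (not the paper's massively parallel study): LDA trained on a small sample, with only the first five features informative, typically shows error falling and then rising as features are added:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_train, n_test, d_max = 30, 2000, 30

def sample(n, d):
    y = rng.integers(0, 2, n)
    shift = np.zeros(d)
    shift[:5] = 1.0                       # only 5 features carry signal
    X = rng.normal(size=(n, d)) + y[:, None] * shift
    return X, y

Xtr, ytr = sample(n_train, d_max)
Xte, yte = sample(n_test, d_max)
for d in (2, 5, 10, 20, 30):
    clf = LinearDiscriminantAnalysis().fit(Xtr[:, :d], ytr)
    print(d, 1 - clf.score(Xte[:, :d], yte))  # error typically dips, then rises
```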
Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li
2011-01-01
Background Support vector machines (SVMs) have been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM over linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, with respect to classification accuracy and computation time. Methodology/Principal Findings Six different voxel selection methods were employed to decide which voxels of the fMRI data would be included in SVM classifiers with linear and RBF kernels for classifying 4-category objects. The overall performances of the voxel selection and classification methods were then compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy at lower time cost. Conclusions/Significance The present work provides the first empirical results on linear and RBF SVM for the classification of fMRI data in combination with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if computational time matters more, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice. PMID:21359184
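A compact sketch of the same comparison on synthetic data, assuming scikit-learn (the feature counts and data are placeholders for voxel selection on fMRI; for rigor, selection should be nested inside the cross-validation folds):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                           random_state=0)
for k in (20, 200):                       # small vs large "voxel" sets
    Xk = SelectKBest(f_classif, k=k).fit_transform(X, y)
    for kernel in ("linear", "rbf"):
        acc = cross_val_score(SVC(kernel=kernel), Xk, y, cv=5).mean()
        print(f"k={k:4d} kernel={kernel:6s} acc={acc:.3f}")
```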
X-ray characterization of a multichannel smart-pixel array detector.
Ross, Steve; Haji-Sheikh, Michael; Huntington, Andrew; Kline, David; Lee, Adam; Li, Yuelin; Rhee, Jehyuk; Tarpley, Mary; Walko, Donald A; Westberg, Gregg; Williams, George; Zou, Haifeng; Landahl, Eric
2016-01-01
The Voxtel VX-798 is a prototype X-ray pixel array detector (PAD) featuring a silicon sensor photodiode array of 48 × 48 pixels, each 130 µm × 130 µm × 520 µm thick, coupled to a CMOS readout application-specific integrated circuit (ASIC). The first synchrotron X-ray characterization of this detector is presented, and its ability to selectively count individual X-rays within two independent arrival time windows, within a programmable energy range, and localized to a single pixel is demonstrated. During our first trial run at Argonne National Laboratory's Advanced Photon Source, the detector achieved a 60 ns gating time and 700 eV full-width-at-half-maximum energy resolution, in agreement with design parameters. Each pixel of the PAD holds two independent digital counters, and the discriminator for X-ray energy features both an upper and a lower threshold to window the energy of interest, discarding unwanted background. This smart-pixel technology allows energy and time resolution to be set and optimized in software. It is found that the detector linearity follows an isolated dead-time model, implying that megahertz count rates should be possible in each pixel. Measurement of the line and point spread functions showed negligible spatial blurring. When combined with the timing structure of the synchrotron storage ring, it is demonstrated that the area detector can perform both picosecond time-resolved X-ray diffraction and fluorescence spectroscopy measurements.
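The isolated dead-time behavior mentioned at the end corresponds to the standard non-paralyzable counting model. A hedged sketch (the 60 ns gate reported above is used here purely as an illustrative time constant, not necessarily the detector's true dead time):

```python
# Non-paralyzable (isolated) dead-time model: a detector busy for a time tau
# after each count measures m = n / (1 + n*tau) for a true rate n.
def measured_rate(n, tau):
    return n / (1.0 + n * tau)

def corrected_rate(m, tau):
    return m / (1.0 - m * tau)   # inverts the model, valid while m < 1/tau

tau = 60e-9                      # illustrative only: the 60 ns gate above
for n in (1e5, 1e6, 5e6):        # true counts per second in one pixel
    m = measured_rate(n, tau)
    print(f"true {n:.0e}/s -> measured {m:.3e}/s -> corrected {corrected_rate(m, tau):.3e}/s")
```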
NASA Astrophysics Data System (ADS)
Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang
2017-01-01
Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model; such features might therefore not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson's disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting quality of life. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected in the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.
Acoustic features of objects matched by an echolocating bottlenose dolphin.
Delong, Caroline M; Au, Whitlow W L; Lemonds, David W; Harley, Heidi E; Roitblat, Herbert L
2006-03-01
The focus of this study was to investigate how dolphins use acoustic features in returning echolocation signals to discriminate among objects. An echolocating dolphin performed a match-to-sample task with objects that varied in size, shape, material, and texture. After the task was completed, the features of the object echoes were measured (e.g., target strength, peak frequency). The dolphin's error patterns were examined in conjunction with the between-object variation in acoustic features to identify the acoustic features that the dolphin used to discriminate among the objects. The present study explored two hypotheses regarding the way dolphins use acoustic information in echoes: (1) use of a single feature, or (2) use of a linear combination of multiple features. The results suggested that dolphins do not use a single feature across all object sets or a linear combination of six echo features. Five features appeared to be important to the dolphin on four or more sets: the echo spectrum shape, the pattern of changes in target strength and number of highlights as a function of object orientation, and peak and center frequency. These data suggest that dolphins use multiple features and integrate information across echoes from a range of object orientations.
The Relationship between Running Velocity and the Energy Cost of Turning during Running
Hatamoto, Yoichi; Yamada, Yosuke; Sagayama, Hiroyuki; Higaki, Yasuki; Kiyonaga, Akira; Tanaka, Hiroaki
2014-01-01
Ball game players frequently perform changes of direction (CODs) while running; however, there has been little research on the physiological impact of CODs. In particular, the effect of running velocity on the physiological and energy demands of CODs while running has not been clearly determined. The purpose of this study was to examine the relationship between running velocity and the energy cost of a 180° COD and to quantify that energy cost. Nine male university students (aged 18-22 years) participated in the study. Shuttle trials were performed in which the subjects were required to run at different velocities (3, 4, 5, 6, 7, and 8 km/h). Each trial consisted of four stages with different turn frequencies (13, 18, 24, and 30 per minute), and each stage lasted 3 minutes. Oxygen consumption was measured during each trial. The energy cost of a COD significantly increased with running velocity (except between 7 and 8 km/h, p = 0.110). The relationship between running velocity and the energy cost of a 180° COD is best represented by a quadratic function (y = −0.012 + 0.066x + 0.008x², r = 0.994, p = 0.001), but is also well represented by a linear function (y = −0.228 + 0.152x, r = 0.991, p < 0.001). These data suggest that even low running velocities have relatively high physiological demands if the COD frequency increases, and that running velocity affects the physiological demands of CODs. The results also show that the energy expenditure of a COD can be evaluated using only two data points. These findings may be useful for estimating the energy expenditure of players during a match and for designing shuttle exercise training programs. PMID:24497913
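The two fitted models can be checked directly from the reported coefficients; the sketch below regenerates the curve from the published quadratic and refits it with NumPy (the energy values are derived from the reported fit, not raw data):

```python
import numpy as np

v = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])   # running velocities, km/h
e = -0.012 + 0.066 * v + 0.008 * v ** 2         # reported quadratic fit

print(np.polyfit(v, e, 2))   # recovers [0.008, 0.066, -0.012]
print(np.polyfit(v, e, 1))   # linear approximation over the same range
```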
Does a crouched leg posture enhance running stability and robustness?
Blum, Yvonne; Birn-Jeffery, Aleksandra; Daley, Monica A; Seyfarth, Andre
2011-07-21
Humans and birds both walk and run bipedally on compliant legs. However, differences in leg architecture may result in species-specific leg control strategies, as indicated by the observed gait patterns. In this work, control strategies for stable running are derived based on a conceptual model and compared with experimental data on running humans and pheasants (Phasianus colchicus). From a model perspective, running with compliant legs can be represented by the planar spring-mass model and stabilized by applying swing-leg control. Here, linear adaptations of three leg parameters (leg angle, leg length, and leg stiffness) during late swing phase are assumed. Experimentally observed kinematic control parameters (leg rotation and leg length change) of human and avian running are compared and interpreted within the context of this model, with specific focus on stability and robustness characteristics. The results suggest differences in the stability characteristics and applied control strategies of human and avian running, which may relate to differences in leg posture (straight leg posture in humans, crouched leg posture in birds). It has been suggested that crouched leg postures may improve stability. However, as the system of control strategies is overdetermined, our model findings suggest that a crouched leg posture does not necessarily enhance running stability. The model also predicts different leg stiffness adaptation rates for human and avian running, and suggests that a crouched avian leg posture, which is capable of both leg shortening and lengthening, allows for stable running without adjusting leg stiffness. In contrast, in straight-legged human running, the preparation of the ground contact seems to be more critical, requiring leg stiffness adjustment to remain stable. Finally, analysis of a simple robustness measure, the normalized maximum drop, suggests that the crouched leg posture may provide greater robustness to changes in terrain height. Copyright © 2011 Elsevier Ltd. All rights reserved.
Characteristic Lifetime of a Polarized Feature in the v=0, J=1-0 SiO Maser VY Canis Majoris
NASA Astrophysics Data System (ADS)
Rislow, Benjamin; McIntosh, G. C.
2008-05-01
A time-series cross-correlation analysis has been developed for calculating the characteristic lifetime of linearly polarized features in the spectra of silicon monoxide masers. Our observations of VY CMa in the v=0, J=1→0 transition from June 2003 to March 2006 revealed a highly linearly polarized feature at Vlsr = 18.5 km s⁻¹. Applying the cross-correlation to this feature gave a characteristic lifetime of 2800 days. This time is much longer than the v=1, J=2→1 transition's lifetime of 645 days and indicates that the two transitions occur under different physical conditions. This research was supported by the University of Minnesota and the University of Minnesota, Morris.
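As a rough illustration of a characteristic-lifetime estimate from monitoring data, one can autocorrelate the feature's intensity time series and read off the lag at which the correlation decays to 1/e. This is a generic sketch with made-up data; the authors' cross-correlation analysis may differ in detail:

```python
import numpy as np

def characteristic_lifetime(intensity, dt_days):
    """Lag (in days) at which the normalized autocorrelation drops below 1/e."""
    x = intensity - intensity.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]   # lags >= 0
    acf /= acf[0]
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] * dt_days if below.size else np.inf

rng = np.random.default_rng(2)
series = np.cumsum(rng.normal(size=200))   # stand-in monitoring time series
print(characteristic_lifetime(series, dt_days=30.0))
```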
Modeling rainfall-runoff relationship using multivariate GARCH model
NASA Astrophysics Data System (ADS)
Modarres, R.; Ouarda, T. B. M. J.
2013-08-01
Traditional hydrologic time series approaches are used for modeling, simulating, and forecasting the conditional mean of hydrologic variables but neglect their time-varying variance, the second-order moment. This paper introduces the multivariate Generalized Autoregressive Conditional Heteroscedasticity (MGARCH) modeling approach to show how the variance-covariance relationship between hydrologic variables varies in time. These approaches are also useful for estimating the dynamic conditional correlation between hydrologic variables. To illustrate the novelty and usefulness of MGARCH models in hydrology, two major types of MGARCH models, the bivariate diagonal VECH and the constant conditional correlation (CCC) model, are applied to show the variance-covariance structure and dynamic correlation in a rainfall-runoff process. The bivariate diagonal VECH-GARCH(1,1) and CCC-GARCH(1,1) models indicated both short-run and long-run persistence in the conditional variance-covariance matrix of the rainfall-runoff process. The conditional variance of rainfall appears to have stronger persistence, especially long-run persistence, than the conditional variance of streamflow, which shows a short-lived, drastically increasing pattern and stronger short-run persistence. The conditional covariance and conditional correlation coefficients have different features for each bivariate rainfall-runoff process, with different degrees of stationarity and dynamic nonlinearity. The spatial and temporal pattern of variance-covariance features may reflect the signature of different physical and hydrological variables, such as drainage area, topography, soil moisture, and groundwater fluctuations, on the strength, stationarity, and nonlinearity of the conditional variance-covariance of a rainfall-runoff process.
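At its core, each univariate component of such models follows the GARCH(1,1) variance recursion. A minimal sketch with made-up parameters (the paper's bivariate diagonal VECH and CCC structures add cross-terms on top of this):

```python
import numpy as np

def garch_variance(eps, omega=0.1, alpha=0.15, beta=0.8):
    """Conditional variance h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1}."""
    h = np.empty_like(eps, dtype=float)
    h[0] = eps.var()                     # simple initialization
    for t in range(1, eps.size):
        h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(3)
residuals = rng.normal(size=500)         # stand-in rainfall-model residuals
print(garch_variance(residuals)[:5])
```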
Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.
ERIC Educational Resources Information Center
Alexopoulos, John; Abraham, Paul
2001-01-01
Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
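For reference, the Gram-Schmidt process mentioned first in that list translates to a few lines in any language with linear-algebra support; a Python sketch:

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V (assumed linearly independent)."""
    Q = np.zeros_like(V, dtype=float)
    for j in range(V.shape[1]):
        q = V[:, j] - Q[:, :j] @ (Q[:, :j].T @ V[:, j])  # subtract projections
        Q[:, j] = q / np.linalg.norm(q)
    return Q

A = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
Q = gram_schmidt(A)
print(np.round(Q.T @ Q, 10))   # identity: columns are orthonormal
```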
NASA Technical Reports Server (NTRS)
1972-01-01
The IDAPS (Image Data Processing System) is a user-oriented, computer-based language and control system which provides a framework, or standard, for implementing image data processing applications; simplifies the set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation; streamlines the operation of the image processing facility; and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators that would otherwise overflow core storage.
Healthcare service quality perception in Japan.
Eleuch, Amira ep Koubaa
2011-01-01
This study aims to assess Japanese patients' healthcare service quality perceptions and to shed light on the most meaningful service features. It follows up a study published in IJHCQA Vol. 21 No. 7. Through a non-linear approach, the study relied on the scatter model to detect the importance of healthcare service features in forming overall quality judgments. Japanese patients perceive healthcare services through a linear compensatory process: features related to technical quality and staff behavior compensate for each other to determine service quality. A limitation of the study is the small sample size. Non-linear approaches could help researchers better understand patients' healthcare service quality perceptions. The study highlights the need for an evolution that enhances technical quality and medical practices in Japanese healthcare settings. The study relies on a non-linear approach to assess patients' overall quality perceptions in order to enrich knowledge. Furthermore, the research was conducted in Japan, where healthcare marketing studies are scarce owing to cultural and language barriers. Japanese culture and healthcare system characteristics are used to explain and interpret the results.
Seubert, Janina; Gregory, Kristen M.; Chamberland, Jessica; Dessirier, Jean-Marc; Lundström, Johan N.
2014-01-01
Scented cosmetic products are used across cultures as a way to favorably influence one's appearance. While crossmodal effects of odor valence on perceived attractiveness of facial features have been demonstrated experimentally, it is unknown whether they represent a phenomenon specific to affective processing. In this experiment, we presented odors in the context of a face battery with systematic feature manipulations during a speeded response task. Modulatory effects of linear increases of odor valence were investigated by juxtaposing subsequent memory-based ratings tasks – one predominantly affective (attractiveness) and a second, cognitive (age). The linear modulation pattern observed for attractiveness was consistent with additive effects of face and odor appraisal. Effects of odor valence on age perception were not linearly modulated and may be the result of cognitive interference. Affective and cognitive processing of faces thus appear to differ in their susceptibility to modulation by odors, likely as a result of privileged access of olfactory stimuli to affective brain networks. These results are critically discussed with respect to potential biases introduced by the preceding speeded response task. PMID:24874703
Point- and line-based transformation models for high resolution satellite image rectification
NASA Astrophysics Data System (ADS)
Abd Elrahman, Ahmed Mohamed Shaker
Rigorous mathematical models with the aid of satellite ephemeris data can present the relationship between the satellite image space and the object space. With government funded satellites, access to calibration and ephemeris data has allowed the development and use of these models. However, for commercial high-resolution satellites, which have been recently launched, these data are withheld from users, and therefore alternative empirical models should be used. In general, the existing empirical models are based on the use of control points and involve linking points in the image space and the corresponding points in the object space. But the lack of control points in some remote areas and the questionable accuracy of the identified discrete conjugate points provide a catalyst for the development of algorithms based on features other than control points. This research, concerned with image rectification and 3D geo-positioning determination using High-Resolution Satellite Imagery (HRSI), has two major objectives. First, the effects of satellite sensor characteristics, number of ground control points (GCPs), and terrain elevation variations on the performance of several point based empirical models are studied. Second, a new mathematical model, using only linear features as control features, or linear features with a minimum number of GCPs, is developed. To meet the first objective, several experiments for different satellites such as Ikonos, QuickBird, and IRS-1D have been conducted using different point based empirical models. Various data sets covering different terrain types are presented and results from representative sets of the experiments are shown and analyzed. The results demonstrate the effectiveness and the superiority of these models under certain conditions. From the results obtained, several alternatives to circumvent the effects of the satellite sensor characteristics, the number of GCPs, and the terrain elevation variations are introduced. To meet the second objective, a new model named the Line Based Transformation Model (LBTM) is developed for HRSI rectification. The model has the flexibility to either solely use linear features or use linear features and a number of control points to define the image transformation parameters. Unlike point features, which must be explicitly defined, linear features have the advantage that they can be implicitly defined by any segment along the line. (Abstract shortened by UMI.)
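A minimal sketch of the simplest point-based empirical model discussed above: estimating a 2D affine transform from ground control points by least squares (illustrative only; the thesis's LBTM generalizes this idea to use linear features as control):

```python
import numpy as np

rng = np.random.default_rng(9)
obj = rng.uniform(0, 1000, size=(8, 2))            # object-space GCPs (X, Y)
A_true = np.array([[0.9, 0.1], [-0.05, 1.1]])      # synthetic "truth"
img = obj @ A_true.T + np.array([50.0, -20.0])     # image-space coordinates

# Solve img = obj @ A.T + t via a design matrix with a constant column.
G = np.hstack([obj, np.ones((obj.shape[0], 1))])
params, *_ = np.linalg.lstsq(G, img, rcond=None)
print(params.T)   # recovers [[0.9, 0.1, 50.0], [-0.05, 1.1, -20.0]]
```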
Real-time machine vision system using FPGA and soft-core processor
NASA Astrophysics Data System (ADS)
Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad
2012-06-01
This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling, and feature extraction modules were modeled at the Register Transfer (RT) level and synthesized for implementation on field-programmable gate arrays (FPGAs). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at a rate of 75 frames per second. The image component labeling and feature extraction modules ran in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization, and power consumption of the designed system. The proposed FPGA-based machine vision system has a high frame rate, low latency, and much lower power consumption than commercially available smart camera solutions.
MindDigger: Feature Identification and Opinion Association for Chinese Movie Reviews
NASA Astrophysics Data System (ADS)
Zhao, Lili; Li, Chunping
In this paper, we present a prototype system called MindDigger, which can be used to analyze the opinions in Chinese movie reviews. Different from previous research that employed techniques on product reviews, we focus on Chinese movie reviews, in which opinions are expressed in subtle and varied ways. The system designed in this work aims to extract opinion expressions and assign them to the corresponding features. The core tasks include feature and opinion extraction, and feature-opinion association. To deal with Chinese effectively, several novel approaches based on syntactic analysis are proposed. Experimental runs show that the performance is satisfactory.
Random discrete linear canonical transform.
Wei, Deyun; Wang, Ruikui; Li, Yuan-Min
2016-12-01
Linear canonical transforms (LCTs) are a family of integral transforms with wide applications in optical, acoustical, electromagnetic, and other wave propagation problems. In this paper, we propose the random discrete linear canonical transform (RDLCT) by randomizing the kernel transform matrix of the discrete linear canonical transform (DLCT). The RDLCT inherits excellent mathematical properties from the DLCT along with some notable features of its own. It has a greater degree of randomness because the randomization applies to both eigenvectors and eigenvalues. Numerical simulations demonstrate that the RDLCT has the important feature that the magnitude and phase of its output are both random. As an important application, the RDLCT can be used for image encryption. The simulation results demonstrate that the proposed encryption method is a security-enhanced image encryption scheme.
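The randomization idea can be illustrated generically: pick a random unitary eigenvector basis and random unit-modulus eigenvalues, and the resulting transform scrambles both magnitude and phase while remaining exactly invertible. This sketch is not the DLCT kernel itself, only the eigen-randomization pattern:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 64
# Random unitary eigenvector basis via QR of a complex Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
eigvals = np.exp(2j * np.pi * rng.random(N))     # random phases on unit circle
T = Q @ np.diag(eigvals) @ Q.conj().T            # randomized unitary transform

signal = rng.normal(size=N)
encrypted = T @ signal                           # magnitude and phase both random
decrypted = T.conj().T @ encrypted               # inverse = conjugate transpose
print(np.allclose(decrypted.real, signal))       # True
```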
A Method for Generating Reduced Order Linear Models of Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1997-01-01
For the modeling of high speed propulsion systems, there are at least two major categories of models. One is based on computational fluid dynamics (CFD), and the other on the design and analysis of control systems. CFD is accurate and gives a complete view of the internal flow field, but it typically has many states and runs much slower than real-time. Models based on control design typically run near real-time but do not always capture the fundamental dynamics. To provide improved control models, methods are needed that are based on CFD techniques but yield models small enough for control analysis and design.
Constraints on running vacuum model with H(z) and fσ₈
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Chao-Qiang; Lee, Chung-Chi; Yin, Lu, E-mail: geng@phys.nthu.edu.tw, E-mail: lee.chungchi16@gmail.com, E-mail: yinlumail@foxmail.com
We examine the running vacuum model with Λ(H) = 3νH² + Λ₀, where ν is the model parameter and Λ₀ is the cosmological constant. From the data of the cosmic microwave background radiation, weak lensing, and baryon acoustic oscillation, along with the time-dependent Hubble parameter H(z) and weighted linear growth f(z)σ₈(z) measurements, we find that ν = (1.37 +0.72/−0.95) × 10⁻⁴, with the best-fitted χ² value slightly smaller than that in the ΛCDM model.
Arch index and running biomechanics in children aged 10-14 years.
Hollander, Karsten; Stebbins, Julie; Albertsen, Inke Marie; Hamacher, Daniel; Babin, Kornelia; Hacke, Claudia; Zech, Astrid
2018-03-01
While altered foot arch characteristics (high or low) are frequently assumed to influence lower limb biomechanics and are suspected to be a contributing factor for injuries, the association between arch characteristics and lower limb running biomechanics in children is unclear. Therefore, the aim of this study was to investigate the relationship between a dynamically measured arch index and running biomechanics in healthy children. One hundred and one children aged 10-14 years were included in this study and underwent a biomechanical investigation. Plantar pressure distribution (Novel, Emed) was used to determine the dynamic arch index, and 3D motion capture (Vicon) was used to measure running biomechanics. Linear mixed models were established to determine the association between the dynamic arch index and foot strike patterns, running kinematics, kinetics, and temporal-spatial outcomes. No association was found between the dynamic arch index and the rate of rearfoot strikes (p = 0.072). Of all secondary outcomes, only the foot progression angle was associated with the dynamic arch index (p = 0.032), with greater external rotation in lower-arched children. Overall, we found only a few associations between arch characteristics and running biomechanics in children. However, altered foot arch characteristics remain of clinical interest. Future studies should focus on detailed foot biomechanics and include clinically diagnosed high- and low-arched children. Copyright © 2018 Elsevier B.V. All rights reserved.
Mask Matching for Linear Feature Detection.
1987-01-01
decide which matched masks are part of a linear feature by simple thresholding of the confidence measures. However, it is shown in a companion report...
Comparison of Laminar and Linear Eddy Model Closures for Combustion Instability Simulations
2015-07-01
Unstable liquid rocket engines can produce highly complex dynamic flowfields with features such as rapid changes in temperature and... applicability. In the present study, the linear eddy model (LEM) is applied to an unstable single-element liquid rocket engine to assess its performance and to...
Linear Polarization Properties of Parsec-Scale AGN Jets
NASA Astrophysics Data System (ADS)
Pushkarev, Alexander; Kovalev, Yuri; Lister, Matthew; Savolainen, Tuomas; Aller, Margo; Aller, Hugh; Hodge, Mary
2017-12-01
We used 15 GHz multi-epoch Very Long Baseline Array (VLBA) polarization-sensitive observations of 484 sources within the time interval 1996-2016 from the MOJAVE program, and also from the NRAO data archive. We have analyzed the linear polarization characteristics of the compact core features and regions downstream, and their changes along and across the parsec-scale active galactic nuclei (AGN) jets. We detected a significant increase of fractional polarization with distance from the radio core along the jet as well as towards the jet edges. Compared to quasars, BL Lacs have a higher degree of polarization and exhibit more stable electric vector position angles (EVPAs) in their core features and a better alignment of the EVPAs with the local jet direction. The latter is accompanied by a higher degree of linear polarization, suggesting that compact bright jet features might be strong transverse shocks, which enhance magnetic-field regularity by compression.
Influence of multi-depositions on the final properties of thermally evaporated TlBr films
NASA Astrophysics Data System (ADS)
Destefano, N.; Mulato, M.
2010-12-01
Thallium bromide is a promising candidate material for photodetectors in medical imaging systems. This work investigates the structural, optical, and electrical properties of thermally evaporated TlBr films, with the number of depositions as the main fabrication parameter. Sequential runs are used to increase the thickness of the films, as necessary for technological applications. We deposited films using one to four runs, leading to a maximum thickness of about 50 μm. Crystallographic and morphological changes were observed with varying numbers of deposition runs. Nevertheless, the optical gap and the electrical resistivity in the dark remained constant at about 2.85 eV and 10⁹ Ω cm, respectively. Thicker samples have a larger ratio of photo-to-dark signal under medical X-ray exposure, with a larger linear region as a function of applied voltage. The results are discussed with a view toward future technological applications in medical imaging.
Hydrocarbon polymeric binder for advanced solid propellant
NASA Technical Reports Server (NTRS)
Potts, J. E. (Editor)
1972-01-01
A series of DEAB-initiated isoprene polymerizations were run in the 5-gallon stirred autoclave reactor. Polymerization run parameters such as initiator concentration and feed rate were correlated with molecular weight to provide a basis for molecular weight control in future runs. Synthetic methods were developed for the preparation of n-1,3-alkadienes. By these methods, 1,3-nonadiene was polymerized using the DEAB initiator to give an ester-telechelic polynonadiene. This was subsequently hydrogenated with a copper chromite catalyst to give a hydroxyl-terminated saturated liquid hydrocarbon prepolymer having greatly improved viscosity characteristics and a Tg 18 degrees lower than that of the hydrogenated polyisoprenes. The hydroxyl-telechelic saturated polymers prepared by the hydrogenolysis of ester-telechelic polyisoprene were reacted with diisocyanates under conditions favoring linear chain extension; gel permeation chromatography was used to monitor this condensation polymerization. Fractions having molecular weights above one million were produced.
Linear response approach to active Brownian particles in time-varying activity fields
NASA Astrophysics Data System (ADS)
Merlitz, Holger; Vuijk, Hidde D.; Brader, Joseph; Sharma, Abhinav; Sommer, Jens-Uwe
2018-05-01
In a theoretical and simulation study, active Brownian particles (ABPs) in three-dimensional bulk systems are exposed to time-varying sinusoidal activity waves that run through the system. A linear response (Green-Kubo) formalism is applied to derive fully analytical expressions for the torque-free polarization profiles of non-interacting particles. The activity waves induce fluxes that strongly depend on the particle size and may be employed to de-mix mixtures of ABPs or to drive the particles into selected areas of the system. Three-dimensional Langevin dynamics simulations are carried out to verify the accuracy of the linear response formalism, which is shown to work best when the particles are small (i.e., highly Brownian) or operating at low activity levels.
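A bare-bones Langevin simulation of non-interacting ABPs in a traveling sinusoidal activity field, in the spirit of the verification runs above (all parameters are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(8)
n, steps, dt = 1000, 5000, 1e-3
D, Dr = 1.0, 1.0                         # translational / rotational diffusion
v0, k, omega = 5.0, 2*np.pi, 2*np.pi     # wave amplitude, wavenumber, frequency

r = rng.uniform(0, 1, size=(n, 3))                       # positions
e = rng.normal(size=(n, 3))                              # orientations
e /= np.linalg.norm(e, axis=1, keepdims=True)

for step in range(steps):
    t = step * dt
    # Traveling activity wave along x, modulating the self-propulsion speed.
    activity = v0 * (1 + np.sin(k * r[:, 0] - omega * t)) / 2
    r += activity[:, None] * e * dt + np.sqrt(2*D*dt) * rng.normal(size=(n, 3))
    # Rotational diffusion: small random rotation of each unit orientation.
    e += np.sqrt(2*Dr*dt) * np.cross(rng.normal(size=(n, 3)), e)
    e /= np.linalg.norm(e, axis=1, keepdims=True)

# Mean orientation along the wave direction, cf. the polarization profiles above.
print(e[:, 0].mean())
```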
Houshyarifar, Vahid; Chehel Amirani, Mehdi
2016-08-12
In this paper we present a method to predict Sudden Cardiac Arrest (SCA) using higher-order spectral (HOS) and linear (time-domain) features extracted from the heart rate variability (HRV) signal. Predicting the occurrence of SCA is important in order to reduce the risk of Sudden Cardiac Death (SCD). This work attempts the challenge of prediction five minutes before SCA onset. The method consists of four steps: pre-processing, feature extraction, feature reduction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and the HRV signal is extracted. In the second step, bispectrum features of the HRV signal and time-domain features are obtained: six features are extracted from the bispectrum and two from the time domain. In the next step, these features are reduced to one feature by the linear discriminant analysis (LDA) technique. Finally, KNN and support vector machine-based classifiers are used to classify the HRV signals. We used two databases: the MIT/BIH Sudden Cardiac Death (SCD) Database and the PhysioBank Normal Sinus Rhythm (NSR) Database. We achieved prediction of SCD occurrence six minutes before the SCA with an accuracy over 91%.
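The last two stages (LDA reduction to a single feature, then classification) map directly onto standard tooling. A skeleton with placeholder features, assuming scikit-learn (the bispectrum extraction is omitted, so the accuracy printed here is chance level, not the reported 91%):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 8))       # 6 bispectral + 2 time-domain features
y = rng.integers(0, 2, 120)         # SCD vs normal sinus rhythm labels

# LDA reduces the 8 features to one discriminant, then KNN classifies.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                    KNeighborsClassifier(n_neighbors=3))
print(cross_val_score(clf, X, y, cv=5).mean())
```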
Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer
2013-10-01
The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs, but recording and analysis of this signal presents a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features, and time-frequency features. The latter were obtained using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of the HRV signals were computed using time-frequency methods. ANOVA and t-tests were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. ANOVA followed by post hoc Bonferroni tests was used for individual feature assessment; most features were beneficial for sleep staging. A t-test was used to compare the means of the extracted features in 5- and 0.5-min HRV segments; the extracted feature means were statistically similar for only a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than the others. There was no sizable difference in the separability of linear features between 5- and 0.5-min HRV segments, but the separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. The best result was obtained with features from 5-min HRV segments classified by the LD classifier. A combination of linear and nonlinear features from HRV signals is effective for automatic sleep staging. Moreover, time-frequency features are more informative than the others. In addition, the separability measure and classification results showed that HRV signal features, especially nonlinear features, extracted from 5-min segments are more discriminative than those from 0.5-min segments for automatic sleep staging. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Menu-Driven Solver Of Linear-Programming Problems
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Ferencz, D.
1992-01-01
The program assists the inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) is a full-featured LP analysis program: it solves plain linear-programming problems as well as more complicated mixed-integer and pure-integer programs, and it also contains an efficient technique for the solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. The packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
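ALPS itself is written in APL2/PC, but the class of problems it solves can be illustrated with SciPy's linprog; a small example:

```python
from scipy.optimize import linprog

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0
res = linprog(c=[-3, -2],                 # linprog minimizes, so negate
              A_ub=[[1, 1], [1, 3]], b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                    # optimum at x=4, y=0, objective 12
```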
Hancock, Stephanie D; Grant, Virginia L
2009-12-01
Hyperactivity of the hypothalamic-pituitary-adrenal (HPA) axis is a marked feature of anorexia nervosa. Using a modified version of the activity-based animal model of anorexia nervosa, we examine whether factors known to affect HPA axis activity influence the development of activity-based anorexia (ABA). Male and female rats were subjected to maternal separation or handling procedures during the first two postnatal weeks and tested in a mild version of the ABA paradigm, comprised of 2-hr daily running wheel access followed by 1-hr food access, either in adolescence or adulthood. Compared to handled females, maternally separated females demonstrated greater increases in wheel running and a more pronounced running-induced suppression of food intake during adolescence, but not in adulthood. In contrast, it was only in adulthood that wheel running produced more prolonged anorexic effects in maternally separated than in handled males. These findings highlight the interplay between early postnatal treatment, sex of the animal, and developmental age on running, food intake, and rate of body weight loss in a mild version of the ABA paradigm.
Entrainment range of nonidentical circadian oscillators by a light-dark cycle
NASA Astrophysics Data System (ADS)
Gu, Changgui; Xu, Jinshan; Liu, Zonghua; Rohling, Jos H. T.
2013-08-01
The suprachiasmatic nucleus (SCN) is the principal circadian clock in mammals, controlling physiological and behavioral daily rhythms. The SCN has two main features: it maintains a rhythmic cycle of approximately 24 h in the absence of a light-dark cycle (the free-running period), and it can entrain to external light-dark cycles. Both the free-running period and the range of entrainment vary from one species to another. To understand this phenomenon, we investigated the diversity of free-running periods through the distribution of coupling strengths in our previous work [Phys. Rev. E 80, 030904(R) (2009)]. In this paper we numerically found that the dispersion of intrinsic periods among SCN neurons influences the entrainment range of the SCN but has little influence on the free-running period under constant darkness. This indicates that the dispersion of coupling strengths determines the diversity in free-running periods, while the dispersion of intrinsic periods determines the diversity in the entrainment range. A theoretical analysis based on two coupled neurons is presented to explain the results of the numerical simulations.
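A toy two-oscillator version of the entrainment question (illustrative parameters only, not the authors' SCN model): phase oscillators with different intrinsic periods, weakly coupled and driven by a 24 h light-dark forcing, lock onto the forcing only when the period dispersion is small enough:

```python
import numpy as np

def entrained(tau1, tau2, K=0.02, F=0.03, T=24.0, days=100, dt=0.02):
    """Two mutually coupled phase oscillators driven by a light-dark cycle of
    period T; returns True if both lock onto T (toy model)."""
    w = np.array([2*np.pi/tau1, 2*np.pi/tau2])
    W = 2*np.pi/T
    n = int(days * 24 / dt)
    p, t, p_mid = np.zeros(2), 0.0, None
    for i in range(n):
        dp = w + K*np.sin(p[::-1] - p) + F*np.sin(W*t - p)
        p, t = p + dp*dt, t + dt
        if i == n // 2:
            p_mid = p.copy()
    period = 2*np.pi * (n - n//2) * dt / (p - p_mid)  # mean period, 2nd half
    return bool(np.allclose(period, T, rtol=0.01))

print(entrained(23.5, 24.5))  # small period dispersion: entrains -> True
print(entrained(18.0, 30.0))  # wide dispersion: outside the range -> False
```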
The dance of the honeybee: how do honeybees dance to transfer food information effectively?
Okada, R; Ikeno, H; Sasayama, Noriko; Aonuma, H; Kurabayashi, D; Ito, E
2008-01-01
A honeybee informs her nestmates of the location of a flower she has visited by a unique behavior called the "waggle dance." On a vertical comb, the direction of the waggle run relative to gravity indicates the direction to the food source relative to the sun in the field, and the duration of the waggle run indicates the distance to the food source. To determine the detailed biological features of the waggle dance, we observed worker honeybee behavior in the field. Video analysis showed that a bee does not dance in a single or random place in the hive but waggles several times in one place and then several times in another. It also showed that the information in the waggle dance contains a substantial margin of error: the angle and duration of waggle runs varied from run to run, within ranges of ±15° and ±15%, respectively, even in a series of waggle dances by a single individual. We also found that most dance followers left the dancer after one or two sessions of listening.
NASA Astrophysics Data System (ADS)
Pembroke, A. D.; Colbert, J. A.
2015-12-01
The Community Coordinated Modeling Center (CCMC) provides hosting for many of the simulations used by the space weather community of scientists, educators, and forecasters. CCMC users may submit model runs through the Runs on Request system, which produces static visualizations of model output in the browser, while further analysis may be performed offline via Kameleon, CCMC's cross-language access and interpolation library. Offline analysis may be suitable for power users, but storage and coding requirements present a barrier to entry for non-experts. Moreover, the lack of a consistent framework for analysis hinders reproducibility of scientific findings. To that end, we have developed Kameleon Live, a cloud-based interactive analysis and visualization platform. Kameleon Live allows users to create scientific studies built around selected runs from the Runs on Request database, perform analysis on those runs, collaborate with other users, and disseminate their findings among the space weather community. In addition to showcasing these novel collaborative analysis features, we invite feedback from CCMC users as we seek to advance and improve the new platform.
Teaching Linear Algebra: Must the Fog Always Roll In?
ERIC Educational Resources Information Center
Carlson, David
1993-01-01
Proposes methods to teach the more difficult concepts of linear algebra. Examines features of the Linear Algebra Curriculum Study Group Core Syllabus, and presents problems from the core syllabus that utilize the mathematical process skills of making conjectures, proving the results, and communicating the results to colleagues. Presents five…
Crustal Fractures of Ophir Planum
2002-05-23
This NASA Mars Odyssey image covers a tract of plateau territory called Ophir Planum. The most obvious features in this scene are the fractures, ranging from 1 to 5 km wide, running from the upper left to the lower right.
Correcting for batch effects in case-control microbiome studies
Gibbons, Sean M.; Duvallet, Claire
2018-01-01
High-throughput data generation platforms, such as mass spectrometry, microarrays, and second-generation sequencing, are susceptible to batch effects due to run-to-run variation in reagents, equipment, protocols, or personnel. Currently, batch correction methods are not commonly applied to microbiome sequencing datasets. In this paper, we compare different batch-correction methods applied to microbiome case-control studies. We introduce a model-free normalization procedure in which features (i.e., bacterial taxa) in case samples are converted to percentiles of the equivalent features in control samples within a study, prior to pooling data across studies. We examine how this percentile-normalization method compares to traditional meta-analysis methods for combining independent p-values, and to limma and ComBat, widely used batch-correction models developed for RNA microarray data. Overall, we show that percentile-normalization is a simple, non-parametric approach for correcting batch effects and improving sensitivity in case-control meta-analyses. PMID:29684016
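The percentile-normalization idea is simple enough to sketch directly, assuming SciPy (a schematic of the described procedure, not the authors' released code):

```python
import numpy as np
from scipy.stats import percentileofscore

def percentile_normalize(cases, controls):
    """cases, controls: samples x taxa abundance matrices from one study.
    Each case value becomes its percentile within that study's controls."""
    out = np.empty(cases.shape, dtype=float)
    for j in range(cases.shape[1]):
        out[:, j] = [percentileofscore(controls[:, j], v, kind="mean")
                     for v in cases[:, j]]
    return out  # percentiles are comparable across studies, so safe to pool

rng = np.random.default_rng(6)
controls = rng.lognormal(size=(40, 10))
cases = rng.lognormal(mean=0.3, size=(30, 10))
print(percentile_normalize(cases, controls)[:2])
```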
Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy
2017-10-06
The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to GitHub, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core, and in 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
Quantifying and visualizing variations in sets of images using continuous linear optimal transport
NASA Astrophysics Data System (ADS)
Kolouri, Soheil; Rohde, Gustavo K.
2014-03-01
Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes of variations, in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, that otherwise cannot be done by existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to the normal cells.
NASA Astrophysics Data System (ADS)
Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.
2012-01-01
The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high-dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimension with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed which are more competitive than classical LDA in terms of both classification accuracy and computational cost. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. In a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms (uncorrelated LDA, orthogonal LDA, and orthogonal fuzzy neighborhood discriminant analysis) produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, using a linear discriminant classifier.
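Classical LDA feature projection, the baseline in this comparison, is straightforward with scikit-learn; the extended variants (uncorrelated LDA, orthogonal LDA, OFNDA) are not in standard libraries, so this sketch covers only the baseline on stand-in EMG features:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n_classes = 6                                   # e.g., six upper-limb motions
y = np.repeat(np.arange(n_classes), 50)
X = rng.normal(size=(y.size, 28)) + 0.4 * y[:, None]   # stand-in EMG features

# Project to at most (n_classes - 1) dimensions, then classify with a linear
# discriminant classifier, mirroring the projection + classification setup.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=n_classes - 1),
                    LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())
```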
Thermodynamic output of single-atom quantum optical amplifiers and their phase-space fingerprint
NASA Astrophysics Data System (ADS)
Perl, Y.; Band, Y. B.; Boukobza, E.
2017-05-01
We analyze a resonant single-atom two-photon quantum optical amplifier both dynamically and thermodynamically. A detailed thermodynamic analysis shows that the nonlinear amplifier is thermodynamically equivalent to the linear amplifier. However, by calculating the Wigner quasiprobability distribution for various initial field states, we show that unique quantum features in optical phase space, absent in the linear amplifier, are retained for extended times, despite the fact that dissipation tends to wash out dynamical features observed at early evolution times. These features are related to the discrete nature of the two-photon matter-field interaction and fingerprint the initial field state at thermodynamic times.
Experienced and Novice Teachers' Concepts of Spatial Scale
ERIC Educational Resources Information Center
Jones, M. Gail; Tretter, Thomas; Taylor, Amy; Oppewal, Tom
2008-01-01
Scale is one of the thematic threads that runs through nearly all of the sciences and is considered one of the major prevailing ideas of science. This study explored novice and experienced teachers' concepts of spatial scale with a focus on linear sizes from very small (nanoscale) to very large (cosmic scale). Novice teachers included…
The linearly scaling 3D fragment method for large scale electronic structure calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Zhengji; Meza, Juan; Lee, Byounghak
2009-07-28
The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callister, Stephen J.; Barry, Richard C.; Adkins, Joshua N.
2006-02-01
Central tendency, linear regression, locally weighted regression, and quantile techniques were investigated for normalization of peptide abundance measurements obtained from high-throughput liquid chromatography-Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR MS). Arbitrary abundances of peptides were obtained from three sample sets, including a standard protein sample, two Deinococcus radiodurans samples taken from different growth phases, and two mouse striatum samples from control and methamphetamine-stressed mice (strain C57BL/6). The selected normalization techniques were evaluated in both the absence and presence of biological variability by estimating extraneous variability prior to and following normalization. Prior to normalization, replicate runs from each sample set were observed to be statistically different, while following normalization replicate runs were no longer statistically different. Although all techniques reduced systematic bias, assigned ranks among the techniques revealed significant trends. For most LC-FTICR MS analyses, linear regression normalization ranked either first or second among the four techniques, suggesting that this technique was more generally suitable for reducing systematic biases.
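The top-ranked technique above removes a run's systematic bias by regressing its abundances against a reference run. A minimal sketch of that arithmetic, with synthetic (log-scale) abundances standing in for real LC-FTICR MS data:

```python
# Sketch of linear-regression normalization: systematic bias in one run is
# estimated by regressing its (log) peptide abundances on a reference run
# over shared peptides, then removed by inverting the fitted line.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
ref = rng.normal(20, 2, size=500)                  # log2 abundances, reference run
rep = 1.1 * ref - 1.5 + rng.normal(0, 0.3, 500)    # replicate with systematic bias

fit = LinearRegression().fit(ref.reshape(-1, 1), rep)
slope, intercept = fit.coef_[0], fit.intercept_

rep_norm = (rep - intercept) / slope               # map replicate back to reference scale
print(np.mean(rep - ref), np.mean(rep_norm - ref)) # mean bias before vs. after
```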
Bolted joints in graphite-epoxy composites
NASA Technical Reports Server (NTRS)
Hart-Smith, L. J.
1976-01-01
All-graphite/epoxy laminates and hybrid graphite-glass/epoxy laminates were tested. The tests encompassed a range of geometries for each laminate pattern to cover the three basic failure modes: net-section tension failure through the bolt hole, bearing, and shearout. Static tensile and compressive loads were applied. A constant bolt diameter of 6.35 mm (0.25 in.) was used in the tests. The interaction of stress concentrations associated with multi-row bolted joints was investigated by testing single- and double-row bolted joints and open-hole specimens in tension. For tension loading, a linear interaction was found to exist between the bearing stress reacted at a given bolt hole and the remaining tension stress running past that hole to be reacted elsewhere. The interaction under compressive loading was found to be non-linear. Comparative tests were run using single-lap bolted joints and double-lap joints with pin connection; both of these joint types exhibited lower strengths than the corresponding double-lap bolted joints. The analysis methods developed here for single-bolt joints are shown to be capable of predicting the behavior of multi-row joints.
Vectorization of transport and diffusion computations on the CDC Cyber 205
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abu-Shumays, I.K.
1986-01-01
The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
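Odd-even cyclic reduction is the vectorizable half of the winning algorithm above: each level eliminates all odd-indexed unknowns simultaneously, halving the system. The sketch below is a plain NumPy cyclic reduction, not the paper's combined algorithm (which adds a modified Cholesky factorization), and is verified against a dense solve.

```python
# Sketch of odd-even cyclic reduction for a tridiagonal system A x = d.
# a[i], b[i], c[i] are the sub-, main and super-diagonal entries of row i
# (a[0] and c[-1] are ignored). Each level folds rows i-1 and i+1 into
# every even row i at once; those independent row operations vectorize.
import numpy as np

def cyclic_reduction(a, b, c, d):
    n = len(b)
    if n == 1:
        return d / b
    # Pad with inert rows (b = 1) so boundary rows need no special cases.
    ap = np.concatenate(([0.0], a, [0.0])); ap[1] = 0.0
    bp = np.concatenate(([1.0], b, [1.0]))
    cp = np.concatenate(([0.0], c, [0.0])); cp[n] = 0.0
    dp = np.concatenate(([0.0], d, [0.0]))
    ev = np.arange(1, n + 1, 2)                 # padded indices of even rows
    alpha = ap[ev] / bp[ev - 1]
    gamma = cp[ev] / bp[ev + 1]
    a2 = -alpha * ap[ev - 1]                    # reduced half-size system
    b2 = bp[ev] - alpha * cp[ev - 1] - gamma * ap[ev + 1]
    c2 = -gamma * cp[ev + 1]
    d2 = dp[ev] - alpha * dp[ev - 1] - gamma * dp[ev + 1]
    xp = np.zeros(n + 2)
    xp[1:n + 1:2] = cyclic_reduction(a2, b2, c2, d2)
    od = np.arange(2, n + 1, 2)                 # back-substitute odd rows
    xp[od] = (dp[od] - ap[od] * xp[od - 1] - cp[od] * xp[od + 1]) / bp[od]
    return xp[1:n + 1]

rng = np.random.default_rng(3)
n = 17
a, c = rng.normal(size=n), rng.normal(size=n)
b = 4.0 + rng.random(n)                         # diagonally dominant
d = rng.normal(size=n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(A, d)))
```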
Non-linear structure formation in the `Running FLRW' cosmological model
NASA Astrophysics Data System (ADS)
Bibiano, Antonio; Croton, Darren J.
2016-07-01
We present a suite of cosmological N-body simulations describing the `Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Lambda cold dark matter (ΛCDM) with a time-evolving vacuum density, Λ(z), and a time-evolving Newton's gravitational coupling, G(z). In this paper, we review the model and introduce the analytical treatment needed to adapt a reference N-body code. Our resulting simulations represent the first realization of the full growth history of structure in the R-FLRW cosmology into the non-linear regime, and our normalization choice makes them fully consistent with the latest cosmic microwave background data. The post-processing data products also allow, for the first time, an analysis of the properties of the halo and sub-halo populations. We explore the degeneracies of many statistical observables and discuss the steps needed to break them. Furthermore, we provide a quantitative description of the deviations of R-FLRW from ΛCDM, which could be readily exploited by future cosmological observations to test and further constrain the model.
Development of a railway wagon-track interaction model: Case studies on excited tracks
NASA Astrophysics Data System (ADS)
Xu, Lei; Chen, Xianmai; Li, Xuwei; He, Xianglin
2018-02-01
In this paper, a theoretical framework for modeling railway wagon-ballast track interactions is presented, in which the dynamic equations of motion of the wagon-track system are constructed by coupling the linear and nonlinear dynamic characteristics of the system components. For the linear components, the energy-variational principle is used directly to derive their dynamic matrices, while for the nonlinear components, the dynamic equilibrium method is applied to deduce the load vectors. On this basis, a novel railway wagon-ballast track interaction model is developed and validated by comparison with experimental data measured on a heavy-haul railway and with another advanced model. Using this model, the critical speed of instability and the limits and localizations of track irregularities relevant to derailment accidents are investigated by integrating the dynamic simulation model, a track irregularity probabilistic model, and time-frequency analysis methods. The proposed approaches can provide crucial information for guaranteeing the running safety and stability of the wagon-track system across various track geometries and running speeds.
Elhaj, Fatin A; Salim, Naomie; Harris, Arief R; Swee, Tan Tian; Ahmed, Taqwa
2016-04-01
Arrhythmia is a cardiac condition caused by abnormal electrical activity of the heart, and an electrocardiogram (ECG) is the non-invasive method used to detect arrhythmias or heart abnormalities. Due to the presence of noise, the non-stationary nature of the ECG signal (i.e. the changing morphology of the ECG signal with respect to time) and the irregularity of the heartbeat, physicians face difficulties in the diagnosis of arrhythmias. The computer-aided analysis of ECG results assists physicians to detect cardiovascular diseases. The development of many existing arrhythmia systems has depended on findings from linear experiments on ECG data, which achieve high performance on noise-free data. However, nonlinear methods characterize the ECG signal more effectively, extract hidden information from it, and achieve good performance under noisy conditions. This paper investigates the representation ability of linear and nonlinear features and proposes a combination of such features in order to improve the classification of ECG data. In this study, five types of beat classes of arrhythmia as recommended by the Association for the Advancement of Medical Instrumentation are analyzed: non-ectopic beats (N), supra-ventricular ectopic beats (S), ventricular ectopic beats (V), fusion beats (F) and unclassifiable and paced beats (U). Nonlinear features, such as higher-order statistics and cumulants, and nonlinear feature-reduction methods, such as independent component analysis, are combined with linear features, namely, the principal component analysis of discrete wavelet transform coefficients. The features are tested for their ability to differentiate different classes of data using different classifiers, namely, the support vector machine and neural network methods with tenfold cross-validation. Our proposed method is able to classify the N, S, V, F and U arrhythmia classes with high accuracy (98.91%) using a combined support vector machine and radial basis function method. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
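The combination idea above (linear DWT-PCA features concatenated with nonlinear higher-order statistics, fed to an RBF-kernel SVM) can be sketched compactly. This is a minimal illustration on synthetic beats, assuming the PyWavelets package for the DWT; a real study would use labelled ECG beats.

```python
# Sketch of the feature combination: linear features (PCA of DWT
# coefficients) concatenated with simple nonlinear features (higher-order
# statistics), classified with an RBF-kernel SVM under tenfold CV.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
beats = rng.normal(size=(200, 256))            # placeholder heartbeat segments
y = rng.integers(0, 5, size=200)               # the 5 AAMI classes: N, S, V, F, U

dwt = np.array([np.concatenate(pywt.wavedec(b, "db4", level=4)) for b in beats])
linear_feats = PCA(n_components=12).fit_transform(dwt)           # linear part
nonlinear_feats = np.column_stack([skew(beats, axis=1),
                                   kurtosis(beats, axis=1)])     # nonlinear part

X = np.hstack([linear_feats, nonlinear_feats])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=10).mean())                  # tenfold CV
```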
Device with Functions of Linear Motor and Non-contact Power Collector for Wireless Drive
NASA Astrophysics Data System (ADS)
Fujii, Nobuo; Mizuma, Tsuyoshi
The authors propose a new apparatus providing both propulsion and non-contact power collection for a future vehicle that can run like an electric vehicle, supplied from an onboard battery source along most of the route except near stations. The batteries or power capacitors are charged without contact from a winding connected to the commercial power supply on the ground at stations and similar locations. The apparatus combines the functions of a linear motor and a transformer; the basic configuration is a wound-secondary linear induction motor (LIM). In this paper, a wound-type LIM with a concentrated single-phase primary winding on the ground is treated from the viewpoint of a low-cost arrangement. The secondary winding is switched to a single-phase connection for zero thrust in transformer operation, and to a two-phase connection for linear motor operation. The change of connection is performed by a special onboard converter for charging and linear drive. The characteristics are studied analytically.
Inductive Linear-Position Sensor/Limit-Sensor Units
NASA Technical Reports Server (NTRS)
Alhom, Dean; Howard, David; Smith, Dennis; Dutton, Kenneth
2007-01-01
A new sensor provides an absolute position measurement. The unit is built into a motorized linear-translation stage and contains, at each end, an electronic unit that functions as both (1) a non-contact sensor that measures the absolute position of the stage and (2) a non-contact equivalent of a limit switch that is tripped when the stage reaches the nominal limit position. The need for such an absolute linear-position-sensor/limit-sensor unit arises in the case of a linear-translation stage that is part of a larger system in which the actual stopping position of the stage (relative to the nominal limit position) must be known. Because inertia inevitably causes the stage to run somewhat past the nominal limit position, the tripping of a standard limit switch or other limit sensor does not provide the required indication of the actual stopping position. This innovative sensor unit operates on an electromagnetic-induction principle similar to that of linear variable differential transformers (LVDTs).
NASA Astrophysics Data System (ADS)
Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Zeng, Wenzhi; Zhang, Yonggen; Sun, Fangqiang; Shi, Liangsheng
2018-03-01
Hydraulic tomography (HT) is a recently developed technology for characterizing high-resolution, site-specific heterogeneity using hydraulic data (nd) from a series of cross-hole pumping tests. To properly account for the subsurface heterogeneity and to flexibly incorporate additional information, geostatistical inverse models, which permit a large number of spatially correlated unknowns (ny), are frequently used to interpret the collected data. However, the memory storage requirements for the covariance of the unknowns (ny × ny) in these models are prodigious for large-scale 3-D problems. Moreover, the sensitivity evaluation is often computationally intensive using the traditional difference method (ny forward runs). Although employment of the adjoint method can reduce the cost to nd forward runs, the adjoint model requires an intrusive coding effort. In order to resolve these issues, this paper presents a Reduced-Order Successive Linear Estimator (ROSLE) for analyzing HT data. This new estimator approximates the covariance of the unknowns using a Karhunen-Loeve Expansion (KLE) truncated to order nkl, and it calculates the directional sensitivities (in the directions of the nkl eigenvectors) to form the covariance and cross-covariance used in the Successive Linear Estimator (SLE). In addition, the covariance of the unknowns is updated at every iteration by updating the eigenvalues and eigenfunctions. The computational advantages of the proposed algorithm are demonstrated through numerical experiments and a 3-D transient HT analysis of data from a highly heterogeneous field site.
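The storage saving at the heart of ROSLE comes from truncating the covariance to its leading eigenpairs. A minimal sketch of that Karhunen-Loeve truncation, with an illustrative exponential covariance kernel and sizes chosen for demonstration only:

```python
# Sketch of the KL truncation: the full (ny x ny) covariance of the
# unknowns is replaced by its leading nkl eigenpairs, so only an
# (ny x nkl) basis need be stored and only nkl directional sensitivities
# evaluated. Kernel and sizes are illustrative.
import numpy as np

ny, nkl = 2000, 50
x = np.linspace(0.0, 1.0, ny)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)   # exponential covariance

w, V = np.linalg.eigh(C)                             # ascending eigenvalues
w, V = w[::-1][:nkl], V[:, ::-1][:, :nkl]            # keep the nkl largest

C_kl = (V * w) @ V.T                                 # rank-nkl approximation
print(np.linalg.norm(C - C_kl) / np.linalg.norm(C))  # relative truncation error
print(C.nbytes, V.nbytes + w.nbytes)                 # full vs. reduced storage
```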
Run-off-road and recovery - state estimation and vehicle control strategies
NASA Astrophysics Data System (ADS)
Freeman, P.; Wagner, J.; Alexander, K.
2016-09-01
Despite many advances in vehicle safety technology, traffic fatalities remain a devastating burden on society. With over two-thirds of all fatal single-vehicle crashes occurring off the roadway, run-off-road (ROR) crashes have become the focus of much roadway safety research. Current countermeasures, including roadway infrastructure modifications and some on-board vehicle safety systems, remain limited in their approach as they do not directly address the critical factor of driver behaviour. It has been shown that ROR crashes are often the result of poor driver performance leading up to the crash. In this study, the performance of two control algorithms, sliding control and linear quadratic control, was investigated for use in an autonomous ROR vehicle recovery system. The two controllers were simulated amongst a variety of ROR conditions where typical driver performance was inadequate to safely operate the vehicle. The sliding controller recovered the fastest within the nominal conditions but exhibited large variability in performance amongst the more extreme ROR scenarios. Despite some small sacrifices in lateral error and yaw rate, the linear quadratic controller demonstrated a higher level of consistency and stability amongst the various conditions examined. Overall, the linear quadratic controller recovered the vehicle 25% faster than the sliding controller while using 70% less steering, which combined with its robust performance, indicates its high potential as an autonomous ROR countermeasure.
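The linear quadratic controller praised above is, in its standard form, the solution of an algebraic Riccati equation. A minimal sketch for a simplified lateral recovery model follows; the double-integrator dynamics and the weights are illustrative stand-ins, not the paper's vehicle model.

```python
# Sketch of a linear quadratic regulator for a simplified lateral model:
# states are lateral offset from the lane and its rate; the input is a
# steering-induced lateral acceleration.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])          # double-integrator lateral dynamics
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])            # penalize lateral error and its rate
R = np.array([[5.0]])               # penalize steering effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # u = -K x minimizes the quadratic cost
print("gain:", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```

Raising R relative to Q is the knob that trades recovery speed against steering effort, which is the trade-off the study quantifies (25% faster recovery with 70% less steering than the sliding controller).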
Spontaneous Entrainment of Running Cadence to Music Tempo.
Van Dyck, Edith; Moens, Bart; Buhmann, Jeska; Demey, Michiel; Coorevits, Esther; Dalla Bella, Simone; Leman, Marc
Since accumulating evidence suggests that step rate is strongly associated with running-related injuries, it is important for runners to exercise at an appropriate running cadence. As music tempo has been shown to be capable of impacting exercise performance of repetitive endurance activities, it might also serve as a means to (re)shape running cadence. The aim of this study was to validate the impact of music tempo on running cadence. Sixteen recreational runners ran four laps of 200 m (i.e. 800 m in total); this task was repeated 11 times with a short break in between each four-lap sequence. During the first lap of a sequence, participants ran at a self-paced tempo without musical accompaniment. Running cadence of the first lap was registered, and during the second lap, music with a tempo matching the assessed cadence was played. In the final two laps, the music tempo was either increased/decreased by 3.00, 2.50, 2.00, 1.50, or 1.00% or was kept stable. This range was chosen since the aim of this study was to test spontaneous entrainment (an average person can distinguish tempo variations of about 4%). Each participant performed all conditions. Imperceptible shifts in musical tempi in proportion to the runner's self-paced running tempo significantly influenced running cadence (p < .001). Contrasts revealed a linear relation between the tempo conditions and adaptation in running cadence (p < .001). In addition, a significant effect of condition on the level of entrainment was revealed (p < .05), which suggests that maximal effects of music tempo on running cadence can only be obtained up to a certain level of tempo modification. Finally, significantly higher levels of tempo entrainment were found for female participants compared to their male counterparts (p < .05). The applicable contribution of these novel findings is that music tempo could serve as an unprompted means to impact running cadence. As increases in step rate may prove beneficial in the prevention and treatment of common running-related injuries, this finding could be especially relevant for treatment purposes, such as exercise prescription and gait retraining. Music tempo can spontaneously impact running cadence. A basin for unsolicited entrainment of running cadence to music tempo was discovered. The effect of music tempo on running cadence proves to be stronger for women than for men.
Technology evaluation of man-rated acceleration test equipment for vestibular research
NASA Technical Reports Server (NTRS)
Taback, I.; Kenimer, R. L.; Butterfield, A. J.
1983-01-01
The considerations for eliminating acceleration noise cues in horizontal, linear, cyclic-motion sleds intended for both ground and Shuttle-flight applications are addressed. The principal concerns are the acceleration transients associated with changes in the direction of motion of the carriage. The study presents a design limit for acceleration cues or transients based upon published measurements of thresholds of human perception of linear cyclic motion. The sources and levels of motion transients are presented based upon measurements obtained from existing sled systems. The approach to a noise-free system recommends the use of air bearings for the carriage support and moving-coil linear induction motors operating at low frequency as the drive system. Metal belts running on air-bearing pulleys provide an alternate approach to the drive system. The appendix presents a discussion of alternate testing techniques intended to provide preliminary data by means of pendulums, linear-motion devices, and commercial air-bearing tables.
Schaefer, J; Burckhardt, B B; Tins, J; Bartel, A; Laeer, S
2017-12-01
Heart failure is well investigated in adults, but data in children are lacking. To overcome this shortage of reliable data, appropriate bioanalytical assays are required. The aim of this work was the development and validation of a bioanalytical assay for the determination of aldosterone concentrations in small sample volumes, applicable to clinical studies under Good Clinical Laboratory Practice. An immunoassay was developed based on a commercially available enzyme-linked immunosorbent assay and validated according to the current bioanalytical guidelines of the EMA and FDA. The assay (range 31.3-1000 pg/mL [86.9-2775 pmol/L]) is characterized by a between-run accuracy of -3.8% to -0.8% and a between-run imprecision ranging from 4.9% to 8.9% (coefficient of variation, CV). For within-run accuracy, the relative error was between -11.1% and +9.0%, while within-run imprecision ranged from 1.2% to 11.8% (CV). For parallelism and dilutional linearity, the relative error of back-calculated concentrations varied from -14.1% to +8.4% and from -7.4% to +10.5%, respectively. The immunoassay is compliant with the bioanalytical guidelines of the EMA and FDA and allows accurate and precise aldosterone determinations. As the assay can run low-volume samples, it is especially valuable for pediatric investigations.
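The between-run figures quoted above boil down to simple arithmetic on run means. A minimal sketch with made-up quality-control data; note that a fully guideline-compliant validation would estimate variance components by nested ANOVA, which this simple version omits.

```python
# Sketch of the validation arithmetic: between-run accuracy (relative
# error of the run means vs. the nominal concentration) and between-run
# imprecision (CV of the run means), on made-up QC data.
import numpy as np

nominal = 250.0                                   # pg/mL, spiked QC level
runs = np.array([[242.0, 251.0, 247.0],           # run 1 replicates
                 [259.0, 263.0, 255.0],           # run 2
                 [240.0, 236.0, 244.0]])          # run 3

run_means = runs.mean(axis=1)
accuracy = (run_means.mean() - nominal) / nominal * 100.0   # relative error, %
cv = run_means.std(ddof=1) / run_means.mean() * 100.0       # between-run CV, %
print(f"between-run accuracy {accuracy:+.1f}%, imprecision {cv:.1f}% CV")
```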
Darrall-Jones, Joshua D; Jones, Ben; Till, Kevin
2016-05-01
The purpose of this study was to evaluate the anthropometric, sprint, and high-intensity running profiles of English academy rugby union players by playing position, and to investigate the relationships between anthropometric, sprint, and high-intensity running characteristics. Data were collected from 67 academy players after the off-season period and consisted of anthropometrics (height, body mass, sum of 8 skinfolds [∑SF]), a 40-m linear sprint (5-, 10-, 20-, and 40-m splits), the Yo-Yo intermittent recovery test level 1 (Yo-Yo IRTL-1), and the 30-15 intermittent fitness test (30-15 IFT). Forwards displayed greater stature, body mass, ∑SF, sprint times, and sprint momentum, with lower high-intensity running ability and sprint velocities, than backs. Comparisons between age categories showed body mass and sprint momentum to have the largest differences between consecutive age categories for forwards and backs, whereas 20-40-m sprint velocity was discriminative for forwards between under-16s, 18s, and 21s. Relationships between anthropometrics, sprint velocity, momentum, and high-intensity running ability demonstrated body mass to impact negatively on sprint velocity (10 m; r = -0.34 to -0.46) and positively on sprint momentum (e.g., 5 m; r = 0.85-0.93), with large to very large negative relationships with the Yo-Yo IRTL-1 (r = -0.65 to -0.74) and 30-15 IFT (r = -0.59 to -0.79). These findings suggest that there are distinct anthropometric, sprint, and high-intensity running ability differences between and within positions in junior rugby union players. The development of sprint and high-intensity running ability may be impacted by continued increases in body mass, as there seems to be a trade-off between momentum, velocity, and the ability to complete high-intensity running.
Jensupakarn, Auearree; Kanitpong, Kunnawee
2018-04-01
In Thailand, red light running is considered one of the most dangerous behaviors at intersections. Red light running (RLR) is the failure to obey the traffic control signal. Motorcycle riders and car drivers who run through red lights may be influenced by human factors or by the road environment at the intersection; RLR can be advertent or inadvertent behavior influenced by many factors. Little research has been done to evaluate the contributing factors influencing red-light-violation behavior. This study aims to determine the factors influencing red light running behavior, including human characteristics, the physical condition of the intersection, traffic signal operation, and traffic conditions. A total of 92 intersections were observed in Chiang Mai, Nakhon Ratchasima, and Chonburi, major provinces in each region of Thailand. In addition, the socio-economic characteristics of red light runners were obtained from a self-reported questionnaire survey. Binary logistic regression and multiple linear regression models were used to determine the characteristics of red light runners and the factors influencing rates of red light running, respectively. The results from this study can help in understanding the characteristics of red light runners and the factors leading them to run red lights. For motorcycle riders and car drivers, age, gender, occupation, driving license, helmet/seatbelt use, and the probability of being penalized when running the red light significantly affect RLR behavior. In addition, the results indicated that vehicle travelling direction, time of day, existence of a turning lane, number of lanes, lane width, intersection sight distance, type of traffic signal pole, type of traffic signal operation, length of the yellow time interval, approaching speed, distance from the intersection warning sign to the stop line, and pavement roughness significantly affect RLR rates. Copyright © 2018 Elsevier Ltd. All rights reserved.
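The two-model setup above pairs a binary logistic regression (driver level: did this driver run the light?) with an ordinary least-squares model (intersection level: RLR rate). A minimal sketch with synthetic placeholders for predictors and data, not the Thai field data:

```python
# Sketch: logistic regression for individual RLR events, linear regression
# for per-intersection RLR rates. All data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 400
X = np.column_stack([rng.normal(35, 10, n),     # driver age
                     rng.normal(50, 12, n),     # approach speed, km/h
                     rng.normal(3.5, 0.6, n)])  # yellow interval, s
logit = 0.04 * (X[:, 1] - 50) - 0.8 * (X[:, 2] - 3.5) - 0.02 * (X[:, 0] - 35)
y = rng.random(n) < 1 / (1 + np.exp(-logit))    # 1 = ran the red light

driver_model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
print(driver_model.named_steps["logisticregression"].coef_)

# Intersection-level rate model (one row per intersection, 92 as in the study).
Z = rng.normal(size=(92, 4))                    # e.g., lanes, sight distance, ...
rate = Z @ np.array([0.5, -0.2, 0.1, 0.3]) + rng.normal(0, 0.2, 92)
print(LinearRegression().fit(Z, rate).coef_)
```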
Influence of ABO blood group on sports performance.
Lippi, Giuseppe; Gandini, Giorgio; Salvagno, Gian Luca; Skafidas, Spyros; Festa, Luca; Danese, Elisa; Montagnana, Martina; Sanchis-Gomar, Fabian; Tarperi, Cantor; Schena, Federico
2017-06-01
Despite being a recessive trait, the O blood group is the most frequent worldwide among the ABO blood types. Since running performance has been recognized as a major driver of evolutionary advantage in humans, we planned a study to investigate whether the ABO blood group may have an influence on endurance running performance in middle-aged recreational athletes. The study population consisted of 52 recreational, middle-aged, Caucasian athletes (mean age: 49±13 years; body mass index: 23.4±2.3 kg/m²), regularly engaged in endurance activity. The athletes participated in a scientific event called "Run for Science" (R4S), entailing the completion of a 21.1 km (half-marathon) run under competitive conditions. The ABO blood type status of the participants was provided by the local Service of Transfusion Medicine. In univariate analysis, running performance was significantly associated with age and weekly training, but not with body mass index. In multiple linear regression analysis, age and weekly training remained significantly associated with running performance. The ABO blood group status was also found to be independently associated with running time, with O blood type athletes performing better than those with non-O blood groups. Overall, age, weekly training and O blood group type explained 62.2% of the total variance of running performance (age, 41.6%; training regimen, 10.5%; ABO blood group, 10.1%). The results of our study show that recreational athletes with the O blood group have better endurance performance than those with non-O blood group types. This finding may provide additional support to the putative evolutionary advantages of carrying the O blood group.
NASA Technical Reports Server (NTRS)
Egbert, Gary D.
2001-01-01
A numerical ocean tide model has been developed and tested using highly accurate TOPEX/Poseidon (T/P) tidal solutions. The hydrodynamic model is based on time stepping a finite difference approximation to the non-linear shallow water equations. Two novel features of our implementation are a rigorous treatment of self-attraction and loading (SAL), and a physically based parameterization for internal tide (IT) radiation drag. The model was run for a range of grid resolutions, and with variations in model parameters and bathymetry. With a rational treatment of SAL and IT drag, the model run at high resolution (1/12 degree) fits the T/P solutions to within 5 cm RMS in the open ocean. Both the rigorous SAL treatment and the IT drag parameterization are required to obtain solutions of this quality. The sensitivity of the solution to perturbations in bathymetry suggests that the fit to T/P is probably now limited by errors in this critical input. Since the model is not constrained by any data, we can test the effect of dropping sea level to match estimated bathymetry from the last glacial maximum (LGM). Our results suggest that the 100 m drop in sea level at the LGM would have significantly increased tidal amplitudes in the North Atlantic and increased overall tidal dissipation by about 40%. However, details in tidal solutions for the past 20 ka are sensitive to the assumed stratification. IT drag accounts for a significant fraction of dissipation, especially in the LGM, when large areas of present-day shallow sea were exposed, and this parameter is poorly constrained at present.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Huanzhao
2015-05-16
The top quark is a very special fundamental particle in the Standard Model (SM), mainly due to its heavy mass. The top quark has an extremely short lifetime and decays before hadronization, which reduces the complexity of measuring its mass. The top quark couples very strongly to the Higgs boson, since the fermion-Higgs Yukawa coupling depends linearly on the fermion's mass; therefore, the top quark is also heavily involved in Higgs production and related studies. A precise measurement of the top quark mass is very important, as it allows for a self-consistency check of the SM and also gives an insight into the stability of our universe in the SM context. This dissertation presents my work on the measurement of the top quark mass in dilepton final states of t$\bar{t}$ events in p$\bar{p}$ collisions at √s = 1.96 TeV, using the full DØ Run II data set corresponding to an integrated luminosity of 9.7 fb⁻¹ at the Fermilab Tevatron. I extracted the top quark mass by reconstructing event kinematics and integrating over expected neutrino rapidity distributions to obtain solutions over a scanned range of top quark mass hypotheses. The analysis features a comprehensive optimization that I made to minimize the expected statistical uncertainty. I also improved the calibration of jets in dilepton events by using the calibration determined in t$\bar{t}$ → lepton+jets events, which reduces the otherwise limiting systematic uncertainty from the jet energy scale. The measured mass is 173.11 ± 1.34 (stat) +0.83/−0.72 (syst) GeV.
Self-Motion and the Shaping of Sensory Signals
Jenks, Robert A.; Vaziri, Ashkan; Boloori, Ali-Reza
2010-01-01
Sensory systems must form stable representations of the external environment in the presence of self-induced variations in sensory signals. It is also possible that the variations themselves may provide useful information about self-motion relative to the external environment. Rats have been shown to be capable of fine texture discrimination and object localization based on palpation by facial vibrissae, or whiskers, alone. During behavior, the facial vibrissae brush against objects and undergo deflection patterns that are influenced both by the surface features of the objects and by the animal's own motion. The extent to which behavioral variability shapes the sensory inputs to this pathway is unknown. Using high-resolution, high-speed videography of unconstrained rats running on a linear track, we measured several behavioral variables including running speed, distance to the track wall, and head angle, as well as the proximal vibrissa deflections while the distal portions of the vibrissae were in contact with periodic gratings. The measured deflections, which serve as the sensory input to this pathway, were strongly modulated both by the properties of the gratings and the trial-to-trial variations in head-motion and locomotion. Using presumed internal knowledge of locomotion and head-rotation, gratings were classified using short-duration trials (<150 ms) from high-frequency vibrissa motion, and the continuous trajectory of the animal's own motion through the track was decoded from the low frequency content. Together, these results suggest that rats have simultaneous access to low- and high-frequency information about their environment, which has been shown to be parsed into different processing streams that are likely important for accurate object localization and texture coding. PMID:20164407
Venkataraman, Vinay; Turaga, Pavan; Baran, Michael; Lehrer, Nicole; Du, Tingfang; Cheng, Long; Rikakis, Thanassis; Wolf, Steven L.
2016-01-01
In this paper, we propose a general framework for tuning component-level kinematic features using therapists' overall impressions of movement quality, in the context of a Home-based Adaptive Mixed Reality Rehabilitation (HAMRR) system. We propose a linear combination of non-linear kinematic features to model wrist movement, and an approach to learn feature thresholds and weights from high-level labels of overall movement quality provided by a therapist. The kinematic features are chosen such that the wrist-movement quality they capture correlates with clinical assessment scores. Further, the proposed features are designed to be reliably extracted from an inexpensive and portable motion-capture system using a single reflective marker on the wrist. Using a dataset collected from ten stroke survivors, we demonstrate that the framework can be reliably used for movement quality assessment in HAMRR systems. The system is currently being deployed for large-scale evaluations and represents an increasingly important application area of motion capture and activity analysis. PMID:25438331
NASA Astrophysics Data System (ADS)
Soltanian-Zadeh, Hamid; Windham, Joe P.
1992-04-01
Maximizing the minimum absolute contrast-to-noise ratio (CNR) between a desired feature and multiple interfering processes, by linear combination of the images in a magnetic resonance imaging (MRI) scene sequence, is attractive for MRI analysis and interpretation. A general formulation of the problem is presented, along with a novel solution utilizing the simple and numerically stable method of Gram-Schmidt orthogonalization. We derive explicit solutions first for the case of two interfering features, then for three interfering features, and finally, using a typical example, for an arbitrary number of interfering features. For the case of two interfering features, we also provide simplified analytical expressions for the signal-to-noise ratios (SNRs) and CNRs of the filtered images. The technique is demonstrated through its application to simulated and acquired MRI scene sequences of a human brain with a cerebral infarction. For these applications, a 50 to 100% improvement in the smallest absolute CNR is obtained.
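One simplified reading of the construction above can be sketched directly: orthogonalize the desired-vs-interferer contrast against the interferer-vs-interferer contrasts, so that all interfering features map to a single common level. This is an illustrative sketch, not the paper's exact minimax solution; the signatures below are synthetic stand-ins.

```python
# Sketch of an interference-suppressing linear filter built with
# Gram-Schmidt orthogonalization. Each feature has a "signature": its
# intensity across the N images of the scene sequence. A filter w that is
# orthogonal to all interferer-vs-interferer contrasts gives every
# interfering feature the same output level (zero contrast among them).
import numpy as np

rng = np.random.default_rng(6)
N = 4                                     # images in the MRI scene sequence
s_desired = rng.normal(size=N)            # signature of the desired feature
s_interf = rng.normal(size=(2, N))        # two interfering features

diffs = s_interf[1:] - s_interf[0]        # directions separating interferers
target = s_desired - s_interf[0]

basis = []
for v in diffs:                           # Gram-Schmidt on interferer contrasts
    for b in basis:
        v = v - (v @ b) * b
    nv = np.linalg.norm(v)
    if nv > 1e-12:
        basis.append(v / nv)

w = target.copy()
for b in basis:
    w = w - (w @ b) * b                   # remove interferer contrast from filter
w /= np.linalg.norm(w)                    # unit noise gain for white noise

print("interferer levels:", s_interf @ w) # equal: zero contrast among interferers
print("desired level:    ", s_desired @ w)
```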
Roemers, P; Mazzola, P N; De Deyn, P P; Bossers, W J; van Heuvelen, M J G; van der Zee, E A
2018-04-15
Voluntary strength training methods for rodents are necessary to investigate the effects of strength training on cognition and the brain. However, few voluntary methods are available. The current study tested functional and muscular effects of two novel voluntary strength training methods, burrowing (digging a substrate out of a tube) and unloaded tower climbing, in male C57Bl6 mice. To compare these two novel methods with existing exercise methods, resistance running and (non-resistance) running were included. Motor coordination, grip strength and muscle fatigue were measured at baseline, halfway through and near the end of a fourteen week exercise intervention. Endurance was measured by an incremental treadmill test after twelve weeks. Both burrowing and resistance running improved forelimb grip strength as compared to controls. Running and resistance running increased endurance in the treadmill test and improved motor skills as measured by the balance beam test. Post-mortem tissue analyses revealed that running and resistance running induced Soleus muscle hypertrophy and reduced epididymal fat mass. Tower climbing elicited no functional or muscular changes. As a voluntary strength exercise method, burrowing avoids the confounding effects of stress and positive reinforcers elicited in forced strength exercise methods. Compared to voluntary resistance running, burrowing likely reduces the contribution of aerobic exercise components. Burrowing qualifies as a suitable voluntary strength training method in mice. Furthermore, resistance running shares features of strength training and endurance (aerobic) exercise and should be considered a multi-modal aerobic-strength exercise method in mice. Copyright © 2017 Elsevier B.V. All rights reserved.
Special Feature: Automotive Technology.
ERIC Educational Resources Information Center
Wagner, Margaret; And Others
1993-01-01
Includes "National Trouble Shooting Contest--Training Technicians, Not Mechanics" (Wagner); "Front Wheel Drive on a Small Scale" (Waggoner); "Air Bags in Hit and Run on Rack and Pinion Technicians" (Collard); and "Future Technology--A Blind Spot Detector for Highway Driving" (Zoghi, Bellubi). (JOW)
Genetics Home Reference: Fryns syndrome
The cause of Fryns syndrome is unknown. The disorder is thought to be genetic because it tends to run in families and has features similar to those of other ...
An Evaluation of Feature Learning Methods for High Resolution Image Classification
NASA Astrophysics Data System (ADS)
Tokarczyk, P.; Montoya, J.; Schindler, K.
2012-07-01
Automatic image classification is one of the fundamental problems of remote sensing research. The classification problem is even more challenging in high-resolution images of urban areas, where the objects are small and heterogeneous. Two questions arise, namely which features to extract from the raw sensor data to capture the local radiometry and image structure at each pixel or segment, and which classification method to apply to the feature vectors. While classifiers are nowadays well understood, selecting the right features remains a largely empirical process. Here we concentrate on the features. Several methods are evaluated which allow one to learn suitable features from unlabelled image data by analysing the image statistics. In a comparative study, we evaluate unsupervised feature learning with different linear and non-linear learning methods, including principal component analysis (PCA) and deep belief networks (DBN). We also compare these automatically learned features with popular choices of ad-hoc features including raw intensity values, standard combinations like the NDVI, a few PCA channels, and texture filters. The comparison is done in a unified framework using the same images, the target classes, reference data and a Random Forest classifier.
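The unified framework above slots any feature learner in front of the same classifier. A minimal sketch with PCA as the unsupervised feature-learning step and a Random Forest classifier; patches and labels are synthetic placeholders for the evaluation data.

```python
# Sketch of the comparison pipeline: learn features from unlabelled image
# patches with PCA, then classify with a Random Forest. A DBN or ad-hoc
# features (raw intensities, NDVI, texture filters) would simply replace
# the PCA step in the same pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
patches = rng.random(size=(1000, 9 * 9 * 4))   # 9x9 patches, 4 bands, flattened
labels = rng.integers(0, 5, size=1000)         # e.g., building/road/tree/grass/water

pipeline = make_pipeline(PCA(n_components=16),
                         RandomForestClassifier(n_estimators=100, random_state=0))
print(cross_val_score(pipeline, patches, labels, cv=3).mean())
```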
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
Runtime Analysis of Linear Temporal Logic Specifications
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus
2001-01-01
This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
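A hand-built instance of such an observer makes the idea concrete. The sketch below monitors the single formula G(p -> F q) ("every p is eventually answered by q") over a finite trace with one bit of automaton state; the general translation described above produces such automata automatically for arbitrary formulae.

```python
# Sketch of runtime monitoring with a finite-trace observer automaton for
# G(p -> F q): track whether an unanswered p is pending, and deliver a
# verdict when the trace ends.
def monitor_g_p_implies_f_q(trace):
    """trace: iterable of states, each a set of atomic proposition names."""
    pending = False                     # automaton state: an unanswered p?
    for state in trace:
        if "q" in state:
            pending = False             # obligation discharged
        elif "p" in state:
            pending = True              # new obligation opened
    return not pending                  # accept iff no obligation remains

print(monitor_g_p_implies_f_q([{"p"}, set(), {"q"}, {"p", "q"}]))  # True
print(monitor_g_p_implies_f_q([{"p"}, set(), set()]))              # False
```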
Biswas, Subir; Chattopadhyay, Monobir; Pal, Rabindranath
2011-01-01
The turbomolecular pump of the Magnetized Plasma Linear Experimental device is protected from damage by a magnetic shield. As the pump runs continuously in a magnetic-field environment during a plasma physics experiment, it may be damaged by eddy-current effects. For the design and testing of the shield, we first simulated various aspects of the magnetic shield layout in detail using a readily available field-design code. The performance of the shield, made from two half-cylinders of soft iron, is experimentally observed to agree very well with the simulation results.
Linear solvation energy relationships in normal phase chromatography based on gradient separations.
Wu, Di; Lucy, Charles A
2017-09-22
By coupling the modified Soczewiński model with a single gradient run, a gradient method was developed to build a linear solvation energy relationship (LSER) for normal-phase chromatography. The gradient method was tested on dinitroanilinopropyl (DNAP) and silica columns with hexane/dichloromethane (DCM) mobile phases. LSER models built from the gradient separation agree with those derived from a series of isocratic separations. Both models have similar LSER coefficients and comparable goodness of fit, but the LSER model based on the gradient separation required fewer trial-and-error experiments. Copyright © 2017 Elsevier B.V. All rights reserved.
Computer simulation results of attitude estimation of earth orbiting satellites
NASA Technical Reports Server (NTRS)
Kou, S. R.
1976-01-01
Computer simulation results of attitude estimation of Earth-orbiting satellites (including the Space Telescope) subjected to environmental disturbances and noises are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and were run on HP 9830A and HP 9866A computers. Simulation results show that the decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher-order systems, this filter has computational advantages (i.e., smaller integration and roundoff errors) over a Kalman filter.
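For reference, the Kalman filter used as the comparison tool above follows the standard predict/update recursion. A minimal sketch for a toy attitude model (constant-rate kinematics with noisy angle measurements); the model and noise levels are illustrative, not those of the study.

```python
# Sketch of a linear Kalman filter tracking attitude angle and rate from
# noisy angle measurements: predict with the state model, then correct
# with each measurement.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition: angle, rate
H = np.array([[1.0, 0.0]])               # we observe the angle only
Q = 1e-4 * np.eye(2)                     # process (disturbance) noise
R = np.array([[0.05]])                   # measurement noise

rng = np.random.default_rng(8)
x_true = np.array([0.0, 0.2])
x_hat, P = np.zeros(2), np.eye(2)
for _ in range(100):
    x_true = F @ x_true + rng.multivariate_normal([0, 0], Q)
    z = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]), 1)
    # predict
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P

print("estimate:", x_hat, "truth:", x_true)
```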
An Ada Linear-Algebra Software Package Modeled After HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, Allan R.; Lawson, Charles L.
1990-01-01
New avionics software written more easily. Software package extends Ada programming language to include linear-algebra capabilities similar to those of HAL/S programming language. Designed for such avionics applications as Space Station flight software. In addition to built-in functions of HAL/S, package incorporates quaternion functions used in Space Shuttle and Galileo projects and routines from LINPAK solving systems of equations involving general square matrices. Contains two generic programs: one for floating-point computations and one for integer computations. Written on IBM/AT personal computer running under PC DOS, v.3.1.
Eckels, Josh; Nathe, Cory; Nelson, Elizabeth K; Shoemaker, Sara G; Nostrand, Elizabeth Van; Yates, Nicole L; Ashley, Vicki C; Harris, Linda J; Bollenbeck, Mark; Fong, Youyi; Tomaras, Georgia D; Piehler, Britt
2013-04-30
Immunoassays that employ multiplexed bead arrays produce high information content per sample. Such assays are now frequently used to evaluate humoral responses in clinical trials. Integrated software is needed for the analysis, quality control, and secure sharing of the high volume of data produced by such multiplexed assays. Software that facilitates data exchange and provides flexibility to perform customized analyses (including multiple curve fits and visualizations of assay performance over time) could increase scientists' capacity to use these immunoassays to evaluate human clinical trials. The HIV Vaccine Trials Network and the Statistical Center for HIV/AIDS Research and Prevention collaborated with LabKey Software to enhance the open source LabKey Server platform to facilitate workflows for multiplexed bead assays. This system now supports the management, analysis, quality control, and secure sharing of data from multiplexed immunoassays that leverage Luminex xMAP® technology. These assays may be custom or kit-based. Newly added features enable labs to: (i) import run data from spreadsheets output by Bio-Plex Manager™ software; (ii) customize data processing, curve fits, and algorithms through scripts written in common languages, such as R; (iii) select script-defined calculation options through a graphical user interface; (iv) collect custom metadata for each titration, analyte, run and batch of runs; (v) calculate dose-response curves for titrations; (vi) interpolate unknown concentrations from curves for titrated standards; (vii) flag run data for exclusion from analysis; (viii) track quality control metrics across runs using Levey-Jennings plots; and (ix) automatically flag outliers based on expected values. Existing system features allow researchers to analyze, integrate, visualize, export and securely share their data, as well as to construct custom user interfaces and workflows. Unlike other tools tailored for Luminex immunoassays, LabKey Server allows labs to customize their Luminex analyses using scripting while still presenting users with a single, graphical interface for processing and analyzing data. The LabKey Server system also stands out among Luminex tools for enabling smooth, secure transfer of data, quality control information, and analyses between collaborators. LabKey Server and its Luminex features are freely available as open source software at http://www.labkey.com under the Apache 2.0 license.
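Two of the listed features, fitting a dose-response curve to a titrated standard and interpolating unknowns from it, reduce to simple curve arithmetic. LabKey scripts do this in R; the sketch below shows the equivalent computation with a standard four-parameter logistic (4PL) on made-up titration data.

```python
# Sketch of 4PL dose-response fitting and interpolation of unknowns,
# on synthetic titration data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """a: response at zero dose, b: slope factor, c: EC50, d: response at
    infinite dose."""
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([31.3, 62.5, 125.0, 250.0, 500.0, 1000.0])   # standard titration
mfi = four_pl(conc, 40.0, 1.3, 180.0, 22000.0)               # "true" curve
mfi = mfi * (1 + np.random.default_rng(9).normal(0, 0.03, mfi.size))

popt, _ = curve_fit(four_pl, conc, mfi,
                    p0=[mfi.min(), 1.0, 200.0, mfi.max()])
a, b, c, d = popt

def interpolate(y):
    # Invert the fitted curve to back-calculate a concentration.
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print("round-trip concentration:", interpolate(four_pl(400.0, *popt)))  # ~400
```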
Yip, Stephen S F; Coroller, Thibaud P; Sanford, Nina N; Mamon, Harvey; Aerts, Hugo J W L; Berbeco, Ross I
2016-01-01
Although change in standardized uptake value (SUV) measures and PET-based textural features during treatment have shown promise in tumor response prediction, it is unclear which quantitative measure is the most predictive. We compared the relationship between PET-based features and pathologic response and overall survival with the SUV measures in esophageal cancer. Fifty-four esophageal cancer patients received PET/CT scans before and after chemoradiotherapy. Of these, 45 patients underwent surgery and were classified into complete, partial, and non-responders to the preoperative chemoradiation. SUVmax and SUVmean, two co-occurrence matrix textures (Entropy and Homogeneity), two run-length matrix (RLM) textures (high-gray-run emphasis and short-run high-gray-run emphasis), and two size-zone matrix textures (high-gray-zone emphasis and short-zone high-gray emphasis) were computed. The relationship between the relative difference of each measure at different treatment time points and the pathologic response and overall survival was assessed using the area under the receiver-operating-characteristic curve (AUC) and Kaplan-Meier statistics, respectively. All textures, except Homogeneity, were better related to pathologic response than SUVmax and SUVmean. Entropy was found to significantly distinguish non-responders from the complete (AUC = 0.79, p = 1.7 × 10⁻⁴) and partial (AUC = 0.71, p = 0.01) responders. Non-responders can also be significantly differentiated from partial and complete responders by the change in the run-length and size-zone matrix textures (AUC = 0.71-0.76, p ≤ 0.02). Homogeneity, SUVmax, and SUVmean failed to differentiate between any of the responders (AUC = 0.50-0.57, p ≥ 0.46). However, none of the measures were found to significantly distinguish between complete and partial responders, with AUC < 0.60 (p = 0.37). Median Entropy and RLM textures significantly discriminated patients with good and poor survival (log-rank p < 0.02), while all other textures and survival were poorly related (log-rank p > 0.25). For the patients studied, temporal changes in Entropy and all RLM textures were better correlated with pathological response and survival than the SUV measures. The hypothesis that these metrics can be used as clinical predictors of better patient outcomes will be tested in a larger patient dataset in the future.
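The Entropy texture above is the Shannon entropy of a gray-level co-occurrence matrix (GLCM). A minimal sketch of that computation and of the relative-change predictor, with a synthetic stand-in for a quantized SUV map:

```python
# Sketch of the Entropy texture: build a GLCM for a fixed offset inside
# the tumor region, normalize it, and take the Shannon entropy.
import numpy as np

def cooccurrence_entropy(img, levels=16, dx=1, dy=0):
    q = np.floor(img / img.max() * (levels - 1)).astype(int)   # quantize
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1              # count pairs
    p = glcm / glcm.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()                             # Shannon entropy

rng = np.random.default_rng(10)
suv = rng.gamma(2.0, 2.0, size=(32, 32))                       # fake SUV map
e_pre = cooccurrence_entropy(suv)
suv_post = suv * rng.normal(0.7, 0.1, suv.shape).clip(0.2)     # post-treatment
e_post = cooccurrence_entropy(suv_post)
# The study's predictor is the relative change between the two scans:
print("relative Entropy change:", (e_post - e_pre) / e_pre)
```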
Evaluation of the transport matrix method for simulation of ocean biogeochemical tracers
NASA Astrophysics Data System (ADS)
Kvale, Karin F.; Khatiwala, Samar; Dietze, Heiner; Kriest, Iris; Oschlies, Andreas
2017-06-01
Conventional integration of Earth system and ocean models can accrue considerable computational expenses, particularly for marine biogeochemical applications. Offline numerical schemes in which only the biogeochemical tracers are time stepped and transported using a pre-computed circulation field can substantially reduce the burden and are thus an attractive alternative. One such scheme is the transport matrix method (TMM), which represents tracer transport as a sequence of sparse matrix-vector products that can be performed efficiently on distributed-memory computers. While the TMM has been used for a variety of geochemical and biogeochemical studies, to date the resulting solutions have not been comprehensively assessed against their online counterparts. Here, we present a detailed comparison of the two. It is based on simulations of the state-of-the-art biogeochemical sub-model embedded within the widely used coarse-resolution University of Victoria Earth System Climate Model (UVic ESCM). The default, non-linear advection scheme was first replaced with a linear, third-order upwind-biased advection scheme to satisfy the linearity requirement of the TMM. Transport matrices were extracted from an equilibrium run of the physical model and subsequently used to integrate the biogeochemical model offline to equilibrium. The identical biogeochemical model was also run online. Our simulations show that offline integration introduces some bias to biogeochemical quantities through the omission of the polar filtering used in UVic ESCM and in the offline application of time-dependent forcing fields, with high latitudes showing the largest differences with respect to the online model. Differences in other regions and in the seasonality of nutrients and phytoplankton distributions are found to be relatively minor, giving confidence that the TMM is a reliable tool for offline integration of complex biogeochemical models. Moreover, while UVic ESCM is a serial code, the TMM can be run on a parallel machine with no change to the underlying biogeochemical code, thus providing orders of magnitude speed-up over the online model.
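The TMM's core loop, one sparse matrix-vector product per transport step with biogeochemistry applied between steps, can be sketched in a few lines. The matrix below is a random sparse stand-in for a pre-computed circulation operator (column-normalized here so a transport step conserves total tracer), and the source/sink term is a placeholder for the biogeochemical sub-model.

```python
# Sketch of offline tracer time stepping with the transport matrix method:
# transport is a sparse matvec; sources and sinks are applied in between.
import numpy as np
import scipy.sparse as sp

n = 5000                                           # number of ocean grid boxes
rng = np.random.default_rng(11)
A = sp.random(n, n, density=0.001, random_state=11, format="csr")
A = A + sp.eye(n, format="csr")
A = A.multiply(1.0 / np.asarray(A.sum(axis=0)))    # column-normalize: conserve tracer
A = A.tocsr()

def biogeochem_step(c, dt):
    # Placeholder source/sink: first-order uptake, standing in for the
    # full biogeochemical sub-model that the TMM leaves untouched.
    return c - dt * 0.01 * c

c = rng.random(n)                                  # initial tracer field
for _ in range(1000):                              # offline time stepping
    c = A @ c                                      # transport: sparse matvec
    c = biogeochem_step(c, dt=1.0)

print("total tracer:", c.sum())
```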
Haque, Shafiul; Khan, Saif; Wahid, Mohd; Dar, Sajad A; Soni, Nipunjot; Mandal, Raju K; Singh, Vineeta; Tiwari, Dileep; Lohani, Mohtashim; Areeshi, Mohammed Y; Govender, Thavendran; Kruger, Hendrik G; Jawed, Arshad
2016-01-01
For a commercially viable recombinant intracellular protein production process, efficient cell lysis and protein release is a major bottleneck. The recovery of a recombinant protein, cholesterol oxidase (COD), was studied in a continuous bead milling process. A full factorial response surface methodology (RSM) design was employed and compared to artificial neural networks coupled with a genetic algorithm (ANN-GA). Significant process variables, cell slurry feed rate (A), bead load (B), cell load (C), and run time (D), were investigated and optimized for maximizing COD recovery. RSM predicted an optimum of feed rate of 310.73 mL/h, bead loading of 79.9% (v/v), cell loading of OD600 nm 74, and run time of 29.9 min, with a recovery of ~3.2 g/L. ANN-GA predicted a maximum COD recovery of ~3.5 g/L at an optimum feed rate of 258.08 mL/h, bead loading of 80% (v/v), cell loading of OD600 nm 73.99, and run time of 32 min. An overall 3.7-fold increase in productivity was obtained compared to a batch process. Optimization and comparison of statistical vs. artificial intelligence techniques in a continuous bead milling process is attempted for the first time in this study. We were able to successfully represent the complex non-linear multivariable dependence of enzyme recovery on bead milling parameters. Quadratic second-order response functions are not flexible enough to represent such complex non-linear dependence, whereas an ANN, being a summation function over multiple layers, can represent it; in this case, enzyme recovery as a function of bead milling parameters. Since a GA can optimize even discontinuous functions, the present study is an example of using machine learning (ANN) in combination with evolutionary optimization (GA) to represent poorly characterized biological functions, as is the case for common industrial processes involving biological materials.
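A minimal sketch of the ANN-GA idea, assuming numpy: a genetic algorithm searches the four bead-milling variables against a surrogate response model. The surrogate function, variable bounds, and GA settings below are made-up stand-ins for the trained ANN, not the fitted model from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
lo = np.array([100.0, 60.0, 40.0, 10.0])   # feed rate, bead %, cell OD, time
hi = np.array([400.0, 85.0, 90.0, 40.0])

def surrogate(x):
    """Hypothetical stand-in for the trained ANN response (recovery, g/L)."""
    z = (np.asarray(x) - lo) / (hi - lo)
    return 3.5 - np.sum((z - 0.7) ** 2, axis=-1)

pop = rng.uniform(lo, hi, size=(50, 4))        # random initial population
for gen in range(100):
    fit = surrogate(pop)
    parents = pop[np.argsort(fit)[-25:]]       # truncation selection
    idx = rng.integers(25, size=(25, 4))       # uniform crossover per gene
    kids = parents[idx, np.arange(4)]
    kids += rng.normal(0, 0.02 * (hi - lo), kids.shape)   # mutation
    pop = np.vstack([parents, np.clip(kids, lo, hi)])

best = pop[np.argmax(surrogate(pop))]
print("GA optimum:", best, "predicted recovery:", surrogate(best))
```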
NASA Technical Reports Server (NTRS)
Isachsen, Y. W. (Principal Investigator); Fakundiny, R. H.; Forster, S. W.
1974-01-01
The author has identified the following significant results. Linear anomalies dominate the new geological information derived from ERTS-1 imagery, total lengths now exceeding 26,500 km. Maxima on rose diagrams for ERTS-1 anomalies correspond well with those for mapped faults and topographic lineaments. Multi-scale analysis of linears shows that single topographic linears at 1:2,500,000 may become dashed linears at 1:1,000,000, aligned zones of shorter parallel, en echelon, or conjugate linears at 1:500,000, and shorter linears lacking any conspicuous zonal alignment at 1:250,000. Field work in the Catskills suggests that the prominent new NNE lineaments may be surface manifestations of dip-slip faulting in the basement, and that it may become possible to map major joint sets over extensive plateau regions directly on the imagery. Most circular features found were explained away by U-2 airphoto analysis but several remain as anomalies. Visible glacial features include individual drumlins, drumlinoids, eskers, ice-marginal drainage channels, glacial lake shorelines, sand plains, and end moraines.
Evaluation of ERTS-1 imagery for spectral geological mapping in diverse terranes of New York State
NASA Technical Reports Server (NTRS)
Isachsen, Y. W. (Principal Investigator); Fakundiny, R. H.; Forster, S. W.
1973-01-01
The author has identified the following significant results. Linear anomalies dominate the new geological information derived from ERTS-1 imagery, total lengths now exceeding 6000 km. Experimentation with a variety of viewing techniques suggests that conventional photogeologic analysis of band 7 locates more than 97 percent of all linears found. Bedrock lithologic types are distinguishable only where they are topographically expressed or govern land use signatures. The maxima on rose diagrams for ERTS-1 anomalies correspond well with those for mapped faults and topographic lineaments. A multi-scale analysis of linears showed that single topographic linears at 1:2,500,000 became dashed linears at 1:1,000,000 and aligned zones of shorter parallel, en echelon, or conjugate linears at 1:500,000. Most circular features found were explained away by U-2 airphoto analysis but several remain as anomalies. Visible glacial features include individual drumlins, best seen in winter imagery, drumlinoids, eskers, ice-marginal drainage channels, glacial lake shorelines and sand plains, and end moraines.
Bergstra, S A; Kluitenberg, B; Dekker, R; Bredeweg, S W; Postema, K; Van den Heuvel, E R; Hijmans, J M; Sobhani, S
2015-07-01
Minimalist running shoes have been proposed as an alternative to barefoot running. However, several studies have reported cases of forefoot stress fractures after switching from standard to minimalist shoes. Therefore, the aim of the current study was to investigate the differences in plantar pressure in the forefoot region between running with a minimalist shoe and running with a standard shoe in healthy female runners during overground running. Randomized crossover design. In-shoe plantar pressure measurements were recorded from eighteen healthy female runners. Peak pressure, maximum mean pressure, pressure time integral and instant of peak pressure were assessed for seven foot areas. Force time integral, stride time, stance time, swing time, shoe comfort and landing type were assessed for both shoe types. A linear mixed model was used to analyze the data. Peak pressure and maximum mean pressure were higher in the medial forefoot (13.5% and 7.46%, respectively), central forefoot (37.5% and 29.2%, respectively) and lateral forefoot (37.9% and 20.4%, respectively) for the minimalist shoe condition. Stance time was reduced by 3.81%. No relevant differences in shoe comfort or landing strategy were found. Running with a minimalist shoe increased plantar pressure without a change in landing pattern. This increased pressure in the forefoot region might play a role in the occurrence of metatarsal stress fractures in runners who have switched to minimalist shoes and warrants a cautious approach to transitioning to minimalist shoe use. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
A hybrid incremental projection method for thermal-hydraulics applications
NASA Astrophysics Data System (ADS)
Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.; Berndt, Markus; Francois, Marianne M.; Stagg, Alan K.; Xia, Yidong; Luo, Hong
2016-07-01
A new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced in this paper. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya-Babuška-Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie-Chow interpolation or by using a Petrov-Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to scalable popular linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.
Mesospheric Non-Migrating Tides Generated With Planetary Waves. 1; Characteristics
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Mengel, J. G.; Talaat, E. L.; Porter, H. S.; Chan, K. L.
2003-01-01
We discuss results from a modeling study with our Numerical Spectral Model (NSM) that specifically deals with the non-migrating tides generated in the mesosphere. The NSM extends from the ground to the thermosphere, incorporates Hines' Doppler Spread Parameterization for small-scale gravity waves (GWs), and it describes the major dynamical features of the atmosphere including the wave driven equatorial oscillations (QBO and SAO), and the seasonal variations of tides and planetary waves. Accounting solely for the excitation sources of the solar migrating tides, the NSM generates through dynamical interactions also non-migrating tides in the mesosphere that are comparable in magnitude to those observed. Large non-migrating tides are produced in the diurnal and semi-diurnal oscillations for the zonal mean (m = 0) and in the semidiurnal oscillation for m = 1. In general, significant eastward and westward propagating tides are generated for all the zonal wave numbers m = 1 to 4. To identify the cause, the NSM is run without the solar heating for the zonal mean (m = 0), and the amplitudes of the resulting non-migrating tides are then negligibly small. In this case, the planetary waves are artificially suppressed, which are generated in the NSM through instabilities. This leads to the conclusion that the non-migrating tides are generated through non-linear interactions between planetary waves and migrating tides, as Forbes et al. and Talaat and Liberman had proposed. In an accompanying paper, we present results from numerical experiments, which indicate that gravity wave filtering contributes significantly to produce the non-linear coupling that is involved.
Accelerating epistasis analysis in human genetics with consumer graphics hardware.
Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H
2009-07-24
Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers, Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature-rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore, this GPU system provides extremely cost-effective performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000, while obtaining similar performance on a Beowulf cluster requires 150 CPU cores, which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.
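To make the combinatorial burden concrete, the sketch below exhaustively scores every SNP pair with a simple majority-class accuracy per two-locus genotype cell. This is a hedged stand-in conveying the flavor of an exhaustive epistasis scan, not the MDR algorithm or its GPU implementation; numpy is assumed and the data are random.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n_snps, n_subjects = 100, 400
geno = rng.integers(0, 3, size=(n_snps, n_subjects))   # 0/1/2 genotypes
pheno = rng.integers(0, 2, size=n_subjects)            # 1 = case, 0 = control

def pair_score(g1, g2, y):
    """Training accuracy when each of the nine two-locus genotype cells
    is labelled with its majority class."""
    cell = g1 * 3 + g2
    correct = 0
    for c in range(9):
        mask = cell == c
        n1 = int(y[mask].sum())
        correct += max(n1, int(mask.sum()) - n1)
    return correct / y.size

best = max(combinations(range(n_snps), 2),
           key=lambda ij: pair_score(geno[ij[0]], geno[ij[1]], pheno))
print("highest-scoring SNP pair:", best)   # 4,950 pairs even at 100 SNPs
```

Even this toy scan is quadratic in the number of SNPs; at a million variants the pair count approaches 5 × 10¹¹, which is the scale of work being mapped onto GPUs here.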
The PyCBC search for compact binary mergers in the second run of Advanced LIGO
NASA Astrophysics Data System (ADS)
Dal Canton, Tito; PyCBC Team
2017-01-01
The PyCBC software implements a matched-filter search for gravitational-wave signals associated with mergers of compact binaries. During the first observing run of Advanced LIGO, it played a fundamental role in the discovery of the binary-black-hole merger signals GW150914, GW151226 and LVT151012. In preparation for Advanced LIGO's second run, PyCBC has been modified with the goal of increasing the sensitivity of the search, reducing its computational cost and expanding the explored parameter space. The ability to report signals with a latency of tens of seconds and to perform inference on the parameters of the detected signals has also been introduced. I will give an overview of PyCBC and present the new features and their impact.
Effect of wear of bearing surfaces on elastohydrodynamic lubrication of metal-on-metal hip implants.
Liu, F; Jin, Z M; Hirt, F; Rieker, C; Roberts, P; Grigoris, P
2005-09-01
The effect of geometry change of the bearing surfaces owing to wear on the elastohydrodynamic lubrication (EHL) of metal-on-metal (MOM) hip bearings has been investigated theoretically in the present study. A particular MOM Metasul bearing (Zimmer GmbH) was considered, and was tested in a hip simulator using diluted bovine serum. The geometry of the worn bearing surface was measured using a coordinate measuring machine (CMM) and was modelled theoretically on the assumption of spherical geometries determined from the maximum linear wear depth and the angle of the worn region. Both the CMM measurement and the theoretical calculation were directly incorporated into the elastohydrodynamic lubrication analysis. It was found that the geometry of the original machined bearing surfaces, particularly of the femoral head with its out-of-roundness, could lead to a large reduction in the predicted lubricant film thickness and an increase in pressure. However, these non-spherical deviations can be expected to be smoothed out quickly during the initial running-in period. For a given worn bearing surface, the predicted lubricant film thickness and pressure distribution, based on CMM measurement, were found to be in good overall agreement with those obtained with the theoretical model based on the maximum linear wear depth and the angle of the worn region. The gradual increase in linear wear during the running-in period resulted in an improvement in the conformity and consequently an increase in the predicted lubricant film thickness and a decrease in the pressure. For the Metasul bearing tested in an AMTI hip simulator, a maximum total linear wear depth of approximately 13 microm was measured after 1 million cycles and remained unchanged up to 5 million cycles. This resulted in a threefold increase in the predicted average lubricant film thickness. Consequently, it was possible for the Metasul bearing to achieve a fluid film lubrication regime during this period, and this was consistent with the minimal wear observed between 1 and 5 million cycles. However, under adverse in vivo conditions associated with start-up and stopping and depleted lubrication, wear of the bearing surfaces can still occur. An increase in the wear depth beyond a certain limit was shown to lead to the constriction of the lubricant film around the edge of the contact conjunction and consequently to a decrease in the lubricant film thickness. Continuous cycles of a running-in wear period followed by a steady state wear period may be inevitable in MOM hip implants. This highlights the importance of minimizing the wear in these devices during the initial running-in period, particularly from design and manufacturing points of view.
Spectral-Spatial Shared Linear Regression for Hyperspectral Image Classification.
Haoliang Yuan; Yuan Yan Tang
2017-04-01
Classification of the pixels in a hyperspectral image (HSI) is an important task and has been widely applied in practice. Its major challenge is high dimensionality combined with small sample size. To deal with this problem, many subspace learning (SL) methods have been developed to reduce the dimension of the pixels while preserving the important discriminant information. Motivated by the ridge linear regression (RLR) framework for SL, we propose a spectral-spatial shared linear regression method (SSSLR) for extracting the feature representation. Compared with RLR, our proposed SSSLR has the following two advantages. First, we utilize a convex set to explore the spatial structure for computing the linear projection matrix. Second, we utilize a shared structure learning model, formed by the original data space and a hidden feature space, to learn a more discriminant linear projection matrix for classification. An efficient iterative algorithm is proposed to optimize our method. Experimental results on two popular HSI data sets, i.e., Indian Pines and Salinas, demonstrate that our proposed method outperforms many SL methods.
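For orientation, the sketch below shows the plain RLR baseline that SSSLR builds on: a ridge-penalized projection from spectra to one-hot class indicators, with classification by the largest projected score. A minimal sketch assuming numpy; it omits the spatial convex set and the shared hidden space that distinguish SSSLR.

```python
import numpy as np

def rlr_projection(X, y, n_classes, lam=1.0):
    """X: (n_samples, d) pixel spectra; y: integer labels.
    Returns the ridge-regression projection W of shape (d, n_classes)."""
    Y = np.eye(n_classes)[y]                        # one-hot label matrix
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Hypothetical usage on random stand-in data
X = np.random.randn(200, 50)                        # 200 pixels, 50 bands
y = np.random.randint(0, 3, 200)                    # 3 classes
W = rlr_projection(X, y, n_classes=3)
pred = np.argmax(X @ W, axis=1)                     # classify by max score
```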
DNA Sequencing Using capillary Electrophoresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. Barry Karger
2011-05-09
The overall goal of this program was to develop capillary electrophoresis as the tool to be used to sequence the Human Genome for the first time. Our program was part of the Human Genome Project. In this work, we were highly successful, and the replaceable polymer we developed, linear polyacrylamide, was used by the DOE sequencing lab in California to sequence a significant portion of the human genome using the MegaBase multiple capillary array electrophoresis instrument. In this final report, we summarize our efforts and success. We began our work by separating double-stranded oligonucleotides by capillary electrophoresis using cross-linked polyacrylamide gels in fused silica capillaries. This work showed the potential of the methodology. However, preparation of such cross-linked gel capillaries was difficult, with poor reproducibility, and, even more importantly, the columns were not very stable. We improved stability by using non-cross-linked linear polyacrylamide. Here, the entangled linear chains could move when osmotic pressure (e.g. sample injection) was imposed on the polymer matrix. This relaxation of the polymer dissipated the stress in the column. Our next advance was to use significantly lower concentrations of the linear polyacrylamide so that the polymer could be automatically blown out after each run and replaced with fresh linear polymer solution. In this way, a new column was available for each analytical run. Finally, after testing many linear polymers, we selected linear polyacrylamide as the best matrix, as it was the most hydrophilic polymer available. Under our DOE program, we initially demonstrated the success of linear polyacrylamide in separating double-stranded DNA. We note that the method is used even today to assay the purity of double-stranded DNA fragments. Our focus, of course, was on the separation of single-stranded DNA for sequencing purposes. In one paper, we demonstrated the success of our approach in sequencing up to 500 bases. Other papers on sequencing up to this level were also published in the mid 1990's. A major interest of the sequencing community has always been read length. The longer the sequence read per run, the more efficient the process, as well as the ability to read repeat sequences. We therefore devoted a great deal of time to studying the factors influencing read length in capillary electrophoresis, including polymer type and molecular weight, capillary column temperature, applied electric field, etc. In our initial optimization, we were able to demonstrate, for the first time, the sequencing of over 1000 bases with 90% accuracy. The run required 80 minutes for separation. Sequencing of 1000 bases per column was next demonstrated on a multiple capillary instrument. Our studies revealed that linear polyacrylamide produced the longest read lengths because the hydrophilic single-stranded DNA had minimal interaction with the very hydrophilic linear polyacrylamide. Any interaction of the DNA with the polymer would lead to broader peaks and lower read length. Another important parameter was the molecular weight of the linear chains. High molecular weight (>1 MDa) was important to allow the long single-stranded DNA to reptate through the entangled polymer matrix. In an important paper, we showed an inverse emulsion method to reproducibly prepare linear polyacrylamide with an average molecular weight of 9 MDa. This approach was used in the polymer for sequencing the human genome.
Another critical factor in the successful use of capillary electrophoresis for sequencing was the sample preparation method. After the Sanger sequencing reaction, high concentrations of salts and dideoxynucleotides remained. Since the sample was introduced to the capillary column by electrokinetic injection, these salt ions would be preferentially injected into the column over the sequencing fragments, reducing the signal for longer fragments and hence the read length. In two papers, we examined the role of individual components from the sequencing reaction and then developed a protocol to reduce the deleterious salts. We demonstrated a robust method for achieving long-read-length DNA sequencing. Continuing our advances, we next demonstrated the sequencing of over 1000 bases in less than one hour with a base calling accuracy of between 98 and 99%. In this work, we implemented energy transfer dyes, which allowed cleaner differentiation of the four dye-labeled terminal nucleotides. In addition, we developed improved base calling software to help call bases when the separation was only minimal, as occurs at long read lengths. Another critical parameter we studied was column temperature. We demonstrated that read lengths improved as the column temperature was increased from room temperature to 60°C or 70°C. The higher temperature relaxed the DNA chains under the influence of the high electric field.
Additional Improvements to the NASA Lewis Ice Accretion Code LEWICE
NASA Technical Reports Server (NTRS)
Wright, William B.; Bidwell, Colin S.
1995-01-01
Due to the feedback of the user community, three major features have been added to the NASA Lewis ice accretion code LEWICE. These features include: first, further improvements to the numerics of the code so that more time steps can be run and so that the code is more stable; second, inclusion and refinement of the roughness prediction model described in an earlier paper; third, inclusion of multi-element trajectory and ice accretion capabilities to LEWICE. This paper will describe each of these advancements in full and make comparisons with the experimental data available. Further refinement of these features and inclusion of additional features will be performed as more feedback is received.
Photonic integrated circuits: new challenges for lithography
NASA Astrophysics Data System (ADS)
Bolten, Jens; Wahlbrink, Thorsten; Prinzen, Andreas; Porschatis, Caroline; Lerch, Holger; Giesecke, Anna Lena
2016-10-01
In this work we highlight routes towards the fabrication of photonic integrated circuits (PICs) and the challenges their fabrication poses for lithography: large differences in the dimensions of adjacent device features, non-Manhattan-type features, high aspect ratios and significant topographic steps, as well as tight lithographic requirements with respect to critical dimension control, line edge roughness and other key figures of merit, not only for very small but also for relatively large features. We present several ways these challenges are met in today's low-volume fabrication of PICs, including the concept of multi-project wafer runs and mix-and-match approaches, and discuss possible paths towards a real market uptake of PICs.
Is your prescription of distance running shoes evidence-based?
Richards, C E; Magin, P J; Callister, R
2009-03-01
To determine whether the current practice of prescribing distance running shoes featuring elevated cushioned heels and pronation control systems tailored to the individual's foot type is evidence-based. MEDLINE (1950-May 2007), CINAHL (1982-May 2007), EMBASE (1980-May 2007), PsychInfo (1806-May 2007), Cochrane Database of Systematic Reviews (2nd Quarter 2007), Cochrane Central Register of Controlled trials (2nd Quarter 2007), SPORTSDiscus (1985-May 2007) and AMED (1985-May 2007). English language articles were identified via keyword and medical subject headings (MeSH) searches of the above electronic databases. With these searches and the subsequent review process, controlled trials or systematic reviews were sought in which the study population included adult recreational or competitive distance runners, the exposure was distance running, the intervention evaluated was a running shoe with an elevated cushioned heel and pronation control systems individualised to the wearer's foot type, and the outcome measures included either running injury rates, distance running performance, osteoarthritis risk, physical activity levels, or overall health and wellbeing. The quality of these studies and their findings were then evaluated. No original research that met the study criteria was identified either directly or via the findings of the six systematic reviews identified. The prescription of this shoe type to distance runners is not evidence-based.
Regional sea level variability in a high-resolution global coupled climate model
NASA Astrophysics Data System (ADS)
Palko, D.; Kirtman, B. P.
2016-12-01
The prediction of trends at regional scales is essential in order to adapt to and prepare for the effects of climate change. However, GCMs are unable to make reliable predictions at regional scales. The prediction of local sea level trends is particularly critical. The main goal of this research is to utilize high-resolution (HR) (0.1° resolution in the ocean) coupled model runs of CCSM4 to analyze regional sea surface height (SSH) trends. Unlike typical, lower-resolution (1.0°) GCM runs, these HR runs resolve features in the ocean, like the Gulf Stream, which may have a large effect on regional sea level. We characterize the variability of regional SSH along the Atlantic coast of the US using tide gauge observations along with fixed radiative forcing runs of CCSM4 and HR interactive ensemble runs. The interactive ensemble couples an ensemble mean atmosphere with a single ocean realization. This coupling results in a 30% decrease in the strength of the Atlantic meridional overturning circulation; the HR interactive ensemble is therefore analogous to a HR hosing experiment. By characterizing the variability in these high-resolution GCM runs and observations, we seek to understand what processes influence coastal SSH along the Eastern Coast of the United States and to better predict future sea level rise.
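As a concrete example of the basic quantity being compared, the sketch below fits a least-squares linear trend to a synthetic monthly tide-gauge record. It assumes numpy; the series is a made-up stand-in for gauge or model SSH, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(1950, 2015, 1 / 12)                   # decimal years, monthly
ssh = (2.0 * (t - t[0])                             # 2 mm/yr trend
       + 30 * np.sin(2 * np.pi * t)                 # seasonal cycle, mm
       + rng.normal(0, 15, t.size))                 # noise, mm

slope, intercept = np.polyfit(t, ssh, 1)            # least-squares fit
print(f"estimated SSH trend: {slope:.2f} mm/yr")
```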
Lytton, William W; Neymotin, Samuel A; Hines, Michael L
2008-06-30
In an effort to design a simulation environment that is more similar to that of neurophysiology, we introduce a virtual slice setup in the NEURON simulator. The virtual slice setup runs continuously and permits parameter changes, including changes to synaptic weights and time course and to intrinsic cell properties. The virtual slice setup permits shocks to be applied at chosen locations and activity to be sampled intra- or extracellularly from chosen locations. By default, a summed population display is shown during a run to indicate the level of activity, and no states are saved. Simulations can run for hours of model time; it is therefore not practical to save all of the state variables. These, in any case, are primarily of interest at discrete times when experiments are being run: the simulation can be stopped momentarily at such times to save activity patterns. The virtual slice setup maintains an automated notebook recording shocks and parameter changes as well as user comments. We demonstrate how interaction with a continuously running simulation encourages experimental prototyping and can suggest additional dynamical features such as ligand wash-in and wash-out, alternatives to the typical instantaneous parameter change. The virtual slice setup currently uses event-driven cells and runs at approximately 2 min/h on a laptop.
Tavakoli, Paniz; Campbell, Kenneth
2016-10-01
A rarely occurring and highly relevant auditory stimulus occurring outside of the current focus of attention can cause a switching of attention. Such attention capture is often studied in oddball paradigms consisting of a frequently occurring "standard" stimulus which is changed at odd times to form a "deviant". The deviant may result in the capturing of attention. An auditory ERP, the P3a, is often associated with this process. Collecting a sufficient amount of data is, however, very time-consuming. A multi-feature "optimal" paradigm has been proposed, but it is not known whether it is appropriate for the study of attention capture. An optimal paradigm was run in which 6 different rare deviants (p = .08) were separated by a standard stimulus (p = .50), and the results were compared to those from 4 separate oddball paradigms. A large P3a was elicited by some of the deviants in the optimal paradigm but not by others. However, very similar results were observed when separate oddball paradigms were run. The present study indicates that the optimal paradigm provides a very time-saving method to study attention capture and the P3a. Copyright © 2016 Elsevier B.V. All rights reserved.
Use of statecharts in the modelling of dynamic behaviour in the ATLAS DAQ prototype-1
NASA Astrophysics Data System (ADS)
Croll, P.; Duval, P.-Y.; Jones, R.; Kolos, S.; Sari, R. F.; Wheeler, S.
1998-08-01
Many applications within the ATLAS DAQ prototype-1 system have complicated dynamic behaviour which can be successfully modelled in terms of states and transitions between states. Previously, state diagrams implemented as finite-state machines have been used. Although effective, they become ungainly as system size increases. Harel statecharts address this problem by implementing additional features such as hierarchy and concurrency. The CHSM object-oriented language system is freeware which implements Harel statecharts as concurrent, hierarchical, finite-state machines (CHSMs). An evaluation of this language system by the ATLAS DAQ group has shown it to be suitable for describing the dynamic behaviour of typical DAQ applications. The language is currently being used to model the dynamic behaviour of the prototype-1 run-control system. The design is specified by means of a CHSM description file, and C++ code is obtained by running the CHSM compiler on the file. In parallel with the modelling work, a code generator has been developed which translates statecharts, drawn using the StP CASE tool, into the CHSM language. C++ code, describing the dynamic behaviour of the run-control system, has been successfully generated directly from StP statecharts using the CHSM generator and compiler. The validity of the design was tested using the simulation features of the Statemate CASE tool.
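The hierarchy that statecharts add over flat finite-state machines can be sketched compactly. Below is a hedged, minimal Python illustration of hierarchical event handling, in which an event unhandled by a substate bubbles up to its parent; the state and event names are hypothetical, and this is neither the CHSM language nor the ATLAS run-control design.

```python
class State:
    """A statechart-style state: transitions not defined here are
    inherited from the parent state."""
    def __init__(self, name, parent=None, transitions=None):
        self.name, self.parent = name, parent
        self.transitions = transitions or {}    # event -> target state name

    def handle(self, event):
        s = self
        while s is not None:                    # bubble up the hierarchy
            if event in s.transitions:
                return s.transitions[event]
            s = s.parent
        return None                             # event ignored

running = State("running", transitions={"shutdown": "idle"})
paused = State("paused", parent=running, transitions={"resume": "running"})

# 'shutdown' is inherited from the parent instead of being redeclared,
# which is what keeps large hierarchical designs from becoming ungainly:
assert paused.handle("shutdown") == "idle"
assert paused.handle("resume") == "running"
```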
CrocoBLAST: Running BLAST efficiently in the age of next-generation sequencing.
Tristão Ramos, Ravi José; de Azevedo Martins, Allan Cézar; da Silva Delgado, Gabrielle; Ionescu, Crina-Maria; Ürményi, Turán Peter; Silva, Rosane; Koca, Jaroslav
2017-11-15
CrocoBLAST is a tool for dramatically speeding up BLAST+ execution on any computer. Alignments that would take days or weeks with NCBI BLAST+ can be run overnight with CrocoBLAST. Additionally, CrocoBLAST provides features critical for NGS data analysis, including: results identical to those of BLAST+; compatibility with any BLAST+ version; real-time information regarding calculation progress and remaining run time; access to partial alignment results; queueing, pausing, and resuming BLAST+ calculations without information loss. CrocoBLAST is freely available online, with ample documentation (webchem.ncbr.muni.cz/Platform/App/CrocoBLAST). No installation or user registration is required. CrocoBLAST is implemented in C, while the graphical user interface is implemented in Java. CrocoBLAST is supported under Linux and Windows, and can be run under Mac OS X in a Linux virtual machine. jkoca@ceitec.cz. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
NASA Astrophysics Data System (ADS)
Dong, Shaojiang; Sun, Dihua; Xu, Xiangyang; Tang, Baoping
2017-06-01
Extracting fault features from the vibration signal of a space bearing is difficult because of several kinds of noise: a running trend, high-frequency noise, and especially strong power line interference (50 Hz) and its harmonics introduced by the ground-based equipment that simulates the running space environment. This article proposes a combined method to eliminate them. First, empirical mode decomposition (EMD) is used to remove the running trend of the signal, which otherwise degrades the accuracy of subsequent processing. A morphological filter is then used to suppress the high-frequency noise. Finally, the components and characteristics of the power line interference are analyzed and, based on these characteristics, a revised blind source separation model is used to remove it. Analysis of simulations and a practical application suggests that the proposed method can effectively eliminate these noise sources.
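A sketch of such a de-noising chain on a synthetic signal, with standard stand-ins where the abstract's specific methods are not reproduced: polynomial detrending in place of EMD, no morphological step, and zero-phase IIR notch filters at 50 Hz and two harmonics in place of the revised blind source separation model. Assumes numpy and scipy; the sampling rate and tone frequencies are made up.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt, detrend

fs = 2000.0                                  # sampling rate, Hz (assumed)
t = np.arange(0, 5, 1 / fs)
x = np.sin(2 * np.pi * 157 * t)              # bearing fault tone (synthetic)
x += 0.5 * t                                 # slow running trend
for k in (1, 2, 3):                          # mains interference + harmonics
    x += 0.8 * np.sin(2 * np.pi * 50 * k * t)

y = detrend(x)                               # stand-in for EMD trend removal
for k in (1, 2, 3):
    b, a = iirnotch(w0=50.0 * k, Q=30.0, fs=fs)
    y = filtfilt(b, a, y)                    # zero-phase notch per harmonic
```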
Run Environment and Data Management for Earth System Models
NASA Astrophysics Data System (ADS)
Widmann, H.; Lautenschlager, M.; Fast, I.; Legutke, S.
2009-04-01
The Integrating Model and Data Infrastructure (IMDI) developed and maintained by the Model and Data Group (M&D) comprises the Standard Compile Environment (SCE) and the Standard Run Environment (SRE). The IMDI software has a modular design, which allows a suite of model components to be combined and coupled, and the tasks to be executed independently and on various platforms. Furthermore, the modular structure enables extension to new model combinations and new platforms. The SRE presented here enables the configuration and performance of earth system model experiments, from model integration up to storage and visualization of data. We focus on recently implemented tasks such as synchronous database filling, graphical monitoring and automatic generation of metadata in XML forms during run time. We also address the capability to run experiments in heterogeneous IT environments with different computing systems for model integration, data processing and storage. These features are demonstrated for model configurations and on platforms used in current or upcoming projects, e.g. MILLENNIUM or IPCC AR5.
Inference of gene regulatory networks from genome-wide knockout fitness data
Wang, Liming; Wang, Xiaodong; Arkin, Adam P.; Samoilov, Michael S.
2013-01-01
Motivation: Genome-wide fitness is an emerging type of high-throughput biological data generated for individual organisms by creating libraries of knockouts, subjecting them to broad ranges of environmental conditions, and measuring the resulting clone-specific fitnesses. Since fitness is an organism-scale measure of gene regulatory network behaviour, it may offer certain advantages when insights into such phenotypical and functional features are of primary interest over individual gene expression. Previous works have shown that genome-wide fitness data can be used to uncover novel gene regulatory interactions, when compared with results of more conventional gene expression analysis. Yet, to date, few algorithms have been proposed for systematically using genome-wide mutant fitness data for gene regulatory network inference. Results: In this article, we describe a model and propose an inference algorithm for using fitness data from knockout libraries to identify underlying gene regulatory networks. Unlike most prior methods, the presented approach captures not only structural, but also dynamical and non-linear nature of biomolecular systems involved. A state–space model with non-linear basis is used for dynamically describing gene regulatory networks. Network structure is then elucidated by estimating unknown model parameters. Unscented Kalman filter is used to cope with the non-linearities introduced in the model, which also enables the algorithm to run in on-line mode for practical use. Here, we demonstrate that the algorithm provides satisfying results for both synthetic data as well as empirical measurements of GAL network in yeast Saccharomyces cerevisiae and TyrR–LiuR network in bacteria Shewanella oneidensis. Availability: MATLAB code and datasets are available to download at http://www.duke.edu/∼lw174/Fitness.zip and http://genomics.lbl.gov/supplemental/fitness-bioinf/ Contact: wangx@ee.columbia.edu or mssamoilov@lbl.gov Supplementary information: Supplementary data are available at Bioinformatics online PMID:23271269
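A minimal sketch, assuming numpy, of the kind of non-linear state-space model described above: gene states evolve through a nonlinear basis weighted by an unknown matrix whose sparsity pattern encodes the regulatory edges. The basis choice, noise levels, and edges below are hypothetical, and the UKF-based estimation itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
n_genes, T = 5, 200
A = np.zeros((n_genes, n_genes))
A[0, 1], A[2, 0], A[3, 4] = 0.8, -0.6, 0.5   # nonzero entries = regulatory edges

def basis(x):
    return np.tanh(x)                        # nonlinear basis function

x = rng.normal(size=n_genes)
traj = []
for t in range(T):
    x = A @ basis(x) + 0.05 * rng.normal(size=n_genes)   # state transition
    traj.append(x + 0.1 * rng.normal(size=n_genes))      # noisy observation
traj = np.asarray(traj)

# Inference would estimate A from traj (e.g. with an unscented Kalman
# filter); the recovered sparsity pattern of A is read out as the network.
```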
NASA Technical Reports Server (NTRS)
Aldrich, S. A.; Aldrich, F. T.; Rudd, R. D.
1969-01-01
Weather satellite imagery provides the only routinely available orbital imagery depicting the high latitudes. Although resolution is low on this imagery, it is believed that a major natural feature, notably linear in expression, should be mappable on it. The transition zone from forest to tundra, the ecotone, is such a feature. Locational correlation is herein established between a linear signature on the imagery and several ground truth positions of the ecotone in Canada.
Faults on Skylab imagery of the Salton Trough area, Southern California
NASA Technical Reports Server (NTRS)
Merifield, P. M.; Lamar, D. L. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Large segments of the major high angle faults in the Salton Trough area are readily identifiable in Skylab images. Along active faults, distinctive topographic features such as scarps and offset drainage, and vegetation differences due to ground water blockage in alluvium are visible. Other fault-controlled features along inactive as well as active faults visible in Skylab photography include straight mountain fronts, linear valleys, and lithologic differences producing contrasting tone, color or texture. A northwestern extension of a fault in the San Andreas set, is postulated by the regional alignment of possible fault-controlled features. The suspected fault is covered by Holocene deposits, principally windblown sand. A northwest trending tonal change in cultivated fields across Mexicali Valley is visible on Skylab photos. Surface evidence for faulting was not observed; however, the linear may be caused by differences in soil conditions along an extension of a segment of the San Jacinto fault zone. No evidence of faulting could be found along linears which appear as possible extensions of the Substation and Victory Pass faults, demonstrating that the interpretation of linears as faults in small scale photography must be corroborated by field investigations.
Turning limited experimental information into 3D models of RNA.
Flores, Samuel Coulbourn; Altman, Russ B
2010-09-01
Our understanding of RNA functions in the cell is evolving rapidly. As for proteins, the detailed three-dimensional (3D) structure of RNA is often key to understanding its function. Although crystallography and nuclear magnetic resonance (NMR) can determine the atomic coordinates of some RNA structures, many 3D structures present technical challenges that make these methods difficult to apply. The great flexibility of RNA, its charged backbone, dearth of specific surface features, and propensity for kinetic traps all conspire with its long folding time, to challenge in silico methods for physics-based folding. On the other hand, base-pairing interactions (either in runs to form helices or isolated tertiary contacts) and motifs are often available from relatively low-cost experiments or informatics analyses. We present RNABuilder, a novel code that uses internal coordinate mechanics to satisfy user-specified base pairing and steric forces under chemical constraints. The code recapitulates the topology and characteristic L-shape of tRNA and obtains an accurate noncrystallographic structure of the Tetrahymena ribozyme P4/P6 domain. The algorithm scales nearly linearly with molecule size, opening the door to the modeling of significantly larger structures.
A predictive machine learning approach for microstructure optimization and materials design
NASA Astrophysics Data System (ADS)
Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; Agrawal, Ankit; Sundararaghavan, Veera; Choudhary, Alok
2015-06-01
This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, multi-objective design requirement and non-uniqueness of solutions. These challenges render traditional search-based optimization methods incompetent in terms of both searching efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. Experiments with five design problems that involve identification of microstructures that satisfy both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.
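The framework's three stages (random data generation, feature selection, classification) can be sketched with standard tools. A minimal sketch assuming numpy and scikit-learn; the feature count, the property constraint, and the model choices are hypothetical stand-ins for the paper's physics-based property models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.random((5000, 76))                     # random microstructure features
meets = X[:, 0] + X[:, 3] ** 2 > 1.0           # hypothetical property constraint

clf = make_pipeline(SelectKBest(f_classif, k=10),
                    RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(X, meets)                              # learn feasible vs infeasible

candidates = rng.random((100_000, 76))         # screen a much larger pool
good = candidates[clf.predict(candidates)]
print(f"{len(good)} candidate microstructures predicted feasible")
```

The design choice this illustrates is the inversion strategy: instead of searching the microstructure space directly, a classifier trained on forward-model evaluations cheaply screens large random pools for the feasible region.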
Features and functions of nonlinear spatial integration by retinal ganglion cells.
Gollisch, Tim
2013-11-01
Ganglion cells in the vertebrate retina integrate visual information over their receptive fields. They do so by pooling presynaptic excitatory inputs from typically many bipolar cells, which themselves collect inputs from several photoreceptors. In addition, inhibitory interactions mediated by horizontal cells and amacrine cells modulate the structure of the receptive field. In many models, this spatial integration is assumed to occur in a linear fashion. Yet, it has long been known that spatial integration by retinal ganglion cells also incurs nonlinear phenomena. Moreover, several recent examples have shown that nonlinear spatial integration is tightly connected to specific visual functions performed by different types of retinal ganglion cells. This work discusses these advances in understanding the role of nonlinear spatial integration and reviews recent efforts to quantitatively study the nature and mechanisms underlying spatial nonlinearities. These new insights point towards a critical role of nonlinearities within ganglion cell receptive fields for capturing responses of the cells to natural and behaviorally relevant visual stimuli. In the long run, nonlinear phenomena of spatial integration may also prove important for implementing the actual neural code of retinal neurons when designing visual prostheses for the eye. Copyright © 2012 Elsevier Ltd. All rights reserved.
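A toy numerical illustration of why nonlinear spatial integration matters, assuming numpy: for a contrast-reversing stimulus, linear pooling over subunits cancels at both phases, while rectifying each subunit before summation yields a response at both phases, the classic frequency-doubling signature.

```python
import numpy as np

subunit_inputs = np.linspace(-1, 1, 8)          # signed stimulus on 8 subunits
for phase in (subunit_inputs, -subunit_inputs): # two grating phases
    linear = np.sum(phase)                      # linear pooling cancels to ~0
    nonlinear = np.sum(np.maximum(phase, 0.0))  # rectify each subunit first
    print(f"linear: {linear:+.2f}   rectified subunits: {nonlinear:+.2f}")
```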
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 2 is the User's Guide, and describes the program's general features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
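The alternating direction implicit (ADI) idea behind the time marching can be illustrated apart from the full Navier-Stokes system. Below is a hedged sketch for plain 2D diffusion with Dirichlet boundaries, using a factored backward-Euler step (one implicit tridiagonal sweep per direction); it assumes numpy/scipy and is not the Proteus solver itself.

```python
import numpy as np
from scipy.linalg import solve_banded

n, dt, nu = 64, 1e-3, 1.0
dx = 1.0 / n
r = nu * dt / dx**2
# Banded form of the tridiagonal operator (I - r*Dxx), Dirichlet boundaries
ab = np.zeros((3, n))
ab[0, 1:] = -r            # superdiagonal
ab[1, :] = 1 + 2 * r      # diagonal
ab[2, :-1] = -r           # subdiagonal

u = np.random.rand(n, n)  # initial field
for step in range(100):
    # x-sweep: solve along each row, then y-sweep along each column
    u = np.apply_along_axis(lambda v: solve_banded((1, 1), ab, v), 1, u)
    u = np.apply_along_axis(lambda v: solve_banded((1, 1), ab, v), 0, u)
```

Each multidimensional implicit step thus reduces to batches of cheap tridiagonal solves, which is what makes the fully-coupled ADI marching in codes of this kind tractable.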
Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 3D was developed to solve the three-dimensional, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This User's Guide describes the program's features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
Flux growth of Yb6.6Ir6Sn16 having mixed-valent ytterbium.
Peter, Sebastian C; Subbarao, Udumula; Rayaprol, Sudhindra; Martin, Joshua B; Balasubramanian, Mahalingam; Malliakas, Christos D; Kanatzidis, Mercouri G
2014-07-07
The compound Yb6.6Ir6Sn16 was obtained as single crystals in high yield from the reaction of Yb with Ir and Sn run in excess indium. Single-crystal X-ray diffraction analysis shows that Yb6.6Ir6Sn16 crystallizes in the tetragonal space group P42/nmc with a = b = 9.7105(7) Å and c = 13.7183(11) Å. The crystal structure is composed of a [Ir6Sn16] polyanionic network with cages in which the Yb atoms are embedded. The Yb sublattice features extensive vacancies on one crystallographic site. Magnetic susceptibility measurements on single crystals indicate Curie-Weiss law behavior below 100 K with no magnetic ordering down to 2 K. The magnetic moment within the linear region (below 100 K) is 3.21 μB/Yb, which is ∼70% of the expected value for a free Yb3+ ion, suggesting the presence of mixed-valent ytterbium atoms. X-ray absorption near edge spectroscopy confirms that Yb6.6Ir6Sn16 exhibits mixed valence. Resistivity and heat capacity measurements for Yb6.6Ir6Sn16 indicate non-Fermi liquid metallic behavior.
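The effective-moment extraction from the linear Curie-Weiss region can be sketched numerically. A minimal sketch assuming numpy/scipy and CGS conventions (μeff ≈ √(8C) μB for χ in emu/mol); the synthetic data below are stand-ins tuned so the recovered moment comes out near the reported 3.21 μB, not the measured susceptibility.

```python
import numpy as np
from scipy.optimize import curve_fit

def curie_weiss(T, C, theta):
    """Curie-Weiss susceptibility chi(T) = C / (T - theta)."""
    return C / (T - theta)

rng = np.random.default_rng(4)
T = np.linspace(5, 100, 60)                      # K, the linear region
chi = curie_weiss(T, 1.29, -10.0) + rng.normal(0, 1e-3, T.size)

(C, theta), _ = curve_fit(curie_weiss, T, chi, p0=(1.0, 0.0))
mu_eff = np.sqrt(8 * C)                          # mu_B per Yb, CGS convention
print(f"mu_eff = {mu_eff:.2f} mu_B, theta = {theta:.1f} K")
```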