Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
Algorithm-development activities at USF continue. The current priority is the algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data.
Derivation of a regional active-optical reflectance sensor corn algorithm
USDA-ARS's Scientific Manuscript database
Active-optical reflectance sensor (AORS) algorithms developed for in-season corn (Zea mays L.) N management have traditionally been derived using sub-regional scale information. However, studies have shown these previously developed AORS algorithms are not consistently accurate when used on a region...
Physical environment virtualization for human activities recognition
NASA Astrophysics Data System (ADS)
Poshtkar, Azin; Elangovan, Vinayak; Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen
2015-05-01
Human activity recognition research relies heavily on extensive datasets to verify and validate the performance of activity recognition algorithms. However, obtaining real datasets is expensive and highly time-consuming. A physics-based virtual simulation can accelerate the development of context-based human activity recognition algorithms and techniques by generating relevant training and testing videos that simulate diverse operational scenarios. In this paper, we discuss in detail the capabilities a virtual environment requires to serve as a test bed for evaluating and enhancing activity recognition algorithms. To demonstrate the advantages of virtual environment development, a newly developed virtual environment simulation modeling (VESM) environment is presented here to generate calibrated multisource imagery datasets suitable for the development and testing of recognition algorithms for context-based human activities. The VESM environment serves as a versatile test bed that generates large amounts of realistic data for training and testing sensor processing algorithms. To demonstrate the effectiveness of the VESM environment, we present various simulated scenarios and processed results that infer proper semantic annotations from the high-fidelity imagery data for human-vehicle activity recognition under different operational contexts.
Performance of Activity Classification Algorithms in Free-living Older Adults
Sasaki, Jeffer Eidi; Hickey, Amanda; Staudenmayer, John; John, Dinesh; Kent, Jane A.; Freedson, Patty S.
2015-01-01
Purpose: To compare activity type classification rates of machine learning algorithms trained on laboratory versus free-living accelerometer data in older adults. Methods: Thirty-five older adults (21 women and 14 men; 70.8 ± 4.9 yr) performed selected activities in the laboratory while wearing three ActiGraph GT3X+ activity monitors (dominant hip, wrist, and ankle). Monitors were initialized to collect raw acceleration data at a sampling rate of 80 Hz. Fifteen of the participants also wore the GT3X+ in free-living settings and were directly observed for 2-3 hours. Time- and frequency-domain features from the acceleration signals of each monitor were used to train Random Forest (RF) and Support Vector Machine (SVM) models to classify five activity types: sedentary, standing, household, locomotion, and recreational activities. All algorithms were trained on lab data (RFLab and SVMLab) and free-living data (RFFL and SVMFL) using 20 s signal sampling windows. Classification accuracy rates of both types of algorithms were tested on free-living data using a leave-one-out technique. Results: Overall classification accuracy rates for the algorithms developed from lab data ranged from 49% (wrist) to 55% (ankle) for the SVMLab algorithms and from 49% (wrist) to 54% (ankle) for the RFLab algorithms. The classification accuracy rates for the SVMFL and RFFL algorithms ranged from 58% (wrist) to 69% (ankle) and from 61% (wrist) to 67% (ankle), respectively. Conclusion: Our algorithms developed on free-living accelerometer data were more accurate in classifying activity type in free-living older adults than our algorithms developed on laboratory accelerometer data. Future studies should consider using free-living accelerometer data to train machine-learning algorithms in older adults. PMID:26673129
Performance of Activity Classification Algorithms in Free-Living Older Adults.
Sasaki, Jeffer Eidi; Hickey, Amanda M; Staudenmayer, John W; John, Dinesh; Kent, Jane A; Freedson, Patty S
2016-05-01
The objective of this study is to compare activity type classification rates of machine learning algorithms trained on laboratory versus free-living accelerometer data in older adults. Thirty-five older adults (21 females and 14 males, 70.8 ± 4.9 yr) performed selected activities in the laboratory while wearing three ActiGraph GT3X+ activity monitors (on the dominant hip, wrist, and ankle; ActiGraph, LLC, Pensacola, FL). Monitors were initialized to collect raw acceleration data at a sampling rate of 80 Hz. Fifteen of the participants also wore the GT3X+ in free-living settings and were directly observed for 2-3 h. Time- and frequency-domain features from acceleration signals of each monitor were used to train random forest (RF) and support vector machine (SVM) models to classify five activity types: sedentary, standing, household, locomotion, and recreational activities. All algorithms were trained on laboratory data (RFLab and SVMLab) and free-living data (RFFL and SVMFL) using 20-s signal sampling windows. Classification accuracy rates of both types of algorithms were tested on free-living data using a leave-one-out technique. Overall classification accuracy rates for the algorithms developed from laboratory data ranged from 49% (wrist) to 55% (ankle) for the SVMLab algorithms and from 49% (wrist) to 54% (ankle) for the RFLab algorithms. The classification accuracy rates for the SVMFL and RFFL algorithms ranged from 58% (wrist) to 69% (ankle) and from 61% (wrist) to 67% (ankle), respectively. Our algorithms developed on free-living accelerometer data were more accurate in classifying activity type in free-living older adults than our algorithms developed on laboratory accelerometer data. Future studies should consider using free-living accelerometer data to train machine learning algorithms in older adults.
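A minimal sketch of the windowed feature-extraction and classification pipeline these two records describe, assuming synthetic tri-axial data in place of the GT3X+ recordings; the feature set, labels, and window handling are illustrative, not the authors':

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, fs=80, win_s=20):
    """Split raw acceleration (n_samples, 3) into 20-s windows and compute
    simple time- and frequency-domain features per window."""
    win = fs * win_s
    feats = []
    for i in range(len(signal) // win):
        seg = signal[i * win:(i + 1) * win]
        mag = np.linalg.norm(seg, axis=1)            # vector magnitude
        spec = np.abs(np.fft.rfft(mag - mag.mean()))
        dom_hz = np.argmax(spec) * fs / win          # dominant frequency
        feats.append([mag.mean(), mag.std(), np.percentile(mag, 90), dom_hz])
    return np.array(feats)

# Hypothetical stand-in data: one hour of tri-axial acceleration at 80 Hz,
# with a random label per window (5 classes: sedentary ... recreational).
rng = np.random.default_rng(0)
acc = rng.normal(0.0, 1.0, size=(80 * 3600, 3))
X = window_features(acc)
y = rng.integers(0, 5, size=len(X))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

A leave-one-subject-out evaluation, as used in the study, would fit the model on all but one participant's windows and score on the held-out participant.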
Acoustic change detection algorithm using an FM radio
NASA Astrophysics Data System (ADS)
Goldman, Geoffrey H.; Wolfe, Owen
2012-06-01
The U.S. Army is interested in developing low-cost, low-power, non-line-of-sight sensors for monitoring human activity. One modality that is often overlooked is active acoustics using sources of opportunity such as speech or music. Active acoustics can be used to detect human activity by generating acoustic images of an area at different times, then testing for changes among the imagery. A change detection algorithm was developed to detect physical changes in a building, such as a door changing position or a large box being moved, using acoustic sources of opportunity. The algorithm is based on cross-correlating the acoustic signals measured by two microphones. The performance of the algorithm was demonstrated using data generated with a hand-held FM radio as a sound source and two microphones. The algorithm could detect a door being opened in a hallway.
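The cross-correlation comparison at the heart of this detector can be sketched directly; the acoustic-imaging step and the paper's detection statistics are not reproduced, and the threshold below is an arbitrary placeholder:

```python
import numpy as np

def xcorr_signature(mic_a, mic_b, max_lag=2000):
    """Normalized cross-correlation between two microphones over a lag
    window; peaks correspond to propagation paths in the room."""
    a = (mic_a - mic_a.mean()) / (mic_a.std() + 1e-12)
    b = (mic_b - mic_b.mean()) / (mic_b.std() + 1e-12)
    full = np.correlate(a, b, mode="full") / len(a)
    mid = len(full) // 2
    return full[mid - max_lag:mid + max_lag + 1]

def changed(sig_before, sig_after, threshold=0.2):
    """Flag a physical change (e.g., a door moved) when signatures
    measured at two different times diverge."""
    return np.linalg.norm(sig_after - sig_before) > threshold
```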
Infrared Algorithm Development for Ocean Observations with EOS/MODIS
NASA Technical Reports Server (NTRS)
Brown, Otis B.
1997-01-01
Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared measurements. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, development of experimental instrumentation, and participation in MODIS (project) related activities. Activities in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, field campaigns, analysis of field data, and participation in MODIS meetings.
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Recognition of military-specific physical activities with body-fixed sensors.
Wyss, Thomas; Mäder, Urs
2010-11-01
The purpose of this study was to develop and validate an algorithm for recognizing military-specific, physically demanding activities using body-fixed sensors. To develop the algorithm, the first group of study participants (n = 15) wore body-fixed sensors capable of measuring acceleration, step frequency, and heart rate while completing six military-specific activities: walking, marching with a backpack, lifting and lowering loads, lifting and carrying loads, digging, and running. The accuracy of the algorithm was tested on these isolated activities in a laboratory setting (n = 18) and in the context of the daily military training routine (n = 24). The overall recognition rates during isolated activities and during daily military routine activities were 87.5% and 85.5%, respectively. We conclude that the algorithm adequately recognized six military-specific physical activities based on sensor data alone, both in a laboratory setting and in the military training environment. By recognizing the type of physical activity, this objective method provides additional information for military job descriptions.
Bouchard, M
2001-01-01
In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration have been published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest-descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches are sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
NASA Astrophysics Data System (ADS)
Ayyad, Yassid; Mittig, Wolfgang; Bazin, Daniel; Beceiro-Novo, Saul; Cortesi, Marco
2018-02-01
The three-dimensional reconstruction of particle tracks in a time projection chamber is a challenging task that requires advanced classification and fitting algorithms. In this work, we have developed and implemented a novel algorithm based on the Random Sample Consensus Model (RANSAC). RANSAC is used to classify tracks, including pile-up, to remove uncorrelated noise hits, and to reconstruct the vertex of the reaction. The algorithm, developed within the Active Target Time Projection Chamber (AT-TPC) framework, was tested and validated by analyzing the ⁴He+⁴He reaction. Results, performance, and quality of the proposed algorithm are presented and discussed in detail.
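The classification step can be illustrated with a bare-bones RANSAC line fit that separates a straight track's inliers from uncorrelated noise hits in a 3D point cloud; this is a sketch, not the AT-TPC implementation, which also handles curved tracks, pile-up, and vertex reconstruction:

```python
import numpy as np

def ransac_line_3d(points, n_iter=200, tol=2.0, seed=0):
    """RANSAC for TPC hits: repeatedly sample two points, count the hits
    within `tol` of the candidate line, and keep the best consensus set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1 = points[rng.choice(len(points), 2, replace=False)]
        d = p1 - p0
        if np.linalg.norm(d) == 0:
            continue
        d = d / np.linalg.norm(d)
        v = points - p0
        # Perpendicular distance of every hit to the candidate line.
        dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers      # noise hits are the complement
```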
NASA Astrophysics Data System (ADS)
Strippoli, L. S.; Gonzalez-Arjona, D. G.
2018-04-01
GMV has worked extensively on activities aimed at developing, validating, and verifying up to TRL-6 advanced GNC and IP algorithms for Mars Sample Return rendezvous, working under different ESA contracts on the development of advanced algorithms for VBN sensors.
Webb, Samuel J; Hanser, Thierry; Howlin, Brendan; Krause, Paul; Vessey, Jonathan D
2014-03-25
A new algorithm has been developed to enable the interpretation of black box models. The algorithm is agnostic to the learning algorithm and open to all structure-based descriptors such as fragments, keys, and hashed fingerprints. It has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query. An output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation, in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is seen, as there is no change in the prediction; the interpretation is produced directly from the model's behaviour for the specific query. Models have been built using multiple learning algorithms, including support vector machine and random forest. The models were built on public Ames mutagenicity data, and a variety of fingerprint descriptors were used. These models produced good performance in both internal and external validation, with accuracies around 82%. The models were used to evaluate the interpretation algorithm. The interpretation revealed close links with understood mechanisms for Ames mutagenicity. This methodology allows for greater utilisation of the predictions made by black box models and can expedite further study based on the output of a (quantitative) structure-activity model. Additionally, the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development.
Active Learning Using Hint Information.
Li, Chun-Liang; Ferng, Chun-Sung; Lin, Hsuan-Tien
2015-08-01
The abundance of real-world data and the limited labeling budget call for active learning, an important learning paradigm for reducing human labeling effort. Many recently developed active learning algorithms consider both uncertainty and representativeness when making querying decisions. However, exploiting representativeness together with uncertainty usually requires tackling sophisticated and challenging learning tasks, such as clustering. In this letter, we propose a new active learning framework, called hinted sampling, which takes both uncertainty and representativeness into account in a simpler way. We design a novel active learning algorithm within the hinted sampling framework using an extended support vector machine. Experimental results validate that the novel active learning algorithm achieves better and more stable performance than state-of-the-art algorithms. We also show that the hinted sampling framework can improve another active learning algorithm designed from the transductive support vector machine.
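For contrast with hinted sampling, the plain uncertainty-sampling loop (the baseline that ignores representativeness) fits in a few lines; a sketch on synthetic data, with the initial pool seeded so both classes are present:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Seed the labeled pool with five examples of each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
unlabeled = [i for i in range(len(X)) if i not in set(labeled)]

for _ in range(20):                              # query 20 labels
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[unlabeled])[:, 1]
    idx = int(np.argmin(np.abs(probs - 0.5)))    # most uncertain candidate
    labeled.append(unlabeled.pop(idx))
```

Hinted sampling, by contrast, uses a pool of unlabeled hint examples within an extended support vector machine to fold representativeness into the same query criterion.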
Model and algorithmic framework for detection and correction of cognitive errors.
Feki, Mohamed Ali; Biswas, Jit; Tolstikov, Andrei
2009-01-01
This paper outlines an approach that we are taking for elder-care applications in the smart home, involving cognitive errors and their compensation. Our approach involves high-level modeling of the daily activities of the elderly by breaking down these activities into smaller units, which can then be automatically recognized at a low level by collections of sensors placed in the homes of the elderly. This separation allows us to employ plan recognition algorithms and systems at a high level, while developing stand-alone activity recognition algorithms and systems at a low level. It also allows the mixing and matching of multi-modality sensors of various kinds that support the same high-level requirement. Currently our plan recognition algorithms are still at a conceptual stage, whereas a number of low-level activity recognition algorithms and systems have been developed. Herein we present our model for plan recognition, providing a brief survey of the background literature. We also present some concrete results that we have achieved for activity recognition, emphasizing how these results are incorporated into the overall plan recognition system.
Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Conboy, Barbara (Technical Monitor)
1999-01-01
This separation has been logical thus far; however, as launch of AM-1 approaches, it must be recognized that many of these activities will shift emphasis from algorithm development to validation. For example, the second, third, and fifth bullets will become almost totally validation-focussed activities in the post-launch era, providing the core of our experimental validation effort. Work under the first bullet will continue into the post-launch time frame, driven in part by algorithm deficiencies revealed as a result of validation activities. Prior to the start of the 1999 fiscal year (FY99) we were requested to prepare a brief plan for our FY99 activities. This plan is included as Appendix 1. The present report describes the progress made on our planned activities.
Miyatake, Aya; Nishio, Teiji; Ogino, Takashi
2011-10-01
The purpose of this study is to develop a new calculation algorithm that satisfies the requirements for both accuracy and calculation time in simulating imaging of the proton-irradiated volume in a patient body in clinical proton therapy. The activity pencil beam (APB) algorithm, a new technique that applies the pencil beam algorithm generally used for proton dose calculations to the calculation of activity distributions, was developed to compute the activity distributions formed by positron emitter nuclei generated from target nuclear fragment reactions. In the APB algorithm, activity distributions are calculated using an activity pencil beam kernel, which is constructed from measured activity distributions in the depth direction and calculations in the lateral direction. ¹²C, ¹⁶O, and ⁴⁰Ca nuclei were determined to be the major target nuclei constituting a human body that are relevant to the calculation of activity distributions. In this study, "virtual positron emitter nuclei" were defined as the integral yield of the various positron emitter nuclei generated from each target nucleus by target nuclear fragment reactions with the irradiated proton beam. Compounds rich in these target nuclei, namely polyethylene, water (including some gelatin), and calcium oxide, were irradiated with a proton beam. Depth activity distributions of the virtual positron emitter nuclei generated in each compound by target nuclear fragment reactions were measured using a beam ON-LINE PET system mounted on a rotating gantry port (BOLPs-RGp). The measured activity distributions depend on depth or, in other words, on energy. The irradiated proton beam energies were 138, 179, and 223 MeV, and the measurement time was about 5 h, until the measured activity reached the background level. The activity pencil beam data were then constructed using the kernel, which combines the measured depth data with a lateral component that includes multiple Coulomb scattering approximated by a Gaussian function, and were used to calculate activity distributions. Measured depth activity distributions for every target nucleus and proton beam energy were obtained using BOLPs-RGp. The form of the depth activity distribution was verified, and the data were constructed in consideration of its time-dependent change. The time dependence of the activity distribution form could be represented by two half-lives. The Gaussian form of the lateral distribution of the activity pencil beam kernel was determined by the effect of multiple Coulomb scattering. Thus, activity pencil beam data incorporating time dependence were obtained in this study. Simulation of imaging of the proton-irradiated volume in a patient body using target nuclear fragment reactions was feasible with the developed APB algorithm taking time dependence into account. The results suggest that, using the APB algorithm, a simulation system for activity distributions with levels of both accuracy and calculation time appropriate for clinical use can be constructed.
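The kernel construction lends itself to a compact numerical sketch: spread each slice of a measured depth-activity profile with a depth-dependent Gaussian representing multiple Coulomb scattering. All inputs below are synthetic placeholders; the paper uses measured BOLPs-RGp depth profiles per target nucleus and a two-half-life time dependence, both omitted here:

```python
import numpy as np

def activity_distribution(depth_profile, sigma_mcs, grid=64, pixel=1.0):
    """Build a 3D activity map by spreading each depth slice laterally
    with a Gaussian kernel (multiple Coulomb scattering)."""
    x = (np.arange(grid) - grid / 2) * pixel
    xx, yy = np.meshgrid(x, x)
    vol = np.zeros((len(depth_profile), grid, grid))
    for z, (a_z, s_z) in enumerate(zip(depth_profile, sigma_mcs)):
        lateral = np.exp(-(xx**2 + yy**2) / (2 * s_z**2))
        lateral /= lateral.sum()          # conserve activity per slice
        vol[z] = a_z * lateral
    return vol

# Placeholder inputs: an activity-vs-depth curve peaking near the range
# end, and a scattering width that grows with depth.
depth = np.exp(-0.5 * ((np.arange(100) - 70) / 5.0) ** 2)
sigma = 1.0 + 0.02 * np.arange(100)
vol = activity_distribution(depth, sigma)
```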
Problem Solving Techniques for the Design of Algorithms.
ERIC Educational Resources Information Center
Kant, Elaine; Newell, Allen
1984-01-01
Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…
System development of the Screwworm Eradication Data System (SEDS) algorithm
NASA Technical Reports Server (NTRS)
Arp, G.; Forsberg, F.; Giddings, L.; Phinney, D.
1976-01-01
The use of remotely sensed data in the eradication of the screwworm, and in the study of the role of weather in the activity and development of the screwworm fly, is reported. As a result, the Screwworm Eradication Data System (SEDS) algorithm was developed.
Li, Jun-qing; Pan, Quan-ke; Mao, Kun
2014-01-01
A hybrid algorithm combining particle swarm optimization (PSO) and iterated local search (ILS) is proposed for solving the hybrid flowshop scheduling (HFS) problem with preventive maintenance (PM) activities. In the proposed algorithm, different crossover and mutation operators are investigated. In addition, an efficient multiple-insert mutation operator is developed to enhance the searching ability of the algorithm. Furthermore, an ILS-based local search procedure is embedded in the algorithm to improve its exploitation ability. The experimental parameters of the canonical PSO are tuned in detail. The proposed algorithm is tested on variations of the 77 benchmark problems of Carlier and Néron. Detailed comparisons with existing efficient algorithms, including hGA, ILS, PSO, and IG, verify the efficiency and effectiveness of the proposed algorithm. PMID:24883414
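For readers unfamiliar with the PSO half of the hybrid, the canonical velocity/position update is shown below on a continuous test function; the paper's permutation encoding, crossover and multiple-insert mutation operators, and the embedded ILS are not reproduced:

```python
import numpy as np

def pso(f, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO over [-5, 5]^dim: each particle is pulled toward its
    personal best and the global best with random weights."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -5, 5)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g

best = pso(lambda p: float(np.sum(p**2)))   # sphere test function
```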
Utilization of Ancillary Data Sets for SMAP Algorithm Development and Product Generation
NASA Technical Reports Server (NTRS)
O'Neill, P.; Podest, E.; Njoku, E.
2011-01-01
Algorithms being developed for the Soil Moisture Active Passive (SMAP) mission require a variety of both static and ancillary data. The selection of the most appropriate source for each ancillary data parameter is driven by a number of considerations, including accuracy, latency, availability, and consistency across all SMAP products and with SMOS (Soil Moisture Ocean Salinity). It is anticipated that initial selection of all ancillary datasets, which are needed for ongoing algorithm development activities on the SMAP algorithm testbed at JPL, will be completed within the year. These datasets will be updated as new or improved sources become available, and all selections and changes will be documented for the benefit of the user community. Wise choices in ancillary data will help to enable SMAP to provide new global measurements of soil moisture and freeze/thaw state at the targeted accuracy necessary to tackle hydrologically-relevant societal issues.
Current Status of Japanese Global Precipitation Measurement (GPM) Research Project
NASA Astrophysics Data System (ADS)
Kachi, Misako; Oki, Riko; Kubota, Takuji; Masaki, Takeshi; Kida, Satoshi; Iguchi, Toshio; Nakamura, Kenji; Takayabu, Yukari N.
2013-04-01
The Global Precipitation Measurement (GPM) mission is led by the Japan Aerospace Exploration Agency (JAXA) and the National Aeronautics and Space Administration (NASA) in collaboration with many international partners, who will provide a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), developed by JAXA and the National Institute of Information and Communications Technology (NICT), and the GPM Microwave Imager (GMI), developed by NASA. The GPM Core Observatory is scheduled to be launched in early 2014. JAXA also provides the Global Change Observation Mission 1st - Water (GCOM-W1), named "SHIZUKU," as one of the constellation satellites. The SHIZUKU satellite was launched on 18 May 2012 from JAXA's Tanegashima Space Center, and public data release of the Advanced Microwave Scanning Radiometer 2 (AMSR2) on board the SHIZUKU satellite was planned as follows: Level 1 products in January 2013, and Level 2 products, including precipitation, in May 2013. The Japanese GPM research project conducts scientific activities on algorithm development, ground validation, and application research, including production of research products. In addition, we promote collaborative studies in Japan and Asian countries, and public relations activities to broaden the pool of potential users of satellite precipitation products. In the pre-launch phase, most of our activities are focused on algorithm development and the ground validation related to it. For the GPM standard products, JAXA develops the DPR Level 1 algorithm, and the NASA-JAXA Joint Algorithm Team develops the DPR Level 2 and the DPR-GMI combined Level 2 algorithms. JAXA also develops the Global Rainfall Map product as a national product, distributing an hourly, 0.1-degree horizontal resolution rainfall map. All standard algorithms, including the Japan-US joint algorithms, will be reviewed by the Japan-US Joint Precipitation Measuring Mission (PMM) Science Team (JPST) before release. The DPR Level 2 algorithm has been developed by the DPR Algorithm Team, led by Japan, under the NASA-JAXA Joint Algorithm Team. The Level 2 algorithms will provide KuPR-only products, KaPR-only products, and dual-frequency precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright band height. At-launch code was developed in December 2012. In addition, JAXA and NASA have provided synthetic DPR L1 data, and tests have been performed using them. The Japanese Global Rainfall Map algorithm for the GPM mission has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping of Precipitation (GSMaP) project, which was sponsored by the Japan Science and Technology Agency (JST) under the Core Research for Evolutional Science and Technology (CREST) framework between 2002 and 2007. The GSMaP near-real-time version and reanalysis version have been in operation at JAXA, with browse images and binary data available at the GSMaP web site (http://sharaku.eorc.jaxa.jp/GSMaP/). The GSMaP algorithm for GPM is developed in collaboration with the AMSR2 standard algorithm for the precipitation product, and their validation studies are closely related. As the JAXA GPM product, we will provide a 0.1-degree grid, hourly product for standard and near-real-time processing.
Outputs will include hourly rainfall, gauge-calibrated hourly rainfall, and several quality flags (satellite information, time information, and gauge quality) over global areas from 60°S to 60°N. The at-launch code of GSMaP for GPM is under development and will be delivered to the JAXA GPM Mission Operation System by April 2013. It will include several updates of the microwave imager and sounder algorithms and databases, and the introduction of rain-gauge correction.
NASA Astrophysics Data System (ADS)
Zhou, Yali; Zhang, Qizhi; Yin, Yixin
2015-05-01
In this paper, active control of impulsive noise with a symmetric α-stable (SαS) distribution is studied. A general step-size-normalized filtered-x least mean square (FxLMS) algorithm is developed based on an analysis of existing algorithms, with the Gaussian distribution function used to normalize the step size. Compared with existing algorithms, the proposed algorithm requires neither parameter selection and threshold estimation nor cost-function selection and complex gradient computation. Computer simulations suggest that the proposed algorithm is effective at attenuating SαS impulsive noise, and the algorithm has also been implemented in an experimental ANC system. Experimental results show that the proposed scheme performs well for SαS impulsive noise attenuation.
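A single-channel FxLMS loop with a Gaussian weighting of the step size is sketched below. The exact normalization used in the paper is not spelled out in the abstract, so the exponential factor is an assumption flagged in the comments, and the secondary-path estimate s_hat doubles as the true path for brevity:

```python
import numpy as np

def fxlms(x, d, s_hat, L=32, mu0=0.01, sigma=1.0):
    """Filtered-x LMS for single-channel ANC. x: reference signal,
    d: disturbance at the error microphone, s_hat: secondary-path
    impulse response estimate (also used as the true path here)."""
    M = len(s_hat)
    w = np.zeros(L)                          # controller weights
    xf = np.convolve(x, s_hat)[:len(x)]      # filtered reference
    y = np.zeros(len(x))                     # controller output history
    e = np.zeros(len(x))
    for n in range(max(L, M), len(x)):
        y[n] = w @ x[n - L + 1:n + 1][::-1]
        y_sec = s_hat @ y[n - M + 1:n + 1][::-1]   # after secondary path
        e[n] = d[n] - y_sec
        # Assumed Gaussian step-size weighting: large (impulsive) errors
        # shrink the step, keeping the update stable under SaS noise.
        mu = mu0 * np.exp(-e[n] ** 2 / (2 * sigma ** 2))
        w += mu * e[n] * xf[n - L + 1:n + 1][::-1]
    return e
```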
We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization there is increased mobility, leading to a higher amount of traffic-related activity on a global scale. ...
Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.
2005-01-01
The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent with the nonlinear algorithm compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
NASA Astrophysics Data System (ADS)
Zhang, Junzhi; Li, Yutong; Lv, Chen; Gou, Jinfang; Yuan, Ye
2017-03-01
The flexibility of the electrified powertrain system has a negative effect on both the cooperative control of regenerative and hydraulic braking and the active damping control performance. Meanwhile, the connections among sensors, controllers, and actuators are realized via network communication, i.e., the controller area network (CAN), which introduces time-varying delays and deteriorates the performance of the closed-loop control systems. The goal of this paper is therefore to develop a control algorithm that copes with all these challenges. To this end, models of the stochastic network-induced time-varying delays, based on a real in-vehicle network topology, and of a flexible electrified powertrain were first built. To further enhance the performance of active damping and of the cooperative control of regenerative and hydraulic braking, a time-varying delay compensation algorithm for electrified powertrain active damping during regenerative braking was developed based on a predictive scheme. The augmented system is constructed and its H∞ performance is analyzed. Based on this analysis, the control gains are derived by solving a nonlinear minimization problem. Simulations and hardware-in-the-loop (HIL) tests were carried out to validate the effectiveness of the developed algorithm. The test results show that the active damping and cooperative control performances are enhanced significantly.
Community-based benchmarking improves spike rate inference from two-photon calcium imaging data.
Berens, Philipp; Freeman, Jeremy; Deneux, Thomas; Chenkov, Nikolay; McColgan, Thomas; Speiser, Artur; Macke, Jakob H; Turaga, Srinivas C; Mineault, Patrick; Rupprecht, Peter; Gerhard, Stephan; Friedrich, Rainer W; Friedrich, Johannes; Paninski, Liam; Pachitariu, Marius; Harris, Kenneth D; Bolte, Ben; Machado, Timothy A; Ringach, Dario; Stone, Jasmine; Rogerson, Luke E; Sofroniew, Nicolas J; Reimer, Jacob; Froudarakis, Emmanouil; Euler, Thomas; Román Rosón, Miroslav; Theis, Lucas; Tolias, Andreas S; Bethge, Matthias
2018-05-01
In recent years, two-photon calcium imaging has become a standard tool to probe the function of neural circuits and to study computations in neuronal populations. However, the acquired signal is only an indirect measurement of neural activity due to the comparatively slow dynamics of fluorescent calcium indicators. Different algorithms for estimating spike rates from noisy calcium measurements have been proposed in the past, but it is an open question how far performance can be improved. Here, we report the results of the spikefinder challenge, launched to catalyze the development of new spike rate inference algorithms through crowd-sourcing. We present ten of the submitted algorithms which show improved performance compared to previously evaluated methods. Interestingly, the top-performing algorithms are based on a wide range of principles from deep neural networks to generative models, yet provide highly correlated estimates of the neural activity. The competition shows that benchmark challenges can drive algorithmic developments in neuroscience.
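To make the inference task concrete, here is a deliberately naive baseline: invert a first-order autoregressive calcium model and threshold the residual. The challenge's top entries (deep networks, generative models) are far more sophisticated, and the decay constant and threshold below are illustrative:

```python
import numpy as np

def infer_spikes(trace, gamma=0.95, threshold=2.0):
    """Naive spike inference: under c[t] = gamma*c[t-1] + s[t], the
    deconvolved residual s is the spike signal; keep large positive
    excursions as candidate spikes."""
    s = np.maximum(trace[1:] - gamma * trace[:-1], 0)   # non-negative
    z = (s - s.mean()) / (s.std() + 1e-12)
    return np.where(z > threshold)[0] + 1               # spike frames
```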
Alday, Erick A Perez; Colman, Michael A; Langley, Philip; Zhang, Henggui
2017-03-01
Atrial tachyarrhythmias, such as atrial fibrillation (AF), are characterised by irregular electrical activity in the atria, generally associated with erratic excitation underlain by re-entrant scroll waves, fibrillatory conduction of multiple wavelets, or rapid focal activity. Epidemiological studies have shown an increase in AF prevalence in the developed world associated with an ageing society, highlighting the need for effective treatment options. Catheter ablation therapy, commonly used in the treatment of AF, requires spatial information on atrial electrical excitation. The standard 12-lead electrocardiogram (ECG) provides a method for non-invasive identification of the presence of arrhythmia, due to irregularity in the ECG signal associated with atrial activation compared to sinus rhythm, but has limitations in providing specific spatial information. There is therefore a pressing need to develop novel methods to identify and locate the origin of arrhythmic excitation. Invasive methods provide direct information on atrial activity but may induce clinical complications. Non-invasive methods avoid such complications, but their development presents a greater challenge due to the indirect nature of the monitoring. Algorithms based on ECG signals in multiple leads (e.g., a 64-lead vest) may provide a viable approach. In this study, we used a biophysically detailed model of the human atria and torso to investigate the correlation between the morphology of the ECG signals from a 64-lead vest and the location of the origin of rapid atrial excitation arising from rapid focal activity and/or re-entrant scroll waves. A focus-location algorithm was then constructed from this correlation. The algorithm had success rates of 93% and 76% for correctly identifying the origin of focal and re-entrant excitation, respectively, with a spatial resolution of 40 mm. The general approach allows application to any multi-lead ECG system. This represents a significant extension of our previously developed algorithms for predicting the origins of AF associated with focal activity.
Phase Diversity and Polarization Augmented Techniques for Active Imaging
2007-03-01
…build up a system model for use in algorithm development.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jamieson, Kevin; Davis, IV, Warren L.
Active learning methods automatically adapt data collection by selecting the most informative samples in order to accelerate machine learning. Because of this, real-world testing and comparison of active learning algorithms requires collecting new datasets (adaptively), rather than simply applying algorithms to benchmark datasets, as is the norm in (passive) machine learning research. To facilitate the development, testing, and deployment of active learning for real applications, we have built an open-source software system for large-scale active learning research and experimentation. The system, called NEXT, provides a unique platform for real-world, reproducible active learning research. This paper details the challenges of building the system and demonstrates its capabilities with several experiments. The results show how experimentation can help expose strengths and weaknesses of active learning algorithms, in sometimes unexpected and enlightening ways.
Filtered-x generalized mixed norm (FXGMN) algorithm for active noise control
NASA Astrophysics Data System (ADS)
Song, Pucha; Zhao, Haiquan
2018-07-01
The standard adaptive filtering algorithm with a single error norm exhibits a slow convergence rate and poor noise reduction performance in certain environments. To overcome this drawback, a filtered-x generalized mixed norm (FXGMN) algorithm for active noise control (ANC) systems is proposed. The FXGMN algorithm uses a convex mixture of lp and lq norms as its cost function, so that it can be viewed as a generalized version of most existing adaptive filtering algorithms, reducing to a specific algorithm for particular parameter choices. In particular, it can be used for ANC in Gaussian and non-Gaussian noise environments (including impulsive noise with a symmetric α-stable (SαS) distribution). To further enhance performance, namely convergence speed and noise reduction, a convex combination of FXGMN algorithms (C-FXGMN) is presented. Moreover, the computational complexity of the proposed algorithms is analyzed, and a stability condition is provided. Simulation results show that the proposed FXGMN and C-FXGMN algorithms achieve faster convergence and higher noise reduction than other existing algorithms under various noise input conditions, and the C-FXGMN algorithm outperforms the FXGMN.
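The mixed-norm idea amounts to a one-line change in the weight update of an FxLMS-style loop (such as the sketch given earlier in this listing): differentiate J = lam*|e|^p + (1-lam)*|e|^q with respect to the weights. A sketch only; the paper's parameter schedules and the convex-combination (C-FXGMN) layer are omitted:

```python
import numpy as np

def gmn_step(w, e, xf_buf, mu=0.001, lam=0.5, p=2.0, q=1.0):
    """One filtered-x generalized-mixed-norm update, from the stochastic
    gradient of J = lam*|e|**p + (1-lam)*|e|**q with e = d - y_sec.
    With lam = 1, p = 2 it reduces to the FxLMS step; a small q tempers
    the update under impulsive noise."""
    g = lam * p * abs(e) ** (p - 1) + (1 - lam) * q * abs(e) ** (q - 1)
    return w + mu * g * np.sign(e) * xf_buf
```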
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
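Greedy pursuit is easiest to see in its simplest member, orthogonal matching pursuit. The sketch below is not the SPIGH algorithm, which builds on subspace pursuit and adds hierarchy and MEG-specific structure; it only shows the family's core select-then-refit loop:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, re-fit on the support by least
    squares, and repeat k times."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```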
Bastian, Thomas; Maire, Aurélia; Dugas, Julien; Ataya, Abbas; Villars, Clément; Gris, Florence; Perrin, Emilie; Caritu, Yanis; Doron, Maeva; Blanc, Stéphane; Jallon, Pierre; Simon, Chantal
2015-03-15
"Objective" methods to monitor physical activity and sedentary patterns in free-living conditions are necessary to further our understanding of their impacts on health. In recent years, many software solutions capable of automatically identifying activity types from portable accelerometry data have been developed, with promising results in controlled conditions, but virtually no reports on field tests. An automatic classification algorithm initially developed using laboratory-acquired data (59 subjects engaging in a set of 24 standardized activities) to discriminate between 8 activity classes (lying, slouching, sitting, standing, walking, running, and cycling) was applied to data collected in the field. Twenty volunteers equipped with a hip-worn triaxial accelerometer performed at their own pace an activity set that included, among others, activities such as walking the streets, running, cycling, and taking the bus. Performances of the laboratory-calibrated classification algorithm were compared with those of an alternative version of the same model including field-collected data in the learning set. Despite good results in laboratory conditions, the performances of the laboratory-calibrated algorithm (assessed by confusion matrices) decreased for several activities when applied to free-living data. Recalibrating the algorithm with data closer to real-life conditions and from an independent group of subjects proved useful, especially for the detection of sedentary behaviors while in transports, thereby improving the detection of overall sitting (sensitivity: laboratory model = 24.9%; recalibrated model = 95.7%). Automatic identification methods should be developed using data acquired in free-living conditions rather than data from standardized laboratory activity sets only, and their limits carefully tested before they are used in field studies. Copyright © 2015 the American Physiological Society.
Distilling the Verification Process for Prognostics Algorithms
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai
2013-01-01
The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.
D.J. Nicolsky; V.E. Romanovsky; G.G. Panteleev
2008-01-01
A variational data assimilation algorithm is developed to reconstruct thermal properties, porosity, and the parametrization of the unfrozen water content for fully saturated soils. The algorithm is tested with simulated synthetic temperatures. The simulations are performed to determine the robustness and sensitivity of the algorithm in estimating soil properties from in-situ...
Borghese, Michael M; Janssen, Ian
2018-03-22
Children participate in four main types of physical activity: organized sport, active travel, outdoor active play, and curriculum-based physical activity. The objective of this study was to develop a valid approach that can be used to concurrently measure time spent in each of these types of physical activity. Two samples (sample 1: n = 50; sample 2: n = 83) of children aged 10-13 wore an accelerometer and a GPS watch continuously over 7 days. They also completed a log where they recorded the start and end times of organized sport sessions. Sample 1 also completed an outdoor time log where they recorded the times they went outdoors and a description of the outdoor activity. Sample 2 also completed a curriculum log where they recorded times they participated in physical activity (e.g., physical education) during class time. We describe the development of a measurement approach that can be used to concurrently assess the time children spend participating in specific types of physical activity. The approach uses a combination of data from accelerometers, GPS, and activity logs and relies on merging and then processing these data using several manual (e.g., data checks and cleaning) and automated (e.g., algorithms) procedures. In the new measurement approach time spent in organized sport is estimated using the activity log. Time spent in active travel is estimated using an existing algorithm that uses GPS data. Time spent in outdoor active play is estimated using an algorithm (with a sensitivity and specificity of 85%) that was developed using data collected in sample 1 and which uses all of the data sources. Time spent in curriculum-based physical activity is estimated using an algorithm (with a sensitivity of 78% and specificity of 92%) that was developed using data collected in sample 2 and which uses accelerometer data collected during class time. There was evidence of excellent intra- and inter-rater reliability of the estimates for all of these types of physical activity when the manual steps were duplicated. This novel measurement approach can be used to estimate the time that children participate in different types of physical activity.
NASA Astrophysics Data System (ADS)
Telban, Robert J.
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
Physical activity classification using the GENEA wrist-worn accelerometer.
Zhang, Shaoyan; Rowlands, Alex V; Murray, Peter; Hurst, Tina L
2012-04-01
Most accelerometer-based activity monitors are worn on the waist or lower back for assessment of habitual physical activity. Output is in arbitrary counts that can be classified by activity intensity according to published thresholds. The purpose of this study was to develop methods to classify physical activities into walking, running, household, or sedentary activities based on raw acceleration data from the GENEA (Gravity Estimator of Normal Everyday Activity) and compare classification accuracy from a wrist-worn GENEA with a waist-worn GENEA. Sixty participants (age = 49.4 ± 6.5 yr, body mass index = 24.6 ± 3.4 kg·m⁻²) completed an ordered series of 10-12 semistructured activities in the laboratory and outdoor environment. Throughout, three GENEA accelerometers were worn: one at the waist, one on the left wrist, and one on the right wrist. Acceleration data were collected at 80 Hz. Features obtained from both fast Fourier transform and wavelet decomposition were extracted, and machine learning algorithms were used to classify four types of daily activities: sedentary, household, walking, and running activities. The computational results demonstrated that the algorithm we developed can accurately classify certain types of daily activities, with high overall classification accuracy for both the waist-worn GENEA (0.99) and the wrist-worn GENEA (right wrist = 0.97, left wrist = 0.96). We have successfully developed algorithms suitable for use with wrist-worn accelerometers for detecting certain types of physical activities; the performance is comparable to that of waist-worn accelerometers for assessment of physical activity.
Hip and Wrist Accelerometer Algorithms for Free-Living Behavior Classification.
Ellis, Katherine; Kerr, Jacqueline; Godbole, Suneeta; Staudenmayer, John; Lanckriet, Gert
2016-05-01
Accelerometers are a valuable tool for objective measurement of physical activity (PA). Wrist-worn devices may improve compliance over standard hip placement, but more research is needed to evaluate their validity for measuring PA in free-living settings. Traditional cut-point methods for accelerometers can be inaccurate and need testing in free living with wrist-worn devices. In this study, we developed and tested the performance of machine learning (ML) algorithms for classifying PA types from both hip and wrist accelerometer data. Forty overweight or obese women (mean age = 55.2 ± 15.3 yr; BMI = 32.0 ± 3.7) wore two ActiGraph GT3X+ accelerometers (right hip, nondominant wrist; ActiGraph, Pensacola, FL) for seven free-living days. Wearable cameras captured ground truth activity labels. A classifier consisting of a random forest and hidden Markov model classified the accelerometer data into four activities (sitting, standing, walking/running, and riding in a vehicle). Free-living wrist and hip ML classifiers were compared with each other, with traditional accelerometer cut points, and with an algorithm developed in a laboratory setting. The ML classifier obtained average values of 89.4% and 84.6% balanced accuracy over the four activities using the hip and wrist accelerometer, respectively. In our data set with average values of 28.4 min of walking or running per day, the ML classifier predicted average values of 28.5 and 24.5 min of walking or running using the hip and wrist accelerometer, respectively. Intensity-based cut points and the laboratory algorithm significantly underestimated walking minutes. Our results demonstrate the superior performance of our PA-type classification algorithm, particularly in comparison with traditional cut points. Although the hip algorithm performed better, additional compliance achieved with wrist devices might justify using a slightly lower performing algorithm.
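The hidden Markov model stage can be sketched as Viterbi smoothing of the random forest's per-window class probabilities. The sticky uniform transition matrix below is an assumption for illustration; the study's transition structure is learned from data:

```python
import numpy as np

def viterbi_smooth(frame_probs, stay=0.95):
    """Smooth per-window class probabilities (n_windows, n_classes),
    e.g. from RandomForestClassifier.predict_proba, with an HMM whose
    transitions favor staying in the same activity."""
    n, k = frame_probs.shape
    T = np.full((k, k), (1 - stay) / (k - 1))
    np.fill_diagonal(T, stay)
    logp = np.log(frame_probs + 1e-12)
    logT = np.log(T)
    score = logp[0].copy()
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        cand = score[:, None] + logT              # (from-state, to-state)
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(k)] + logp[t]
    path = [int(np.argmax(score))]
    for t in range(n - 1, 0, -1):                 # backtrack
        path.append(int(back[t][path[-1]]))
    return np.array(path[::-1])
```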
A Human Activity Recognition System Using Skeleton Data from RGBD Sensors.
Cippitelli, Enea; Gasparrini, Samuele; Gambi, Ennio; Spinsante, Susanna
2016-01-01
The aim of Active and Assisted Living is to develop tools to promote the ageing in place of elderly people, and human activity recognition algorithms can help to monitor aged people in home environments. Different types of sensors can be used to address this task, and RGBD sensors, especially the ones used for gaming, are cost-effective and provide much information about the environment. This work proposes an activity recognition algorithm exploiting skeleton data extracted by RGBD sensors. The system is based on the extraction of key poses to compose a feature vector and on a multiclass Support Vector Machine to perform classification. Computation and association of key poses are carried out using a clustering algorithm, without the need for a learning algorithm. The proposed approach is evaluated on five publicly available datasets for activity recognition, showing promising results, especially when applied to the recognition of AAL-related actions. Finally, the current applicability of this solution in AAL scenarios and the future improvements needed are discussed.
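A compact sketch of the key-pose idea: cluster normalized skeleton frames with k-means and summarize a sequence by its pose-occupancy histogram, which can then feed a multiclass SVM. The joint layout, normalization, and pose count below are placeholders, and the paper composes its feature vector differently:

```python
import numpy as np
from sklearn.cluster import KMeans

def key_pose_histogram(skeleton_seq, n_poses=8):
    """skeleton_seq: (n_frames, n_joints, 3) array of joint positions.
    Returns a pose-occupancy histogram and the key poses themselves."""
    frames = skeleton_seq.reshape(len(skeleton_seq), -1)
    km = KMeans(n_clusters=n_poses, n_init=10, random_state=0).fit(frames)
    hist = np.bincount(km.labels_, minlength=n_poses).astype(float)
    return hist / hist.sum(), km.cluster_centers_
```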
Characterizing volcanic activity: Application of freely-available webcams
NASA Astrophysics Data System (ADS)
Dehn, J.; Harrild, M.; Webley, P. W.
2017-12-01
In recent years, freely available web-based cameras, or webcams, have become more widely available, allowing an increased level of monitoring at active volcanoes across the globe. While these cameras have been used extensively as qualitative tools, they provide a unique dataset for quantitative analyses of the changing behavior of the particular volcano within the camera's field of view. We focus on the multitude of these freely available webcams and present a new algorithm to detect changes in volcanic activity using nighttime webcam data. Our approach uses a quick, efficient, and fully automated algorithm to identify changes in webcam data in near real-time, including techniques such as edge detection, Gaussian mixture models, and temporal/spatial statistical tests, which are applied to each target image. Often the image metadata (exposure, gain settings, aperture, focal length, etc.) are unknown, so we developed our algorithm to identify the quantity of volcanically incandescent pixels, as well as the number of specific algorithm tests needed to detect thermal activity, instead of directly correlating brightness in the webcam to eruption temperatures. We compared our algorithm results to a manual analysis of webcam data for several volcanoes and determined a false detection rate of less than 3% for the automated approach. In our presentation, we describe the different tests integrated into our algorithm, lessons learned, and how we applied our method to several volcanoes across the North Pacific during its development and implementation. We will finish with a discussion of the global applicability of our approach and how to build a year-round, 24/7 tool that can be used as an additional data source for real-time analysis of volcanic activity.
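One ingredient of such a detector, counting candidate incandescent pixels in a nighttime frame with a simple statistical threshold, might look like the following Python sketch. The published algorithm layers several further tests (edge detection, Gaussian mixture models, temporal/spatial statistics); this shows only a hypothetical per-image step.

    import numpy as np

    def incandescent_pixel_count(frame, k=4.0):
        # frame: (H, W, 3) uint8 RGB nighttime webcam image
        red = frame[..., 0].astype(float)
        thresh = red.mean() + k * red.std()           # bright outliers for this scene
        hot = (red > thresh) & (red > frame[..., 2])  # red exceeding blue: thermal hint
        return int(hot.sum())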
An error-based micro-sensor capture system for real-time motion estimation
NASA Astrophysics Data System (ADS)
Yang, Lin; Ye, Shiwei; Wang, Zhibo; Huang, Zhipei; Wu, Jiankang; Kong, Yongmei; Zhang, Li
2017-10-01
A wearable micro-sensor motion capture system with 16 IMUs and an error-compensatory complementary filter algorithm for real-time motion estimation has been developed to acquire accurate 3D orientation and displacement in real-life activities. In the proposed filter algorithm, the gyroscope bias error, orientation error, and magnetic disturbance error are estimated and compensated, significantly reducing the orientation estimation error due to sensor noise and drift. Displacement estimation, especially for activities such as jumping, has been a long-standing challenge in micro-sensor motion capture. An adaptive gait-phase detection algorithm has been developed to enable accurate displacement estimation across different types of activities. The performance of this system is benchmarked against the VICON optical capture system. The experimental results have demonstrated the effectiveness of the system in tracking daily activities, with estimation errors of 0.16 ± 0.06 m for normal walking and 0.13 ± 0.11 m for jumping motions. Research supported by the National Natural Science Foundation of China (Nos. 61431017, 81272166).
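For readers unfamiliar with complementary filtering, a minimal single-axis sketch is shown below: gyroscope integration supplies the high-frequency component of the tilt angle, and the accelerometer's gravity direction supplies the low-frequency correction. The published filter additionally estimates gyroscope bias and magnetic disturbance, which are omitted here; names and gains are illustrative.

    import numpy as np

    def complementary_filter(gyro, accel, dt=0.01, alpha=0.98):
        # gyro: (T,) angular rate about one axis in rad/s
        # accel: (T, 2) acceleration components spanning the tilt plane
        theta = np.zeros(len(gyro))
        for t in range(1, len(gyro)):
            gyro_est = theta[t - 1] + gyro[t] * dt            # high-frequency path
            accel_est = np.arctan2(accel[t, 0], accel[t, 1])  # gravity-based angle
            theta[t] = alpha * gyro_est + (1.0 - alpha) * accel_est
        return theta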
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-28
... requested. For instance, a prize may be awarded to the solution of a challenge to develop an algorithm that enables reliable prediction of a certain event. A responder could submit the correct algorithm, but...
Development of a New De Novo Design Algorithm for Exploring Chemical Space.
Mishima, Kazuaki; Kaneko, Hiromasa; Funatsu, Kimito
2014-12-01
In the first stage of development of new drugs, various lead compounds with high activity are required. To design such compounds, we focus on chemical space defined by structural descriptors. New compounds located close to regions of chemical space where highly active compounds exist are expected to show a similar degree of activity. We have developed a new de novo design system to search a target area in chemical space. First, highly active compounds are manually selected as initial seeds. Then, the seeds are entered into our system, and structures slightly different from the seeds are generated and pooled. Next, new seeds are selected from the structure pool based on their distance from the target coordinates on the map. To test the algorithm, we used two datasets of ligand binding affinity and showed that the proposed generator could produce diverse virtual compounds with high activity in docking simulations. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
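The seed-selection step lends itself to a very short sketch: given descriptor vectors for the generated structure pool and target coordinates on the chemical-space map, the next generation's seeds are the nearest structures. Structure generation itself is chemistry-specific and not shown; all names are illustrative.

    import numpy as np

    def select_seeds(pool_descriptors, target, n_seeds=10):
        # pool_descriptors: (n, d) descriptor vectors of generated structures
        # target: (d,) coordinates of the target area on the chemical-space map
        dist = np.linalg.norm(pool_descriptors - target, axis=1)
        return np.argsort(dist)[:n_seeds]   # indices of the next generation's seeds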
Al-Fatlawi, Ali H; Fatlawi, Hayder K; Sai Ho Ling
2017-07-01
Monitoring of daily physical activities benefits the health care field in several ways, particularly with the development of wearable sensors. This paper adopts effective methods to calculate the optimal number of necessary sensors and to build a reliable, high-accuracy monitoring system. Three data mining algorithms, namely the Decision Tree, Random Forest, and PART algorithms, were applied for the sensor selection process. Furthermore, a deep belief network (DBN) was investigated to recognise 33 physical activities effectively. The results indicated that the proposed method is reliable, with an overall accuracy of 96.52%, and that the number of sensors can be reduced from nine to six.
NASA Astrophysics Data System (ADS)
Zarchi, Milad; Attaran, Behrooz
2017-11-01
This study develops a mathematical model to investigate the behaviour of adaptable shock absorber dynamics for a six-degree-of-freedom aircraft model in the taxiing phase. The purpose of this research is to design a proportional-integral-derivative technique for control of an active vibration absorber system using a hydraulic nonlinear actuator based on the bees algorithm. This optimization algorithm is inspired by the natural intelligent foraging behaviour of honey bees. A neighbourhood search strategy is used to find better solutions around the previous one. The parameters of the controller are adjusted by minimizing the aircraft's acceleration and impact force as a multi-objective function. The major advantages of this algorithm over other optimization algorithms are its simplicity, flexibility, and robustness. The results of the numerical simulation indicate that the active suspension increases ride comfort for passengers and the fatigue life of the structure by significantly decreasing the impact force, displacement, and acceleration.
Akkas, Oguz; Lee, Cheng Hsien; Hu, Yu Hen; Harris Adamson, Carisa; Rempel, David; Radwin, Robert G
2017-12-01
Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC), and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found that HAL remained unaffected when the DC error was less than 5%, and that a DC error of less than 10% affects HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
Suomi NPP VIIRS active fire product status
NASA Astrophysics Data System (ADS)
Ellicott, E. A.; Csiszar, I. A.; Schroeder, W.; Giglio, L.; Wind, B.; Justice, C. O.
2012-12-01
We provide an overview of the evaluation and development of the Active Fires product derived from the Visible Infrared Imaging Radiometer Suite (VIIRS) sensor on the Suomi National Polar-orbiting Partnership (SNPP) satellite during the first year of on-orbit data. Results from the initial evaluation of the standard SNPP Active Fires product, generated by the SNPP Interface Data Processing System (IDPS), supported the stabilization of the VIIRS Sensor Data Record (SDR) product. This activity focused in particular on the processing of the dual-gain 4 micron VIIRS M13 radiometric measurements into 750 m aggregated data, which are fundamental for active fire detection. Following the VIIRS SDR product's Beta maturity status in April 2012, correlative analysis between VIIRS and near-simultaneous fire detections from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the NASA Earth Observing System Aqua satellite confirmed the expected relative detection rates driven primarily by sensor differences. The VIIRS Active Fires Product Development and Validation Team also developed a science code that is based on the latest MODIS Collection 6 algorithm and provides a full spatially explicit fire mask to replace the sparse array output of fire locations from a MODIS Collection 4 equivalent algorithm in the current IDPS product. The Algorithm Development Library (ADL) was used to support the planning for the transition of the science code into IDPS operations in the future. Product evaluation and user outreach were facilitated by a product website that provided end users access to fire data in a user-friendly format over North America, as well as examples of VIIRS-MODIS comparisons. The VIIRS fire team also developed an experimental product based on 375 m VIIRS Imagery band measurements and provided high-quality imagery of major fire events in the US. By August 2012 the IDPS product achieved Beta maturity, with some known and documented shortfalls related to the processing of incorrect SDR input data and to apparent algorithm deficiencies in select observing and environmental conditions.
JPSS Cryosphere Algorithms: Integration and Testing in Algorithm Development Library (ADL)
NASA Astrophysics Data System (ADS)
Tsidulko, M.; Mahoney, R. L.; Meade, P.; Baldwin, D.; Tschudi, M. A.; Das, B.; Mikles, V. J.; Chen, W.; Tang, Y.; Sprietzer, K.; Zhao, Y.; Wolf, W.; Key, J.
2014-12-01
JPSS is a next-generation satellite system planned for launch in 2017. The satellites will carry a suite of sensors that are already on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The NOAA/NESDIS/STAR Algorithm Integration Team (AIT) works within the Algorithm Development Library (ADL) framework, which mimics the operational JPSS Interface Data Processing Segment (IDPS). The AIT contributes to the development, integration, and testing of scientific algorithms employed in the IDPS. This presentation discusses cryosphere-related activities performed in the ADL. The addition of a new ancillary data set, the NOAA Global Multisensor Automated Snow/Ice data (GMASI), together with the corresponding ADL code modifications, is described, and the preliminary impact of GMASI on the gridded Snow/Ice product is estimated. Several modifications to the Ice Age algorithm, which misclassifies ice type for certain areas and time periods, are tested in the ADL. Sensitivity runs for daytime, nighttime, and terminator-zone conditions are performed and presented. Comparisons between the original and modified versions of the Ice Age algorithm are also presented.
Oceanographic applications of laser technology
NASA Technical Reports Server (NTRS)
Hoge, F. E.
1988-01-01
Oceanographic activities with the Airborne Oceanographic Lidar (AOL) for the past several years have primarily focused on using active (laser-induced pigment fluorescence) and concurrent passive ocean color spectra to improve existing ocean color algorithms for estimating primary production in the world's oceans. The most significant results were the development of a technique for selecting optimal passive wavelengths for recovering phytoplankton photopigment concentration and the application of this technique, termed active-passive correlation spectroscopy (APCS), to various forms of passive ocean color algorithms. This activity also includes the use of airborne laser and passive ocean color measurements for the development of advanced satellite ocean color sensors. Promising on-wavelength subsurface scattering layer measurements were recently obtained. A partial summary of these results is presented.
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
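A schematic version of such a surrogate-assisted calibration loop is sketched below in Python, with a small neural-network regressor standing in for the learner and a placeholder acceptance threshold. The simulator interface, round sizes, and threshold are illustrative assumptions, not the UWBCS settings.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def calibrate(candidates, run_simulation, n_init=200, n_rounds=20, batch=50,
                  accept=0.95):
        # candidates: (n, d) array of parameter combinations
        # run_simulation(params) returns a scalar goodness-of-fit score
        rng = np.random.default_rng(0)
        evaluated, X, y, accepted = set(), [], [], []

        def run(i):
            score = run_simulation(candidates[i])
            evaluated.add(i); X.append(candidates[i]); y.append(score)
            if score > accept:            # illustrative acceptance cut
                accepted.append(i)

        for i in rng.choice(len(candidates), n_init, replace=False):
            run(int(i))                   # seed the surrogate with random runs
        for _ in range(n_rounds):
            model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                 random_state=0).fit(np.array(X), np.array(y))
            pred = model.predict(candidates)
            ranked = [int(i) for i in np.argsort(-pred) if int(i) not in evaluated]
            for i in ranked[:batch]:      # evaluate only the most promising combos
                run(i)
        return accepted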
Advanced biologically plausible algorithms for low-level image processing
NASA Astrophysics Data System (ADS)
Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan
1999-08-01
At present, in computer vision, the approach based on modeling biological vision mechanisms is being extensively developed. However, up to now, real-world image processing has had no effective solution within either the biologically inspired or the conventional framework. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for real-world images is the search for new low-level image processing algorithms, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of the visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and the formation of composite feature maps. The developed algorithms were integrated into a foveal active vision model, MARR. It is expected that the proposed algorithms may significantly improve model performance in real-world image processing during memorization, search, and recognition.
Cartes, David A; Ray, Laura R; Collier, Robert D
2002-04-01
An adaptive leaky normalized least-mean-square (NLMS) algorithm has been developed to optimize stability and performance of active noise cancellation systems. The research addresses LMS filter performance issues related to insufficient excitation, nonstationary noise fields, and time-varying signal-to-noise ratio. The adaptive leaky NLMS algorithm is based on a Lyapunov tuning approach in which three candidate algorithms, each of which is a function of the instantaneous measured reference input, measurement noise variance, and filter length, are shown to provide varying degrees of tradeoff between stability and noise reduction performance. Each algorithm is evaluated experimentally for reduction of low frequency noise in communication headsets, and stability and noise reduction performance are compared with that of traditional NLMS and fixed-leakage NLMS algorithms. Acoustic measurements are made in a specially designed acoustic test cell which is based on the original work of Ryan et al. ["Enclosure for low frequency assessment of active noise reducing circumaural headsets and hearing protection," Can. Acoust. 21, 19-20 (1993)] and which provides a highly controlled and uniform acoustic environment. The stability and performance of the active noise reduction system, including a prototype communication headset, are investigated for a variety of noise sources ranging from stationary tonal noise to highly nonstationary measured F-16 aircraft noise over a 20 dB dynamic range. Results demonstrate significant improvements in stability of Lyapunov-tuned LMS algorithms over traditional leaky or nonleaky normalized algorithms, while providing noise reduction performance equivalent to that of the NLMS algorithm for idealized noise fields.
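For reference, a compact sketch of a leaky NLMS update for reference-based noise cancellation is given below. Here the leakage factor gamma is fixed; in the work above the leakage is adapted at each step by a Lyapunov tuning rule from the measured reference input, noise variance, and filter length. Parameter values are illustrative.

    import numpy as np

    def leaky_nlms(x, d, L=64, mu=0.5, gamma=1e-4, eps=1e-8):
        # x: reference input; d: desired (error-microphone) signal
        w = np.zeros(L)                          # adaptive filter weights
        e = np.zeros(len(x))
        for n in range(L, len(x)):
            u = x[n - L:n][::-1]                 # latest L reference samples
            e[n] = d[n] - w @ u                  # cancellation error
            # leaky, power-normalized weight update
            w = (1.0 - mu * gamma) * w + mu * e[n] * u / (u @ u + eps)
        return e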
Brain-Inspired Constructive Learning Algorithms with Evolutionally Additive Nonlinear Neurons
NASA Astrophysics Data System (ADS)
Fang, Le-Heng; Lin, Wei; Luo, Qiang
In this article, inspired in part by physiological evidence of the brain's growth and development, we develop a new type of constructive learning algorithm with evolutionally additive nonlinear neurons. The new algorithms have a remarkable ability to perform effective regression and accurate classification. In particular, they are able to sustain a reduction of the loss function even when the dynamics of the trained network become bogged down in the vicinity of local minima. The algorithm augments the neural network by adding only a few connections as well as neurons whose activation functions are nonlinear, nonmonotonic, and self-adapted to the dynamics of the loss function. Indeed, we analytically demonstrate the reduction dynamics of the algorithm for different problems, and we further modify the algorithms to obtain an improved generalization capability for the augmented neural networks. Finally, by comparing with classical algorithms and architectures for neural network construction, we show that our constructive learning algorithms, as well as their modified versions, deliver better performance, such as faster training and smaller network size, on several representative benchmark datasets, including the MNIST dataset of handwritten digits.
Scheduling language and algorithm development study. Appendix: Study approach and activity summary
NASA Technical Reports Server (NTRS)
1974-01-01
The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.
A relational learning approach to Structure-Activity Relationships in drug design toxicity studies.
Camacho, Rui; Pereira, Max; Costa, Vítor Santos; Fonseca, Nuno A; Adriano, Carlos; Simões, Carlos J V; Brito, Rui M M
2011-09-16
It has been recognized that the development of new therapeutic drugs is a complex and expensive process. A large number of factors affect the in vivo activity of putative candidate molecules, and the propensity for causing adverse and toxic effects is recognized as one of the major hurdles behind the current "target-rich, lead-poor" scenario. Structure-Activity Relationship (SAR) studies, using relational Machine Learning (ML) algorithms, have already been shown to be very useful in the complex process of rational drug design. Despite the ML successes, human expertise is still of the utmost importance in the drug development process. An iterative process with tight integration between the models developed by ML algorithms and the know-how of medicinal chemistry experts would be a very useful symbiotic approach. In this paper we describe a software tool that achieves that goal: iLogCHEM. The tool allows the use of Relational Learners in the task of identifying molecules or molecular fragments with the potential to produce toxic effects, and thus helps streamline drug design in silico. It also allows the expert to guide the search for useful molecules without needing to know the details of the algorithms used. The models produced by the algorithms may be visualized using a graphical interface in common use among researchers in structural biology and medicinal chemistry. The graphical interface enables the expert to provide feedback to the learning system. The tool also has facilities to handle the similarity bias typical of large chemical databases; for that purpose the user can filter out similar compounds when assembling a data set. Additionally, we propose ways of providing background knowledge for Relational Learners using the results of Graph Mining algorithms. Copyright 2011 The Author(s). Published by Journal of Integrative Bioinformatics.
A Feature Selection Algorithm to Compute Gene Centric Methylation from Probe Level Methylation Data.
Baur, Brittany; Bozdag, Serdar
2016-01-01
DNA methylation is an important epigenetic event that affects gene expression during development and in various diseases such as cancer. Understanding its mechanism of action is important for downstream analysis. In the Illumina Infinium HumanMethylation450K array, there are tens of probes associated with each gene. Given the methylation intensities of all these probes, it is necessary to compute which of them are most representative of the gene-centric methylation level. In this study, we developed a feature selection algorithm based on sequential forward selection that utilizes different classification methods to compute gene-centric DNA methylation from probe-level DNA methylation data. We compared our algorithm to other feature selection algorithms, such as support vector machines with recursive feature elimination, genetic algorithms, and ReliefF. We evaluated all methods based on the predictive power of the selected probes on mRNA expression levels and found that K-nearest-neighbors classification with sequential forward selection performed better than the other algorithms on all metrics. We also observed that the transcriptional activities of certain genes were more sensitive to DNA methylation changes than those of other genes; our algorithm was able to predict the expression of those genes with high accuracy using only DNA methylation data. Our results also showed that those DNA methylation-sensitive genes were enriched in Gene Ontology terms related to the regulation of various biological processes.
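The sequential forward selection loop described above can be sketched briefly: probes are added greedily while the cross-validated accuracy of a K-nearest-neighbors model predicting (discretized) expression improves. Fold counts, neighbor counts, and the stopping rule below are illustrative assumptions.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    def forward_select(X, y, max_probes=5):
        # X: (samples, probes) methylation intensities; y: expression labels
        selected, remaining, best = [], list(range(X.shape[1])), -np.inf
        while remaining and len(selected) < max_probes:
            scores = {p: cross_val_score(KNeighborsClassifier(n_neighbors=5),
                                         X[:, selected + [p]], y, cv=5).mean()
                      for p in remaining}
            p_best = max(scores, key=scores.get)
            if scores[p_best] <= best:
                break                    # no improvement: stop adding probes
            best = scores[p_best]
            selected.append(p_best)
            remaining.remove(p_best)
        return selected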
Automated system for analyzing the activity of individual neurons
NASA Technical Reports Server (NTRS)
Bankman, Isaac N.; Johnson, Kenneth O.; Menkes, Alex M.; Diamond, Steve D.; Oshaughnessy, David M.
1993-01-01
This paper presents a signal processing system that: (1) provides an efficient and reliable instrument for investigating the activity of neuronal assemblies in the brain; and (2) demonstrates the feasibility of generating the command signals of prostheses using the activity of relevant neurons in disabled subjects. The system operates online, in a fully automated manner, and can recognize the transient waveforms of several neurons in extracellular neurophysiological recordings. Optimal algorithms for detection, classification, and resolution of overlapping waveforms are developed and evaluated. Full automation is made possible by an algorithm that can set appropriate decision thresholds and an algorithm that can generate templates online. The system is implemented with a fast IBM PC compatible processor board that allows online operation.
1988-03-31
… radar operation and data-collection activities; a large data-analysis effort has been under way in support of automatic wind-shear detection algorithm development. [Table-of-contents excerpt: … Reduction and Algorithm Development: A. General-Purpose Software; B. Concurrent Computer Systems; C. Sun Workstations; D. Radar Data Analysis (1. Algorithm Verification; 2. Other Studies; 3. Translations; 4. Outside Distributions); E. Mesonet/LLWAS Data Analysis (1. 1985 Data; …)]
Active contour based segmentation of resected livers in CT images
NASA Astrophysics Data System (ADS)
Oelmann, Simon; Oyarzun Laura, Cristina; Drechsler, Klaus; Wesarg, Stefan
2015-03-01
The majority of state-of-the-art segmentation algorithms give proper results in healthy organs but not in pathological ones. However, many clinical applications require an accurate segmentation of pathological organs; the determination of target boundaries for radiotherapy and liver volumetry calculations are examples. Volumetry measurements are of special interest after tumor resection for follow-up of liver regrowth. The segmentation of resected livers presents additional challenges that are not addressed by state-of-the-art algorithms. This paper presents a snakes-based algorithm developed specifically for the segmentation of resected livers. The algorithm is enhanced with a novel dynamic smoothing technique that allows the active contour to propagate at different speeds depending on the intensities visible in its neighborhood. The algorithm is evaluated on 6 clinical CT images as well as on 18 artificial datasets generated from additional clinical CT images.
Current Status of Japan's Activity for GPM/DPR and Global Rainfall Map algorithm development
NASA Astrophysics Data System (ADS)
Kachi, M.; Kubota, T.; Yoshida, N.; Kida, S.; Oki, R.; Iguchi, T.; Nakamura, K.
2012-04-01
The Global Precipitation Measurement (GPM) mission is composed of two categories of satellites: 1) a Tropical Rainfall Measuring Mission (TRMM)-like non-sun-synchronous orbit satellite (the GPM Core Observatory); and 2) a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), which is being developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and a microwave radiometer provided by the National Aeronautics and Space Administration (NASA). The GPM Core Observatory will be launched in February 2014, and development of the algorithms is underway. The DPR Level 1 algorithm, which provides the DPR L1B product including received power, is being developed by JAXA. The first version was submitted in March 2011, and development of the second version of the DPR L1B algorithm (Version 2) will be completed in March 2012. The Version 2 algorithm includes all basic functions, a preliminary database, the HDF5 interface, and minimum error handling; pre-launch code will be developed by the end of October 2012. The DPR Level 2 algorithm has been developed by the DPR Algorithm Team led by Japan, which works under the NASA-JAXA Joint Algorithm Team. The first version of the GPM/DPR Level-2 Algorithm Theoretical Basis Document was completed in November 2010. The second version, the "baseline code," was completed in January 2012; it includes the main module and eight basic sub-modules (Preparation, Vertical Profile, Classification, SRT, DSD, Solver, Input, and Output modules). The Level-2 algorithms will provide KuPR-only products, KaPR-only products, and dual-frequency precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright-band height. It is important to develop an algorithm applicable to both TRMM/PR and KuPR in order to produce a long-term continuous data set; pre-launch code will be developed by autumn 2012. The Global Rainfall Map algorithm has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping for Precipitation (GSMaP) project conducted between 2002 and 2007 and on the near-real-time version operating at JAXA since 2007. The baseline code uses the current operational GSMaP code (V5.222), and its development was completed in January 2012. Pre-launch code will be developed by autumn 2012, including an update of the databases for rain-type classification and rain/no-rain classification, and the introduction of rain-gauge correction.
GOES-R Geostationary Lightning Mapper Performance Specifications and Algorithms
NASA Technical Reports Server (NTRS)
Mach, Douglas M.; Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Petersen, William A.; Boldi, Robert A.; Carey, Lawrence D.; Bateman, Monte G.; Buchler, Dennis E.; McCaul, E. William, Jr.
2008-01-01
The Geostationary Lightning Mapper (GLM) is a single-channel, near-IR imager/optical transient event detector, used to detect, locate, and measure total lightning activity over the full disk. The next-generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series will carry a GLM that will provide continuous day and night observations of lightning. The mission objectives for the GLM are to: (1) provide continuous, full-disk lightning measurements for storm warning and nowcasting; (2) provide early warning of tornadic activity; and (3) accumulate a long-term database to track decadal changes in lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13-year data record of global lightning activity. The GOES-R Risk Reduction Team and the Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. The science data will consist of lightning "events", "groups", and "flashes". The algorithm is being designed to be an efficient user of computational resources; this may include parallelization of the code and sub-dividing the GLM field of view into regions to be processed in parallel. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and from regional test beds (e.g., Lightning Mapping Arrays in North Alabama, Oklahoma, Central Florida, and the Washington, DC metropolitan area) are being used to develop the prelaunch algorithms and applications and to improve our knowledge of thunderstorm initiation and evolution.
Advanced Fiber Optic-Based Sensing Technology for Unmanned Aircraft Systems
NASA Technical Reports Server (NTRS)
Richards, Lance; Parker, Allen R.; Piazza, Anthony; Ko, William L.; Chan, Patrick; Bakalyar, John
2011-01-01
This presentation provides an overview of fiber optic sensing technology development activities performed at NASA Dryden in support of Unmanned Aircraft Systems. Examples of current and previous work are presented in the following categories: algorithm development, system development, instrumentation installation, ground R&D, and flight testing. Examples of current research and development activities are provided.
Predictive Model of Linear Antimicrobial Peptides Active against Gram-Negative Bacteria.
Vishnepolsky, Boris; Gabrielian, Andrei; Rosenthal, Alex; Hurt, Darrell E; Tartakovsky, Michael; Managadze, Grigol; Grigolava, Maya; Makhatadze, George I; Pirtskhalava, Malak
2018-05-29
Antimicrobial peptides (AMPs) have been identified as a potential new class of anti-infectives for drug development. Many computational methods try to predict AMPs; most can only predict whether a peptide will show any antimicrobial potency, and to the best of our knowledge there are no tools that can predict antimicrobial potency against particular strains. Here we present a predictive model of linear AMPs active against particular Gram-negative strains, relying on a semi-supervised machine-learning approach with a density-based clustering algorithm. The algorithm can distinguish peptides active against particular strains from others that may also be active, but not against the considered strain, a task the available AMP prediction tools cannot carry out. The prediction tool based on the algorithm suggested herein is available at https://dbaasp.org.
Dobkin, Bruce H; Xu, Xiaoyu; Batalin, Maxim; Thomas, Seth; Kaiser, William
2011-08-01
Outcome measures of mobility for large stroke trials are limited to timed walks over short distances in a laboratory, step counters, and ordinal scales of disability and quality of life. Continuous monitoring and outcome measurement of the type and quantity of activity in the community would provide direct data about daily performance, including compliance with exercise and skills practice during routine care and clinical trials. Twelve adults with impaired ambulation from hemiparetic stroke and 6 healthy controls wore triaxial accelerometers on their ankles. Walking speed for repeated outdoor walks was determined by machine-learning algorithms and compared to a stopwatch calculation of speed for distances not known to the algorithm. The reliability of recognizing walking, exercise, and cycling by the algorithms was compared to activity logs. A high correlation was found between stopwatch-measured outdoor walking speed and algorithm-calculated speed (Pearson coefficient, 0.98; P=0.001) and for repeated measures of algorithm-derived walking speed (P=0.01). Bouts of walking >5 steps, variations in walking speed, cycling, stair climbing, and leg exercises were correctly identified during a day in the community. Compared to healthy subjects, those with stroke were, as expected, more sedentary and slower, and their gait revealed high paretic-to-unaffected leg swing ratios. Test-retest reliability and concurrent and construct validity are high for activity pattern-recognition Bayesian algorithms developed from inertial sensors. These ratio-scale data can provide real-world monitoring and outcome measurements of lower-extremity activities and walking speed for stroke and rehabilitation studies.
High content analysis of phagocytic activity and cell morphology with PuntoMorph.
Al-Ali, Hassan; Gao, Han; Dalby-Hansen, Camilla; Peters, Vanessa Ann; Shi, Yan; Brambilla, Roberta
2017-11-01
Phagocytosis is essential for maintenance of normal homeostasis and healthy tissue. As such, it is a therapeutic target for a wide range of clinical applications. The development of phenotypic screens targeting phagocytosis has lagged behind, however, due to the difficulties associated with image-based quantification of phagocytic activity. We present a robust algorithm and cell-based assay system for high content analysis of phagocytic activity. The method utilizes fluorescently labeled beads as a phagocytic substrate with defined physical properties. The algorithm employs statistical modeling to determine the mean fluorescence of individual beads within each image, and uses the information to conduct an accurate count of phagocytosed beads. In addition, the algorithm conducts detailed and sophisticated analysis of cellular morphology, making it a standalone tool for high content screening. We tested our assay system using microglial cultures. Our results recapitulated previous findings on the effects of microglial stimulation on cell morphology and phagocytic activity. Moreover, our cell-level analysis revealed that the two phenotypes associated with microglial activation, specifically cell body hypertrophy and increased phagocytic activity, are not highly correlated. This novel finding suggests the two phenotypes may be under the control of distinct signaling pathways. We demonstrate that our assay system outperforms preexisting methods for quantifying phagocytic activity in multiple dimensions including speed, accuracy, and resolution. We provide a framework to facilitate the development of high content assays suitable for drug screening. For convenience, we implemented our algorithm in a standalone software package, PuntoMorph. Copyright © 2017 Elsevier B.V. All rights reserved.
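The counting idea admits a short sketch: estimate the fluorescence of a single bead robustly, then divide each cell's bead-channel fluorescent mass by that estimate. Segmentation and the statistical modeling used by PuntoMorph are assumed done elsewhere; function names are illustrative.

    import numpy as np

    def estimate_single_bead_intensity(isolated_spot_sums):
        # robust central estimate from spots judged to contain exactly one bead
        return float(np.median(isolated_spot_sums))

    def count_beads(cell_fluorescence_sums, single_bead_intensity):
        # summed bead-channel fluorescence per segmented cell -> bead counts
        return np.rint(np.asarray(cell_fluorescence_sums) /
                       single_bead_intensity).astype(int)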
Wullems, Jorgen A; Verschueren, Sabine M P; Degens, Hans; Morse, Christopher I; Onambélé, Gladys L
2017-01-01
Accurate monitoring of sedentary behaviour and physical activity is key to investigating their exact roles in healthy ageing. To date, accelerometers using cut-off point models are most often preferred for this; however, machine learning appears to be a highly promising future alternative. Hence, the current study compared cut-off point and machine learning algorithms for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. In a heterogeneous sample of forty participants (aged ≥60 years, 50% female), energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry while triaxial thigh-mounted accelerometers were worn. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness and to examine and benchmark both overall and participant-specific balanced accuracies. This revealed that all four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models: it was robust to individuals' physiological and non-physiological characteristics and showed acceptable performance over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry.
DNA Microarray Data Analysis: A Novel Biclustering Algorithm Approach
NASA Astrophysics Data System (ADS)
Tchagang, Alain B.; Tewfik, Ahmed H.
2006-12-01
Biclustering algorithms refer to a distinct class of clustering algorithms that perform simultaneous row-column clustering. Biclustering problems arise in DNA microarray data analysis, collaborative filtering, market research, information retrieval, text mining, electoral trends, exchange analysis, and so forth. When dealing with DNA microarray experimental data for example, the goal of biclustering algorithms is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this study, we develop novel biclustering algorithms using basic linear algebra and arithmetic tools. The proposed biclustering algorithms can be used to search for all biclusters with constant values, biclusters with constant values on rows, biclusters with constant values on columns, and biclusters with coherent values from a set of data in a timely manner and without solving any optimization problem. We also show how one of the proposed biclustering algorithms can be adapted to identify biclusters with coherent evolution. The algorithms developed in this study discover all valid biclusters of each type, while almost all previous biclustering approaches will miss some.
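For the simplest case, biclusters with constant values on rows, membership checking is elementary once the data are discretized: a row belongs to the bicluster on a given column subset if its values are constant across those columns. A hypothetical sketch follows; the search over column subsets, which the paper organizes with linear-algebraic tools, is not shown.

    import numpy as np

    def constant_row_bicluster(M, cols, min_rows=2):
        # M: discretized data matrix; cols: a candidate column subset
        sub = M[:, cols]
        rows = np.where((sub == sub[:, [0]]).all(axis=1))[0]  # rows constant on cols
        return (rows, np.asarray(cols)) if rows.size >= min_rows else None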
Recent progress in multi-electrode spike sorting methods
Lefebvre, Baptiste; Yger, Pierre; Marre, Olivier
2017-01-01
In recent years, arrays of extracellular electrodes have been developed and manufactured to record simultaneously from hundreds of electrodes packed with a high density. These recordings should allow neuroscientists to reconstruct the individual activity of the neurons spiking in the vicinity of these electrodes, with the help of signal processing algorithms. Algorithms need to solve a source separation problem, also known as spike sorting. However, these new devices challenge the classical way to do spike sorting. Here we review different methods that have been developed to sort spikes from these large-scale recordings. We describe the common properties of these algorithms, as well as their main differences. Finally, we outline the issues that remain to be solved by future spike sorting algorithms. PMID:28263793
A semi-learning algorithm for noise rejection: an fNIRS study on ADHD children
NASA Astrophysics Data System (ADS)
Sutoko, Stephanie; Funane, Tsukasa; Katura, Takusige; Sato, Hiroki; Kiguchi, Masashi; Maki, Atsushi; Monden, Yukifumi; Nagashima, Masako; Yamagata, Takanori; Dan, Ippeita
2017-02-01
In pediatric studies, the quality of functional near-infrared spectroscopy (fNIRS) signals is often reduced by motion artifacts. These artifacts can mislead brain functionality analysis and cause false discoveries. While noise correction methods and their performance have been investigated, these methods require several parameter assumptions that can result in noise overfitting. In contrast, the rejection of noisy signals is a preferable method because it maintains the original signal waveform. Here, we describe a semi-learning algorithm to detect and eliminate noisy signals. The algorithm dynamically adjusts noise detection according to predetermined noise criteria: spikes, unusual activation values (signal amplitudes averaged within the brain activation period), and high activation variance (among trials). The criteria are organized sequentially in the algorithm, and signals are assessed in order against each criterion. By initially setting an acceptable rejection rate, criteria that would cause excessive data rejection are neglected, whereas the others, with tolerable rejection rates, effectively eliminate noise. fNIRS data measured during an attention response paradigm (oddball task) in children with attention deficit/hyperactivity disorder (ADHD) were used to evaluate and optimize the algorithm's performance. The algorithm successfully substituted for the visual noise identification performed in previous studies and consistently found significantly lower activation of the right prefrontal and parietal cortices in ADHD patients than in typically developing children. We therefore conclude that the semi-learning algorithm provides more objective and standardized judgment for noise rejection and presents a promising alternative to visual noise rejection.
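A schematic version of the sequential rejection logic might look like the following: each criterion is applied in order, but any criterion that would push the cumulative rejection rate past a preset ceiling is skipped. Criterion functions and thresholds are placeholders, not the study's values.

    import numpy as np

    def reject_noisy(signals, criteria, max_reject_rate=0.3):
        # signals: (n_trials, n_samples); criteria: ordered list of f(signal) -> bool
        keep = np.ones(len(signals), dtype=bool)
        for is_noisy in criteria:
            flags = np.array([is_noisy(s) for s in signals]) & keep
            if (np.sum(~keep) + flags.sum()) / len(signals) > max_reject_rate:
                continue            # this criterion rejects too much: neglect it
            keep &= ~flags
        return keep

    # example spike criterion: a large sample-to-sample jump relative to the spread
    spike = lambda s: np.max(np.abs(np.diff(s))) > 5.0 * np.std(s)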
Orion GN and C Model Based Development: Experience and Lessons Learned
NASA Technical Reports Server (NTRS)
Jackson, Mark C.; Henry, Joel R.
2012-01-01
The Orion Guidance Navigation and Control (GN&C) team is charged with developing GN&C algorithms for the Exploration Flight Test One (EFT-1) vehicle. The GN&C team is a joint team consisting primarily of Prime Contractor (Lockheed Martin) and NASA personnel and contractors. Early in the GN&C development cycle the team selected MATLAB/Simulink as the tool for developing GN&C algorithms and Mathworks autocode tools as the means for converting GN&C algorithms to flight software (FSW). This paper provides an assessment of the successes and problems encountered by the GN&C team from the perspective of Orion GN&C developers, integrators, FSW engineers and management. The Orion GN&C approach to graphical development, including simulation tools, standards development and autocode approaches are scored for the main activities that the team has completed through the development phases of the program.
Particle swarm optimization for programming deep brain stimulation arrays
NASA Astrophysics Data System (ADS)
Peña, Edgar; Zhang, Simeng; Deyo, Steve; Xiao, YiZi; Johnson, Matthew D.
2017-02-01
Objective. Deep brain stimulation (DBS) therapy relies on both precise neurosurgical targeting and systematic optimization of stimulation settings to achieve beneficial clinical outcomes. One recent advance to improve targeting is the development of DBS arrays (DBSAs) with electrodes segmented both along and around the DBS lead. However, increasing the number of independent electrodes creates the logistical challenge of optimizing stimulation parameters efficiently. Approach. Solving such complex problems with multiple solutions and objectives is well known to occur in biology, in which complex collective behaviors emerge out of swarms of individual organisms engaged in learning through social interactions. Here, we developed a particle swarm optimization (PSO) algorithm to program DBSAs using a swarm of individual particles representing electrode configurations and stimulation amplitudes. Using a finite element model of motor thalamic DBS, we demonstrate how the PSO algorithm can efficiently optimize a multi-objective function that maximizes predictions of axonal activation in regions of interest (ROI, cerebellar-receiving area of motor thalamus), minimizes predictions of axonal activation in regions of avoidance (ROA, somatosensory thalamus), and minimizes power consumption. Main results. The algorithm solved the multi-objective problem by producing a Pareto front. ROI and ROA activation predictions were consistent across swarms (<1% median discrepancy in axon activation). The algorithm was able to accommodate for (1) lead displacement (1 mm) with relatively small ROI (⩽9.2%) and ROA (⩽1%) activation changes, irrespective of shift direction; (2) reduction in maximum per-electrode current (by 50% and 80%) with ROI activation decreasing by 5.6% and 16%, respectively; and (3) disabling electrodes (n = 3 and 12) with ROI activation reduction by 1.8% and 14%, respectively. Additionally, comparison between PSO predictions and multi-compartment axon model simulations showed discrepancies of <1% between approaches. Significance. The PSO algorithm provides a computationally efficient way to program DBS systems especially those with higher electrode counts.
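For illustration, a basic PSO loop is sketched below with the three objectives scalarized into a single weighted sum; the study instead treats them as a true multi-objective problem and reports a Pareto front. The evaluate function stands in for the finite element and axon activation predictions, and all weights are placeholders.

    import numpy as np

    def pso(evaluate, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
        # evaluate: maps a parameter vector in [0, 1]^dim to a scalar cost
        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, 1.0, (n_particles, dim))    # positions (configurations)
        v = np.zeros((n_particles, dim))                 # velocities
        pbest = x.copy()
        pbest_f = np.array([evaluate(p) for p in x])
        g = pbest[np.argmin(pbest_f)]                    # swarm-best position
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, 0.0, 1.0)
            f = np.array([evaluate(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[np.argmin(pbest_f)]
        return g

    # scalarized stand-in for the study's objectives (weights are placeholders)
    def cost(roi_activation, roa_activation, power, a=1.0, b=1.0):
        return -(roi_activation - a * roa_activation - b * power)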
NASA Astrophysics Data System (ADS)
Lee, Sangkyu
Illicit trafficking and smuggling of radioactive materials and special nuclear materials (SNM) are considered among the most important recent global nuclear threats. Monitoring the transport and safety of radioisotopes and SNM is challenging because of their weak signals and the ease with which they can be shielded. Great efforts worldwide are focused on developing and improving detection technologies and algorithms for accurate and reliable detection of radioisotopes of interest, thus better securing borders against nuclear threats. In general, radiation portal monitors enable detection of gamma- and neutron-emitting radioisotopes. Passive and active interrogation techniques, both existing and under development, aim to increase accuracy and reliability while shortening interrogation time and reducing equipment cost. Equally important efforts are aimed at advancing algorithms to process imaging data efficiently, providing reliable "readings" of the interiors of examined volumes of various sizes, ranging from cargo containers to suitcases. The main objective of this thesis is to develop two synergistic algorithms with the goal of providing highly reliable, low-noise identification of radioisotope signatures. These algorithms combine analysis from a passive radioactive detection technique with active interrogation imaging techniques such as gamma radiography or muon tomography: one algorithm combines gamma spectroscopy with cosmic muon tomography, and the other combines gamma spectroscopy with gamma radiography. The purpose of fusing two detection methodologies per algorithm is to find both heavy-Z radioisotopes and shielding materials, since radionuclides can be identified with gamma spectroscopy, while shielding materials can be detected using muon tomography or gamma radiography. The combined algorithms are created and analyzed using numerically generated images of various cargo sizes and materials. In summary, the three detection methodologies are fused into two algorithms with mathematical functions providing reliable identification of radioisotopes in gamma spectroscopy, noise reduction and precision enhancement in muon tomography, and atomic number and density estimation in gamma radiography. It is expected that these new algorithms may be implemented in portal scanning systems to enhance the accuracy and reliability of detecting nuclear materials inside cargo containers.
Computing border bases using mutant strategies
NASA Astrophysics Data System (ADS)
Ullah, E.; Abbas Khan, S.
2014-01-01
Border bases, a generalization of Gröbner bases, have been actively studied during recent years due to their applicability to industrial problems. In cryptography and coding theory, a useful application of border bases is to solve zero-dimensional systems of polynomial equations over finite fields, which motivates the development of optimizations of the algorithms that compute border bases. In 2006, Kehrein and Kreuzer formulated the Border Basis Algorithm (BBA), an algorithm that allows the computation of border bases relating to a degree-compatible term ordering. In 2007, J. Ding et al. introduced mutant strategies based on finding special lower-degree polynomials in the ideal. The mutant strategies aim to distinguish special lower-degree polynomials (mutants) from the other polynomials and give them priority in the process of generating new polynomials in the ideal. In this paper we develop hybrid algorithms that use the ideas of J. Ding et al. involving the concept of mutants to optimize the Border Basis Algorithm for solving systems of polynomial equations over finite fields. In particular, we recall a version of the Border Basis Algorithm known as the Improved Border Basis Algorithm and propose two hybrid algorithms, called MBBA and IMBBA. The new mutant variants provide both space and time efficiency, which we discuss using standard cryptographic examples.
Smith, Warren D; Bagley, Anita
2010-01-01
Children with cerebral palsy may have difficulty walking and may fall frequently, resulting in a decrease in their participation in school and community activities. It is desirable to assess the effectiveness of mobility therapies for these children on their functioning during everyday living. Over 50 hours of tri-axial accelerometer and digital video recordings from 35 children with cerebral palsy and 51 typically developing children were analyzed to develop algorithms for automatic real-time processing of the accelerometer signals to monitor a child's level of activity and to detect falls. The present fall-detection algorithm has 100% specificity and a sensitivity of 100% for falls involving trunk rotation. Sensitivities for drops to the knees and to the bottom are 72% and 78%, respectively. The activity and fall-detection algorithms were implemented in a miniature, battery-powered, microcontroller-based activity/fall monitor that the child wears in a small fanny pack during everyday living. The monitor continuously logs 1-minute activity levels and the occurrence and characteristics of each fall over two-week recording sessions. Pre-therapy and post-therapy recordings from these monitors will be used to assess the efficacies of alternative treatments for gait abnormalities.
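The paper's detection rules are not given in the abstract, but the general shape of an accelerometer-based fall detector — a large impact followed by near-stillness — is easy to sketch. Signals are assumed to be in units of g, and the thresholds are purely illustrative, not the study's values:

```python
import numpy as np

def detect_fall(ax, ay, az, fs=100, impact_g=2.5, still_g=0.15, still_s=1.0):
    """Flag a fall when a large acceleration impact is followed by a
    period of near-stillness (magnitude close to 1 g, i.e. lying still)."""
    mag = np.sqrt(ax**2 + ay**2 + az**2)      # acceleration magnitude (g)
    n_still = int(still_s * fs)
    for i in np.where(mag > impact_g)[0]:     # candidate impact samples
        window = mag[i + n_still : i + 2 * n_still]
        if window.size and np.all(np.abs(window - 1.0) < still_g):
            return True                       # impact, then lying still
    return False
```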
Li, Ye; Whelan, Michael; Hobbs, Leigh; Fan, Wen Qi; Fung, Cecilia; Wong, Kenny; Marchand-Austin, Alex; Badiani, Tina; Johnson, Ian
2016-06-27
In 2014/2015, Public Health Ontario developed disease-specific, cumulative sum (CUSUM)-based statistical algorithms for detecting aberrant increases in reportable infectious disease incidence in Ontario. The objective of this study was to determine whether the prospective application of these CUSUM algorithms, based on historical patterns, has improved specificity and sensitivity compared to the currently used Early Aberration Reporting System (EARS) algorithm, developed by the US Centers for Disease Control and Prevention. A total of seven algorithms were developed for the following diseases: cyclosporiasis, giardiasis, influenza (one each for type A and type B), mumps, pertussis, and invasive pneumococcal disease. Historical data were used as a baseline to assess known outbreaks. Regression models were used to model seasonality, and CUSUM was applied to the difference between observed and expected counts. An interactive web application was developed allowing program staff to interact directly with the data and tune the parameters of the CUSUM algorithms using their expertise on the epidemiology of each disease. Using these parameters, a CUSUM detection system was applied prospectively and the results were compared to the outputs generated by EARS. The outcomes were the detection of outbreaks, or of the start of a known seasonal increase, and the prediction of the peak in activity. The CUSUM algorithms detected provincial outbreaks earlier than the EARS algorithm, identified the start of the influenza season in advance of traditional methods, and had fewer false positive alerts. Additionally, having staff involved in the creation of the algorithms improved their understanding of the algorithms and their use in practice. Using interactive web-based technology to tune CUSUM improved the sensitivity and specificity of the detection algorithms.
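The core of this approach — a one-sided CUSUM run on residuals between observed and regression-expected counts — can be sketched in a few lines. The reference value k and decision threshold h below are the per-disease tuning parameters the paper exposes through its web application; the residual standardization is an illustrative assumption:

```python
import numpy as np

def cusum_alerts(observed, expected, k=0.5, h=5.0):
    """One-sided CUSUM on standardized residuals between observed and
    expected weekly case counts; returns indices where the statistic
    crosses the decision threshold h."""
    resid = (observed - expected) / np.sqrt(np.maximum(expected, 1.0))
    s, alerts = 0.0, []
    for t, r in enumerate(resid):
        s = max(0.0, s + r - k)    # accumulate only upward deviations
        if s > h:
            alerts.append(t)       # aberrant increase flagged at week t
            s = 0.0                # reset after signalling
    return alerts
```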
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey; Mohammed, Priscilla; De Amici, Giovanni; Kim, Edward; Peng, Jinzheng; Ruf, Christopher; Hanna, Maher; Yueh, Simon; Entekhabi, Dara
2016-01-01
The purpose of the Soil Moisture Active Passive (SMAP) radiometer calibration algorithm is to convert Level 0 (L0) radiometer digital counts data into calibrated estimates of brightness temperatures referenced to the Earth's surface within the main beam. The algorithm theory in most respects is similar to what has been developed and implemented for decades for other satellite radiometers; however, SMAP includes two key features heretofore absent from most satellite-borne radiometers: radio frequency interference (RFI) detection and mitigation, and measurement of the third and fourth Stokes parameters using digital correlation. The purpose of this document is to describe the SMAP radiometer and forward model; explain the SMAP calibration algorithm, including approximations, errors, and biases; provide all necessary equations for implementing the calibration algorithm; and detail the RFI detection and mitigation process. Section 2 provides a summary of algorithm objectives and driving requirements. Section 3 is a description of the instrument, and Section 4 covers the forward models, upon which the algorithm is based. Section 5 gives the retrieval algorithm and theory. Section 6 describes the orbit simulator, which implements the forward model and is the key for deriving antenna pattern correction coefficients and testing the overall algorithm.
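At its simplest, converting counts to brightness temperature rests on a linear gain transfer between two known references. The sketch below shows only that generic two-point principle, not the SMAP algorithm itself, which layers RFI filtering, nonlinearity, loss, and antenna-pattern corrections on top; the reference temperatures are illustrative:

```python
def counts_to_tb(counts, c_cold, c_hot, t_cold=2.73, t_hot=300.0):
    """Minimal two-point (cold reference / warm reference) linear
    calibration from raw radiometer counts to brightness temperature
    in kelvin. c_cold and c_hot are the counts observed while viewing
    the two references."""
    gain = (t_hot - t_cold) / (c_hot - c_cold)   # kelvin per count
    return t_cold + gain * (counts - c_cold)

print(counts_to_tb(counts=5200, c_cold=4000, c_hot=8000))  # ~92 K
```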
Hypersonic Vehicle Propulsion System Control Model Development Roadmap and Activities
NASA Technical Reports Server (NTRS)
Stueber, Thomas J.; Le, Dzu K.; Vrnak, Daniel R.
2009-01-01
The NASA Fundamental Aeronautics Program Hypersonic project is directed towards fundamental research for two classes of hypersonic vehicles: highly reliable reusable launch systems (HRRLS) and high-mass Mars entry systems (HMMES). The objective of the hypersonic guidance, navigation, and control (GN&C) discipline team is to develop advanced guidance and control algorithms to enable efficient and effective operation of these challenging vehicles. The ongoing work at the NASA Glenn Research Center supports the hypersonic GN&C effort in developing tools to aid the design of advanced control algorithms that specifically address the propulsion system of the HRRLS-class vehicles. These tools are being developed in conjunction with complementary research and development activities in hypersonic propulsion at Glenn and elsewhere. This report is focused on obtaining control-relevant dynamic models of an HRRLS-type hypersonic vehicle propulsion system.
MACVIA clinical decision algorithm in adolescents and adults with allergic rhinitis.
Bousquet, Jean; Schünemann, Holger J; Hellings, Peter W; Arnavielhe, Sylvie; Bachert, Claus; Bedbrook, Anna; Bergmann, Karl-Christian; Bosnic-Anticevich, Sinthia; Brozek, Jan; Calderon, Moises; Canonica, G Walter; Casale, Thomas B; Chavannes, Niels H; Cox, Linda; Chrystyn, Henry; Cruz, Alvaro A; Dahl, Ronald; De Carlo, Giuseppe; Demoly, Pascal; Devillier, Phillipe; Dray, Gérard; Fletcher, Monica; Fokkens, Wytske J; Fonseca, Joao; Gonzalez-Diaz, Sandra N; Grouse, Lawrence; Keil, Thomas; Kuna, Piotr; Larenas-Linnemann, Désirée; Lodrup Carlsen, Karin C; Meltzer, Eli O; Mullol, Jaoquim; Muraro, Antonella; Naclerio, Robert N; Palkonen, Susanna; Papadopoulos, Nikolaos G; Passalacqua, Giovanni; Price, David; Ryan, Dermot; Samolinski, Boleslaw; Scadding, Glenis K; Sheikh, Aziz; Spertini, François; Valiulis, Arunas; Valovirta, Erkka; Walker, Samantha; Wickman, Magnus; Yorgancioglu, Arzu; Haahtela, Tari; Zuberbier, Torsten
2016-08-01
The selection of pharmacotherapy for patients with allergic rhinitis (AR) depends on several factors, including age, prominent symptoms, symptom severity, control of AR, patient preferences, and cost. Allergen exposure and the resulting symptoms vary, and treatment adjustment is required. Clinical decision support systems (CDSSs) might be beneficial for the assessment of disease control. CDSSs should be based on the best evidence and algorithms to aid patients and health care professionals to jointly determine treatment and its step-up or step-down strategy depending on AR control. Contre les MAladies Chroniques pour un VIeillissement Actif en Languedoc-Roussillon (MACVIA-LR [fighting chronic diseases for active and healthy ageing]), one of the reference sites of the European Innovation Partnership on Active and Healthy Ageing, has initiated an allergy sentinel network (the MACVIA-ARIA Sentinel Network). A CDSS is currently being developed to optimize AR control. An algorithm developed by consensus is presented in this article. This algorithm should be confirmed by appropriate trials.
Design of a Synthetic Aperture Array to Support Experiments in Active Control of Scattering
1990-06-01
becomes necessary to validate the theory and test the control system algorithms. While experiments in open water would be most like the anticipated... mathematical development of the beamforming algorithms used as well as an estimate of their applicability to the specifics of beamforming in a reverberant... Chebyshev array have been proposed. The method used in ARRAY, a nested product algorithm proposed by Bresler [21], is recommended by Pozar [19] and
NASA Astrophysics Data System (ADS)
Mazur, Krzysztof; Wrona, Stanislaw; Pawelczyk, Marek
2018-01-01
The paper presents the idea and a discussion of the implementation of multichannel global active noise control systems. As a test plant, an active casing is used. It has been developed by the authors to reduce device noise directly at the source by controlling vibration of its casing. To provide a global acoustic effect in the whole environment where the device operates, a number of secondary sources and sensors is required for each casing wall, making the whole active control structure complicated, i.e. with a large number of interacting channels. The paper discloses all details concerning the hardware setup and efficient implementation of control algorithms for the multichannel case. A new formulation is presented to introduce the distributed version of the Switched-error Filtered-reference Least Mean Squares (FXLMS) algorithm together with adaptation rate enhancement. The convergence rate of the proposed algorithm is compared with the original Multiple-error FXLMS. A number of hints drawn from the authors' many years of experience in microprocessor control system design and signal processing algorithm optimization are presented. They can be used for various active control and signal processing applications, both for academic research and commercialization.
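For readers unfamiliar with this family of algorithms, a single-channel filtered-reference LMS update — the building block the paper's distributed, switched-error multichannel variant generalizes — can be sketched as follows. Step size, tap count, and the use of the estimated secondary path s_hat to simulate the acoustic summation are illustrative assumptions:

```python
import numpy as np

def fxlms(x, d, s_hat, n_taps=64, mu=1e-3):
    """Single-channel filtered-reference (FX) LMS sketch. x: reference
    signal, d: disturbance at the error sensor, s_hat: estimated
    secondary-path impulse response. Returns the residual error signal."""
    w = np.zeros(n_taps)                     # adaptive control filter
    x_buf = np.zeros(n_taps)                 # reference history
    fx_buf = np.zeros(n_taps)                # filtered-reference history
    y_buf = np.zeros(len(s_hat))             # control-output history
    fx = np.convolve(x, s_hat)[: len(x)]     # reference filtered through s_hat
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1);  x_buf[0] = x[n]
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx[n]
        y_buf = np.roll(y_buf, 1);  y_buf[0] = w @ x_buf   # control signal
        e[n] = d[n] + s_hat @ y_buf          # anti-noise via (modeled) secondary path
        w -= mu * e[n] * fx_buf              # FXLMS weight update
    return e
```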
Ghiasi, Mohammad Sadegh; Arjmand, Navid; Boroushaki, Mehrdad; Farahmand, Farzam
2016-03-01
A six-degree-of-freedom musculoskeletal model of the lumbar spine was developed to predict the activity of trunk muscles during light, moderate and heavy lifting tasks in standing posture. The model was formulated as a multi-objective optimization problem, minimizing the sum of the cubed muscle stresses and maximizing the spinal stability index. Two intelligent optimization algorithms, i.e., the vector evaluated particle swarm optimization (VEPSO) and the nondominated sorting genetic algorithm (NSGA), were employed to solve the optimization problem. The optimal solution for each task was then found such that the corresponding in vivo intradiscal pressure could be reproduced. Results indicated that both algorithms predicted co-activity in the antagonistic abdominal muscles, as well as an increase in the stability index when going from the light to the heavy task. For all of the light, moderate and heavy tasks, the muscle activity predictions of VEPSO and NSGA were generally consistent and of the same order as the in vivo electromyography data. The proposed methodology is thought to provide improved estimates of muscle activities by considering spinal stability and incorporating the in vivo intradiscal pressure data.
A fast and accurate online sequential learning algorithm for feedforward networks.
Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N
2006-11-01
In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as the online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm builds on the ideas of the ELM of Huang et al., developed for batch learning, which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
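The recursive least-squares flavor of the OS-ELM update is compact enough to sketch directly. The version below uses random sigmoid additive nodes and follows the standard two-phase formulation; the hidden-node count and the assumption that the initialization batch has at least as many samples as hidden nodes (so the initial matrix is invertible) are the usual requirements:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_oselm(x0, t0, n_hidden=40):
    """Initialization phase on a first batch (x0, t0): random sigmoid
    additive hidden nodes; output weights solved by least squares."""
    a = rng.normal(size=(x0.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                  # random biases
    h = 1.0 / (1.0 + np.exp(-(x0 @ a + b)))        # hidden-layer outputs
    p = np.linalg.inv(h.T @ h)
    beta = p @ h.T @ t0                            # initial output weights
    return a, b, p, beta

def oselm_update(a, b, p, beta, x, t):
    """Sequential phase: fold in a new chunk (x, t) recursively,
    without revisiting any previously seen data."""
    h = 1.0 / (1.0 + np.exp(-(x @ a + b)))
    k = np.eye(x.shape[0]) + h @ p @ h.T
    p = p - p @ h.T @ np.linalg.solve(k, h @ p)
    beta = beta + p @ h.T @ (t - h @ beta)
    return p, beta
```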
Classifying Volcanic Activity Using an Empirical Decision Making Algorithm
NASA Astrophysics Data System (ADS)
Junek, W. N.; Jones, W. L.; Woods, M. T.
2012-12-01
Detection and classification of developing volcanic activity is vital to eruption forecasting. Timely information regarding an impending eruption would aid civil authorities in determining the proper response to a developing crisis. In this presentation, volcanic activity is characterized using an event tree classifier and a suite of empirical statistical models derived through logistic regression. Forecasts are reported in terms of the United States Geological Survey (USGS) volcano alert level system. The algorithm employs multidisciplinary data (e.g., seismic, GPS, InSAR) acquired by various volcano monitoring systems and source modeling information to forecast the likelihood that an eruption, with a volcanic explosivity index (VEI) > 1, will occur within a quantitatively constrained area. Logistic models are constructed from a sparse and geographically diverse dataset assembled from a collection of historic volcanic unrest episodes. Bootstrapping techniques are applied to the training data to allow for the estimation of robust logistic model coefficients. Cross validation produced a series of receiver operating characteristic (ROC) curves with areas ranging between 0.78 and 0.81, which indicates that the algorithm has good predictive capability. The ROC curves also allowed for the determination of a false positive rate and an optimum detection rate for each stage of the algorithm. Forecasts for historic volcanic unrest episodes in North America and Iceland were computed and are consistent with the actual outcomes of the events.
Decoding ensemble activity from neurophysiological recordings in the temporal cortex.
Kreiman, Gabriel
2011-01-01
We study subjects with pharmacologically intractable epilepsy who undergo semi-chronic implantation of electrodes for clinical purposes. We record physiological activity from tens to more than one hundred electrodes implanted in different parts of neocortex. These recordings provide higher spatial and temporal resolution than non-invasive measures of human brain activity. Here we discuss our efforts to develop hardware and algorithms to interact with the human brain by decoding ensemble activity in single trials. We focus our discussion on decoding visual information during a variety of visual object recognition tasks but the same technologies and algorithms can also be directly applied to other cognitive phenomena.
Alday, Erick A. Perez; Colman, Michael A.; Langley, Philip; Butters, Timothy D.; Higham, Jonathan; Workman, Antony J.; Hancox, Jules C.; Zhang, Henggui
2015-01-01
Rapid atrial arrhythmias such as atrial fibrillation (AF) predispose to ventricular arrhythmias, sudden cardiac death and stroke. Identifying the origin of atrial ectopic activity from the electrocardiogram (ECG) can help to diagnose the early onset of AF in a cost-effective manner. The complex and rapid atrial electrical activity during AF makes it difficult to obtain detailed information on atrial activation using the standard 12-lead ECG alone. Compared to conventional 12-lead ECG, more detailed ECG lead configurations may provide further information about spatio-temporal dynamics of the body surface potential (BSP) during atrial excitation. We apply a recently developed 3D human atrial model to simulate electrical activity during normal sinus rhythm and ectopic pacing. The atrial model is placed into a newly developed torso model which considers the presence of the lungs, liver and spinal cord. A boundary element method is used to compute the BSP resulting from atrial excitation. Elements of the torso mesh corresponding to the locations of the placement of the electrodes in the standard 12-lead and a more detailed 64-lead ECG configuration were selected. The ectopic focal activity was simulated at various origins across all the different regions of the atria. Simulated BSP maps during normal atrial excitation (i.e. sinoatrial node excitation) were compared to those observed experimentally (obtained from the 64-lead ECG system), showing a strong agreement between the evolution in time of the simulated and experimental data in the P-wave morphology of the ECG and dipole evolution. An algorithm to obtain the location of the stimulus from a 64-lead ECG system was developed. The algorithm presented had a success rate of 93%, meaning that it correctly identified the origin of atrial focus in 75/80 simulations, and involved a general approach relevant to any multi-lead ECG system. This represents a significant improvement over previously developed algorithms. PMID:25611350
Advancing from offline to online activity recognition with wearable sensors.
Ermes, Miikka; Parkka, Juha; Cluitmans, Luc
2008-01-01
Activity recognition with wearable sensors could motivate people to perform a variety of different sports and other physical exercises. We have earlier developed algorithms for offline analysis of activity data collected with wearable sensors. In this paper, we present our current progress in porting the platform for the existing algorithms to an online version on a PDA. Acceleration data are obtained from wireless motion bands, which send the 3D raw acceleration signals via a Bluetooth link to the PDA, which then performs the data collection, feature extraction and activity classification. As a proof of concept, the online activity system was tested with three subjects. All of them performed at least 5 minutes of each of the following activities: lying, sitting, standing, walking, running and cycling with an exercise bike. The average second-by-second classification accuracies for the subjects were 99%, 97%, and 82%. These results suggest that the earlier developed offline analysis methods for acceleration data obtained from wearable sensors can be successfully implemented in an online activity recognition application.
Bourke, Alan K; Klenk, Jochen; Schwickert, Lars; Aminian, Kamiar; Ihlen, Espen A F; Mellone, Sabato; Helbostad, Jorunn L; Chiari, Lorenzo; Becker, Clemens
2016-08-01
Automatic fall detection will promote independent living and reduce the consequences of falls in the elderly by ensuring people can confidently live safely at home for longer. In laboratory studies, inertial sensor technology has been shown capable of distinguishing falls from normal activities. However, fewer than 7% of fall-detection algorithm studies have used fall data recorded from elderly people in real life. The FARSEEING project has compiled a database of real-life falls from elderly people, to gain new knowledge about fall events and to develop fall detection algorithms to combat the problems associated with falls. We have extracted 12 different kinematic, temporal and kinetic features from a data-set of 89 real-world falls and 368 activities of daily living. Using the extracted features, we applied machine learning techniques and produced a selection of algorithms based on different feature combinations. The best algorithm employs 10 different features and produced a sensitivity of 0.88 and a specificity of 0.87 in classifying falls correctly. This algorithm can be used to distinguish real-world falls from normal activities of daily living using a sensor consisting of a tri-axial accelerometer and tri-axial gyroscope located at L5.
García-Massó, X; Serra-Añó, P; Gonzalez, L M; Ye-Lin, Y; Prats-Boluda, G; Garcia-Casado, J
2015-10-01
This was a cross-sectional study. The main objective of this study was to develop and test classification algorithms based on machine learning, using accelerometers to identify the activity type performed by manual wheelchair users with spinal cord injury (SCI). The study was conducted in the Physical Therapy department and the Physical Education and Sports department of the University of Valencia. A total of 20 volunteers were asked to perform 10 physical activities: lying down, body transfers, moving items, mopping, working on a computer, watching TV, arm-ergometer exercises, passive propulsion, slow propulsion and fast propulsion, while fitted with four accelerometers placed on both wrists, the chest and the waist. The activities were grouped into five categories: sedentary, locomotion, housework, body transfers and moderate physical activity. Different machine learning algorithms were used to develop individual and group activity classifiers from the acceleration data for different combinations of number and position of the accelerometers. We found that although the accuracy of the classifiers for individual activities was moderate (55-72%), with higher values for a greater number of accelerometers, grouped activities were correctly classified in a high percentage of cases (83.2-93.6%). With only two accelerometers and the quadratic discriminant analysis algorithm, we achieved a reasonably accurate group activity recognition system (>90%). Such a system, requiring minimal intervention, would be a valuable tool for studying physical activity in individuals with SCI.
NASA Astrophysics Data System (ADS)
Frassinetti, L.; Olofsson, K. E. J.; Brunsell, P. R.; Drake, J. R.
2011-06-01
The EXTRAP T2R feedback system (active coils, sensor coils and controller) is used to study and develop new tools for advanced control of MHD instabilities in fusion plasmas. New feedback algorithms developed in the EXTRAP T2R reversed-field pinch allow flexible and independent control of each magnetic harmonic. Methods developed in control theory and applied to EXTRAP T2R allow closed-loop identification of the machine plant and of the resistive wall mode growth rates. The plant identification is the starting point for the development of output-tracking algorithms that enable the generation of external magnetic perturbations. These algorithms will then be used to study the effect of a resonant magnetic perturbation (RMP) on the tearing mode (TM) dynamics. It will be shown that a stationary RMP can induce oscillations in the amplitude and jumps in the phase of the rotating TM, and that the RMP strongly affects the magnetic island position.
Recent progress in multi-electrode spike sorting methods.
Lefebvre, Baptiste; Yger, Pierre; Marre, Olivier
2016-11-01
In recent years, arrays of extracellular electrodes have been developed and manufactured to record simultaneously from hundreds of electrodes packed at high density. These recordings should allow neuroscientists to reconstruct, with the help of signal processing algorithms, the individual activity of the neurons spiking in the vicinity of these electrodes. The algorithms need to solve a source separation problem, also known as spike sorting. However, these new devices challenge the classical way of doing spike sorting. Here we review different methods that have been developed to sort spikes from these large-scale recordings. We describe the common properties of these algorithms, as well as their main differences. Finally, we outline the issues that remain to be solved by future spike sorting algorithms.
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Montes, Marcos J.; Davis, Curtiss O.
2003-01-01
This SIMBIOS contract supports several activities over its three-year time span. These include certain computational aspects of atmospheric correction, including the modification of our hyperspectral atmospheric correction algorithm Tafkaa for various multi-spectral instruments, such as SeaWiFS, MODIS, and GLI. Additionally, since absorbing aerosols are becoming common in many coastal areas, we are performing model calculations to incorporate various absorbing aerosol models into the tables used by our Tafkaa atmospheric correction algorithm. Finally, we have developed algorithms to use MODIS data to characterize thin cirrus effects on aerosol retrieval.
Automated isotope identification algorithm using artificial neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamuda, Mark; Stinnett, Jacob; Sullivan, Clair
There is a need to develop an algorithm that can determine the relative activities of radio-isotopes in a large dataset of low-resolution gamma-ray spectra that contains a mixture of many radio-isotopes. Low-resolution gamma-ray spectra that contain mixtures of radio-isotopes often exhibit feature overlap, requiring algorithms that can analyze these features when overlap occurs. While machine learning and pattern recognition algorithms have shown promise for the problem of radio-isotope identification, their ability to identify and quantify mixtures of radio-isotopes has not been studied. Because machine learning algorithms use abstract features of the spectrum, such as the shape of overlapping peaks and the Compton continuum, they are a natural choice for analyzing radio-isotope mixtures. An artificial neural network (ANN) has been trained to calculate the relative activities of 32 radio-isotopes in a spectrum. Furthermore, the ANN is trained with simulated gamma-ray spectra, allowing easy expansion of the library of target radio-isotopes. In this paper we present our initial algorithms based on an ANN and evaluate them against a series of measured and simulated spectra.
Automated isotope identification algorithm using artificial neural networks
Kamuda, Mark; Stinnett, Jacob; Sullivan, Clair
2017-04-12
There is a need to develop an algorithm that can determine the relative activities of radio-isotopes in a large dataset of low-resolution gamma-ray spectra that contains a mixture of many radio-isotopes. Low-resolution gamma-ray spectra that contain mixtures of radio-isotopes often exhibit feature overlap, requiring algorithms that can analyze these features when overlap occurs. While machine learning and pattern recognition algorithms have shown promise for the problem of radio-isotope identification, their ability to identify and quantify mixtures of radio-isotopes has not been studied. Because machine learning algorithms use abstract features of the spectrum, such as the shape of overlapping peaks and the Compton continuum, they are a natural choice for analyzing radio-isotope mixtures. An artificial neural network (ANN) has been trained to calculate the relative activities of 32 radio-isotopes in a spectrum. Furthermore, the ANN is trained with simulated gamma-ray spectra, allowing easy expansion of the library of target radio-isotopes. In this paper we present our initial algorithms based on an ANN and evaluate them against a series of measured and simulated spectra.
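A minimal sketch of the kind of network described — a small fully connected regressor mapping a low-resolution spectrum to relative isotope activities — might look as follows. The 1024-channel input, single 256-unit hidden layer, and random placeholder training data are all assumptions for illustration; the paper trains on simulated spectra with its own architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

n_channels, n_isotopes = 1024, 32
x_train = np.random.rand(500, n_channels)                      # placeholder spectra
y_train = np.random.dirichlet(np.ones(n_isotopes), size=500)   # relative activities

ann = MLPRegressor(hidden_layer_sizes=(256,), activation="relu", max_iter=300)
ann.fit(x_train, y_train)

pred = np.clip(ann.predict(x_train[:1]), 0, None)  # clamp negatives
pred /= pred.sum()                                 # renormalize to relative activities
```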
Efficient search, mapping, and optimization of multi-protein genetic systems in diverse bacteria
Farasat, Iman; Kushwaha, Manish; Collens, Jason; Easterbrook, Michael; Guido, Matthew; Salis, Howard M
2014-01-01
Developing predictive models of multi-protein genetic systems to understand and optimize their behavior remains a combinatorial challenge, particularly when measurement throughput is limited. We developed a computational approach to build predictive models and identify optimal sequences and expression levels, while circumventing combinatorial explosion. Maximally informative genetic system variants were first designed by the RBS Library Calculator, an algorithm to design sequences for efficiently searching a multi-protein expression space across a > 10,000-fold range with tailored search parameters and well-predicted translation rates. We validated the algorithm's predictions by characterizing 646 genetic system variants, encoded in plasmids and genomes, expressed in six gram-positive and gram-negative bacterial hosts. We then combined the search algorithm with system-level kinetic modeling, requiring the construction and characterization of 73 variants to build a sequence-expression-activity map (SEAMAP) for a biosynthesis pathway. Using model predictions, we designed and characterized 47 additional pathway variants to navigate its activity space, find optimal expression regions with desired activity response curves, and relieve rate-limiting steps in metabolism. Creating sequence-expression-activity maps accelerates the optimization of many protein systems and allows previous measurements to quantitatively inform future designs. PMID:24952589
Development of Personalized Urination Recognition Technology Using Smart Bands.
Eun, Sung-Jong; Whangbo, Taeg-Keun; Park, Dong Kyun; Kim, Khae-Hawn
2017-04-01
This study collected and analyzed activity data sensed through smart bands worn by patients in order to resolve the clinical issues posed by using voiding charts. By developing a smart band-based algorithm for recognizing urination activity in patients, this study aimed to explore the feasibility of urination monitoring systems. The goal was an algorithm that recognizes urination based on a patient's posture and changes in posture. Motion data were obtained from a smart band on the arm. An algorithm that recognizes the 3 stages of urination (forward movement, urination, backward movement) was developed based on data collected from a 3-axis accelerometer and from tilt-angle data. Real-time data were acquired from the smart band, and for data corresponding to a certain duration, the absolute value of the signals was calculated and then compared with a set threshold value to determine the occurrence of vibration signals. In feature extraction, the most essential information describing each pattern was identified after analyzing the characteristics of the data. The results of the feature extraction process were sorted using a classifier to detect urination. An experiment was carried out to assess the performance of the recognition technology proposed in this study. The final accuracy of the algorithm was calculated based on clinical guidelines for urologists. The experiment showed a high average accuracy of 90.4%, demonstrating the robustness of the proposed algorithm. The proposed urination recognition technology draws on acceleration and tilt-angle data collected via a smart band; these data were then analyzed using a classifier after comparative analysis with standardized feature patterns.
Ko, Gene M; Garg, Rajni; Bailey, Barbara A; Kumar, Sunil
2016-01-01
Quantitative structure-activity relationship (QSAR) models can be used as a predictive tool for virtual screening of chemical libraries to identify novel drug candidates. The aims of this paper were to report the results of a study performed for descriptor selection, QSAR model development, and virtual screening for identifying novel HIV-1 integrase inhibitor drug candidates. First, three evolutionary algorithms were compared for descriptor selection: differential evolution-binary particle swarm optimization (DE-BPSO), binary particle swarm optimization, and genetic algorithms. Next, three QSAR models were developed from an ensemble of multiple linear regression, partial least squares, and extremely randomized trees models. A comparison of the performances of the three evolutionary algorithms showed that DE-BPSO has a significant improvement over the other two algorithms. The QSAR models developed in this study were used in consensus as a predictive tool for virtual screening of the NCI Open Database, containing 265,242 compounds, to identify potential novel HIV-1 integrase inhibitors. Six compounds were predicted to be highly active (pIC50 > 6) by each of the three models. The use of a hybrid evolutionary algorithm (DE-BPSO) for descriptor selection and QSAR model development in drug design is a novel approach. Consensus modeling may provide better predictivity by taking into account a broader range of chemical properties within the data set conducive to inhibition that may be missed by an individual model. The six compounds identified provide novel drug candidate leads in the design of next-generation HIV-1 integrase inhibitors targeting drug-resistant mutant viruses.
Whittington, James C. R.; Bogacz, Rafal
2017-01-01
To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output. PMID:28333583
Whittington, James C R; Bogacz, Rafal
2017-05-01
To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output.
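A toy version of the mechanism described — activities relaxing to minimize local prediction errors while the output is clamped to the target, followed by purely Hebbian weight updates of the form error × presynaptic activity — can be written in a few lines. This is a minimal two-layer sketch of the general predictive coding idea, with learning rates and iteration counts chosen arbitrarily, not the authors' exact model:

```python
import numpy as np

def pc_train_step(w1, w2, x, y, n_inf=20, lr_z=0.1, lr_w=0.01):
    """One supervised step of a two-layer predictive coding network.
    Energy: E = ||z - w1 @ x||^2/2 + ||y - w2 @ tanh(z)||^2/2 with the
    output clamped to the target y. Hidden activity z relaxes by gradient
    descent on E; weights then change by local Hebbian rules."""
    z = w1 @ x                                   # feedforward initialization
    for _ in range(n_inf):                       # inference phase
        e1 = z - w1 @ x                          # hidden-layer prediction error
        e2 = y - w2 @ np.tanh(z)                 # output-layer prediction error
        z += lr_z * (-e1 + (1 - np.tanh(z) ** 2) * (w2.T @ e2))
    e1 = z - w1 @ x                              # errors at the relaxed state
    e2 = y - w2 @ np.tanh(z)
    w1 += lr_w * np.outer(e1, x)                 # local: error x presynaptic activity
    w2 += lr_w * np.outer(e2, np.tanh(z))
    return w1, w2
```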
Hwang, J Y; Kang, J M; Jang, Y W; Kim, H
2004-01-01
A novel algorithm and a real-time ambulatory monitoring system for fall detection in elderly people are described. Our system comprises an accelerometer, a tilt sensor, and a gyroscope; for real-time monitoring, we used Bluetooth. The accelerometer measures kinetic force, while the tilt sensor and gyroscope estimate body posture. We also propose an algorithm for fall detection that uses signals obtained from the system attached to the chest. To evaluate our system and algorithm, we experimented on three people aged over 26 years. The experiment covered four cases (forward fall, backward fall, side fall, and sit-stand), each repeated ten times, and a daily-life activity experiment performed once by each subject. These experiments showed that our system and algorithm could distinguish between falling and daily-life activity. Moreover, the accuracy of fall detection is 96.7%. Our system is especially adapted for long-term, real-time ambulatory monitoring of elderly people in emergency situations.
Development of a two wheeled self balancing robot with speech recognition and navigation algorithm
NASA Astrophysics Data System (ADS)
Rahman, Md. Muhaimin; Ashik-E-Rasul; Haq, Nowab Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh
2016-07-01
This paper discusses the modeling, construction and development of the navigation algorithm of a two-wheeled self-balancing mobile robot in an enclosure. We discuss the design of the two main controller algorithms, both PID-based, on the robot model. Simulation is performed in the SIMULINK environment. The controller is developed primarily for self-balancing of the robot and also for its positioning. For navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position. The navigation system needs to be calibrated before the navigation process starts. Almost all of the earlier template matching algorithms that can be found in the open literature can only trace the robot, but the algorithm proposed here can also locate the positions of other objects in an enclosure, such as furniture and tables. This enables the robot to know the exact location of every stationary object in the enclosure. Moreover, some additional features, such as speech recognition and object detection, are added. For object detection, the single-board computer Raspberry Pi is used. The system is programmed to analyze images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
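The balancing and positioning loops described are PID controllers; a generic discrete-time form is sketched below. The gains and sample time are illustrative, not the paper's tuned values:

```python
class PID:
    """Textbook discrete PID controller of the kind applied here to both
    balancing (tilt angle) and positioning."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt               # accumulated error
        deriv = (err - self.prev_err) / self.dt      # error rate of change
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

balance = PID(kp=40.0, ki=0.5, kd=1.2, dt=0.01)      # drives tilt angle to zero
motor_cmd = balance.update(setpoint=0.0, measured=0.03)  # 0.03 rad tilt
```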
Li, Meina; Kim, Youn Tae
2017-01-01
Athlete evaluation systems can effectively monitor daily training and boost performance while reducing injuries. Conventional heart-rate measurement systems can easily be affected by movement artifacts, especially in the case of athletes, whose high-intensity activities generate significant noise. To improve both comfort for athletes and monitoring accuracy, we propose combining robust heart-rate and agility-index monitoring algorithms into a single small, light node. A band-pass-filter-based R-wave detection algorithm was developed. The agility index was calculated by preprocessing with band-pass filtering and employing the zero-crossing detection method. The evaluation was conducted in both laboratory and field environments to verify the accuracy and reliability of the algorithm. The heart-rate and agility-index measurements can be wirelessly transmitted to a personal computer in real time by the ZigBee telecommunication system. The results show that the error rate of the heart-rate measurement is within 2%, which is comparable with that of the traditional wired measurement method. The sensitivity of the agility index, which distinguishes activity speed, changed only slightly. Thus, we confirmed that the developed algorithm could be used in an effective and safe exercise-evaluation system for athletes. PMID:29039763
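A band-pass-filter-based R-wave detector of the general kind described can be sketched with standard signal processing tools. The 5-15 Hz passband, amplitude threshold, and 300 ms refractory period below are common textbook choices, not the authors' published parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_bpm(ecg, fs=250):
    """Isolate the QRS band with a band-pass filter, then pick prominent
    peaks separated by a physiological refractory period; returns the
    mean heart rate in beats per minute."""
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    peaks, _ = find_peaks(filtered,
                          height=2.0 * np.std(filtered),
                          distance=int(0.3 * fs))   # >= 300 ms between beats
    rr = np.diff(peaks) / fs                        # RR intervals in seconds
    return 60.0 / rr.mean() if rr.size else float("nan")
```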
Triggering Interventions for Influenza: The ALERT Algorithm
Reich, Nicholas G.; Cummings, Derek A. T.; Lauer, Stephen A.; Zorn, Martha; Robinson, Christine; Nyquist, Ann-Christine; Price, Connie S.; Simberkoff, Michael; Radonovich, Lewis J.; Perl, Trish M.
2015-01-01
Background. Early, accurate predictions of the onset of influenza season enable targeted implementation of control efforts. Our objective was to develop a tool to assist public health practitioners, researchers, and clinicians in defining the community-level onset of seasonal influenza epidemics. Methods. Using recent surveillance data on virologically confirmed infections of influenza, we developed the Above Local Elevated Respiratory Illness Threshold (ALERT) algorithm, a method to identify the period of highest seasonal influenza activity. We used data from 2 large hospitals that serve Baltimore, Maryland and Denver, Colorado, and the surrounding geographic areas. The data used by ALERT are routinely collected surveillance data: weekly case counts of laboratory-confirmed influenza A virus. The main outcome is the percentage of prospective seasonal influenza cases identified by the ALERT algorithm. Results. When ALERT thresholds designed to capture 90% of all cases were applied prospectively to the 2011–2012 and 2012–2013 influenza seasons in both hospitals, 71%–91% of all reported cases fell within the ALERT period. Conclusions. The ALERT algorithm provides a simple, robust, and accurate metric for determining the onset of elevated influenza activity at the community level. This new algorithm provides valuable information that can impact infection prevention recommendations, public health practice, and healthcare delivery. PMID:25414260
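Conceptually, applying a pre-computed ALERT threshold is simple; the substance of the published method lies in choosing that threshold from historical seasons so that a target fraction of cases (e.g. 90%) falls inside the flagged period. A minimal sketch of the application step, with made-up weekly counts:

```python
def alert_onset(weekly_cases, threshold):
    """Declare the influenza period at the first week whose count of
    laboratory-confirmed cases reaches a historically derived threshold."""
    for week, count in enumerate(weekly_cases):
        if count >= threshold:
            return week          # onset: trigger interventions here
    return None                  # threshold never crossed this season

season = [1, 0, 2, 3, 9, 14, 30, 42, 25, 11, 4, 1]   # hypothetical counts
print(alert_onset(season, threshold=10))              # -> week 5
```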
Computational Fluid Dynamics. [numerical methods and algorithm development
NASA Technical Reports Server (NTRS)
1992-01-01
This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.
Isaacson, M D; Srinivasan, S; Lloyd, L L
2010-01-01
MathSpeak is a set of rules for speaking mathematical expressions non-ambiguously. These rules have been incorporated into a computerised module that translates printed mathematics into the non-ambiguous MathSpeak form for synthetic speech rendering. Differences between individual utterances produced with the translator module are difficult to discern because of insufficient pausing between utterances; hence, the purpose of this study was to develop an algorithm for improving the synthetic speech rendering of MathSpeak. To improve synthetic speech renderings, an algorithm for inserting pauses was developed based upon recordings of middle and high school math teachers speaking mathematical expressions. Efficacy testing of this algorithm was conducted with college students without disabilities and high school/college students with visual impairments. Parameters measured included reception accuracy, short-term memory retention, MathSpeak processing capacity and various rankings concerning the quality of synthetic speech renderings. All parameters measured showed statistically significant improvements when the algorithm was used. The algorithm improves the quality and information processing capacity of synthetic speech renderings of MathSpeak. This increases the capacity of individuals with print disabilities to perform mathematical activities and to successfully fulfill science, technology, engineering and mathematics academic and career objectives.
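The study derives pause placement and duration from teacher recordings; as a purely hypothetical illustration of the mechanism (boundary token names, durations, and the use of SSML are all assumptions, not the study's rules), a pause inserter might look like:

```python
# Longer breaks at structural boundaries than between ordinary tokens.
BOUNDARY_MS, TOKEN_MS = 350, 120
BOUNDARIES = {"fraction", "end-fraction", "begin-root", "end-root", "equals"}

def insert_pauses(tokens):
    """Wrap a MathSpeak token stream in SSML with per-token breaks."""
    parts = ["<speak>"]
    for tok in tokens:
        parts.append(tok)
        pause = BOUNDARY_MS if tok.lower() in BOUNDARIES else TOKEN_MS
        parts.append(f'<break time="{pause}ms"/>')
    parts.append("</speak>")
    return " ".join(parts)

print(insert_pauses("x equals fraction a over b end-fraction".split()))
```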
Brier, Jessica; Carolyn, Moalem; Haverly, Marsha; Januario, Mary Ellen; Padula, Cynthia; Tal, Ahuva; Triosh, Henia
2015-03-01
To develop a clinical algorithm guiding nurses' critical thinking through systematic surveillance, assessment, actions required, and communication strategies, an international, multiphase project was initiated. Patients receive hospital care postoperatively because they require the skilled surveillance of nurses. Effective assessment of postoperative patients is essential for early detection of clinical deterioration and optimal care management. Despite the significant amount of time devoted to surveillance activities, there is a lack of evidence that nurses use a consistent, systematic approach in surveillance, management, and communication, potentially leading to less optimal outcomes. Several explanations for this lack of consistency have been suggested in the literature. A mixed-methods approach was used: retrospective chart review, semi-structured interviews conducted with expert nurses (n = 10), and algorithm development. Themes developed from the semi-structured interviews, including (1) complete, systematic assessment, (2) something is not right, (3) validating with others, (4) influencing factors, and (5) frustration with lack of response when communicating findings, were used as the basis for development of the Surveillance Algorithm for Post-Surgical Patients. The algorithm proved beneficial in limited use in clinical settings; further work is needed to fully test it in education and practice. The Surveillance Algorithm for Post-Surgical Patients represents the approach of expert nurses and serves to guide less expert nurses' observations, critical thinking, actions, and communication. Based on this approach, the algorithm assists nurses in developing skills that promote early detection, intervention, and communication in cases of patient deterioration.
Predicting Activity Energy Expenditure Using the Actical[R] Activity Monitor
ERIC Educational Resources Information Center
Heil, Daniel P.
2006-01-01
This study developed algorithms for predicting activity energy expenditure (AEE) in children (n = 24) and adults (n = 24) from the Actical[R] activity monitor. Each participant performed 10 activities (supine resting, three sitting, three house cleaning, and three locomotion) while wearing monitors on the ankle, hip, and wrist; AEE was computed…
Integrated identification, modeling and control with applications
NASA Astrophysics Data System (ADS)
Shi, Guojun
This thesis deals with the integration of system design, identification, modeling and control. In particular, six interdisciplinary engineering problems are addressed and investigated. Theoretical results are established and applied to structural vibration reduction and engine control problems. First, the data-based LQG control problem is formulated and solved. It is shown that a state space model is not necessary to solve this problem; rather, a finite sequence from the impulse response is the only model data required to synthesize an optimal controller. The new theory avoids unnecessary reliance on a model, required in the conventional design procedure. The infinite horizon model predictive control problem is addressed for multivariable systems. The basic properties of the receding horizon implementation strategy are investigated and the complete framework for solving the problem is established. The new theory allows the accommodation of hard input constraints and time delays. The developed control algorithms guarantee closed-loop stability. A closed-loop identification and infinite horizon model predictive control design procedure is established for engine speed regulation. The developed algorithms are tested on the Cummins Engine Simulator and the desired results are obtained. A finite signal-to-noise ratio model is considered for noise signals. An information quality index is introduced which measures the essential information precision required for stabilization. The problems of minimum variance control and covariance control are formulated and investigated. Convergent algorithms are developed for solving the problems of interest. The problem of integrated passive and active control design is addressed in order to improve overall system performance. A design algorithm is developed which simultaneously finds: (i) the optimal values of the stiffness and damping ratios for the structure, and (ii) an optimal output variance constrained stabilizing controller such that the active control energy is minimized. A weighted q-Markov COVER method is introduced for identification with measurement noise. The result is used to develop an iterative closed-loop identification/control design algorithm. The effectiveness of the algorithm is illustrated by experimental results.
GSFC Technology Development Center Report
NASA Technical Reports Server (NTRS)
Himwich, Ed; Gipson, John
2013-01-01
This report summarizes the activities of the GSFC Technology Development Center (TDC) for 2012 and forecasts planned activities for 2013. The GSFC TDC develops station software, including the Field System (FS) and scheduling software (SKED); hardware, including tools for station timing and meteorology; scheduling algorithms; and operational procedures. It provides a pool of individuals to assist with station implementation, check-out, upgrades, and training.
NASA Technical Reports Server (NTRS)
Goodman, Steven; Blakeslee, Richard; Koshak, William
2008-01-01
The Geostationary Lightning Mapper (GLM) is a single channel, near-IR optical transient event detector, used to detect, locate and measure total lightning activity over the full disk as part of a 3-axis stabilized, geostationary weather satellite system. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series with a planned launch in 2014 will carry a GLM that will provide continuous day and night observations of lightning from the west coast of Africa (GOES-E) to New Zealand (GOES-W) when the constellation is fully operational. The mission objectives for the GLM are to 1) provide continuous, full-disk lightning measurements for storm warning and Nowcasting, 2) provide early warning of tornado activity, and 3) accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-Present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13-year data record of global lightning activity. Instrument formulation studies were completed in March 2007 and the implementation phase to develop a prototype model and up to four flight units is expected to begin in the latter part of the year. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2B algorithms and applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama and the Washington DC Metropolitan area) are being used to develop the pre-launch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution. Real-time lightning mapping data provided to selected National Weather Service forecast offices in the Southern and Eastern Regions are also improving our understanding of the application of these data in the severe storm warning process and help to accelerate the development of the pre-launch algorithms and Nowcasting applications.
NASA Technical Reports Server (NTRS)
Goodman, Steven; Blakeslee, Richard; Koshak, William; Petersen, Walt; Buechler, Dennis; Krehbiel, Paul; Gatlin, Patrick; Zubrick, Steven
2008-01-01
The Geostationary Lightning Mapper (GLM) is a single channel, near-IR optical transient event detector, used to detect, locate and measure total lightning activity over the full disk as part of a 3-axis stabilized, geostationary weather satellite system. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series with a planned launch in 2014 will carry a GLM that will provide continuous day and night observations of lightning from the west coast of Africa (GOES-E) to New Zealand (GOES-W) when the constellation is fully operational. The mission objectives for the GLM are to 1) provide continuous, full-disk lightning measurements for storm warning and Nowcasting, 2) provide early warning of tornadic activity, and 3) accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-Present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13-year data record of global lightning activity. Instrument formulation studies were completed in March 2007 and the implementation phase to develop a prototype model and up to four flight units is expected to begin in the latter part of the year. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2B algorithms and applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama and the Washington DC Metropolitan area) are being used to develop the pre-launch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution. Real-time lightning mapping data provided to selected National Weather Service forecast offices in the Southern and Eastern Regions are also improving our understanding of the application of these data in the severe storm warning process and help to accelerate the development of the pre-launch algorithms and Nowcasting applications.
Algorithm Visualization: The State of the Field
ERIC Educational Resources Information Center
Shaffer, Clifford A.; Cooper, Matthew L.; Alon, Alexander Joel D.; Akbar, Monika; Stewart, Michael; Ponce, Sean; Edwards, Stephen H.
2010-01-01
We present findings regarding the state of the field of Algorithm Visualization (AV) based on our analysis of a collection of over 500 AVs. We examine how AVs are distributed among topics, who created them and when, their overall quality, and how they are disseminated. There does exist a cadre of good AVs and active developers. Unfortunately, we…
Automated detection and characterization of harmonic tremor in continuous seismic data
NASA Astrophysics Data System (ADS)
Roman, Diana C.
2017-06-01
Harmonic tremor is a common feature of volcanic, hydrothermal, and ice sheet seismicity and is thus an important proxy for monitoring changes in these systems. However, no automated methods for detecting harmonic tremor currently exist. Because harmonic tremor shares characteristics with speech and music, digital signal processing techniques for analyzing these signals can be adapted. I develop a novel pitch-detection-based algorithm to automatically identify occurrences of harmonic tremor and characterize their frequency content. The algorithm is applied to seismic data from Popocatepetl Volcano, Mexico, and benchmarked against a monthlong manually detected catalog of harmonic tremor events. During a period of heightened eruptive activity from December 2014 to May 2015, the algorithm detects 1465 min of harmonic tremor, which generally precede periods of heightened explosive activity. These results demonstrate the algorithm's ability to accurately characterize harmonic tremor while highlighting the need for additional work to understand its causes and implications at restless volcanoes.
Della Mea, Vincenzo; Quattrin, Omar; Parpinel, Maria
2017-12-01
Obesity and physical inactivity are the most important risk factors for chronic diseases. The present study aimed at (i) developing and testing a method for classifying household activities based on a smartphone accelerometer; (ii) evaluating the influence of smartphone position; and (iii) evaluating the acceptability of wearing a smartphone for activity recognition. An Android application was developed to record accelerometer data and calculate descriptive features on 5-second time blocks, which were then classified with nine algorithms. The household activities were: sitting, working at the computer, walking, ironing, sweeping the floor, going down stairs with a shopping bag, walking while carrying a large box, and climbing stairs with a shopping bag. Ten volunteers carried out the activities three times each, each time with the smartphone in a different position (pocket, arm, and wrist). Users were then asked to answer a questionnaire. In total, 1440 time blocks were collected. Three algorithms demonstrated an accuracy greater than 80% for all smartphone positions. Although some subjects found the smartphone uncomfortable, it did not appear to affect their activity. Smartphones can be used to recognize household activities. A further development is to estimate metabolic equivalent tasks starting from accelerometer data only.
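The per-block descriptive features are not enumerated in the abstract; the sketch below assumes a typical minimal set (per-axis mean and standard deviation plus signal magnitude area) computed over 5-second windows, which is one common choice for accelerometer-based activity recognition:

```python
import numpy as np

def block_features(ax, ay, az, fs=50, block_s=5):
    """Compute descriptive features on consecutive 5-second blocks of
    tri-axial accelerometer data; each row is one block's feature vector,
    ready to feed a classifier (e.g. a random forest)."""
    n = fs * block_s
    feats = []
    for i in range(0, len(ax) - n + 1, n):
        bx, by, bz = ax[i:i+n], ay[i:i+n], az[i:i+n]
        sma = np.mean(np.abs(bx) + np.abs(by) + np.abs(bz))  # signal magnitude area
        feats.append([bx.mean(), by.mean(), bz.mean(),
                      bx.std(),  by.std(),  bz.std(),  sma])
    return np.array(feats)
```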
Demonstration of Active Combustion Control
NASA Technical Reports Server (NTRS)
Lovett, Jeffrey A.; Teerlinck, Karen A.; Cohen, Jeffrey M.
2008-01-01
The primary objective of this effort was to demonstrate active control of combustion instabilities in a direct-injection gas turbine combustor that accurately simulates engine operating conditions and reproduces an engine-type instability. This report documents the second phase of a two-phase effort. The first phase involved the analysis of an instability observed in a developmental aeroengine and the design of a single-nozzle test rig to replicate that phenomenon. This was successfully completed in 2001 and is documented in the Phase I report. This second phase was directed toward demonstration of active control strategies to mitigate this instability and thereby demonstrate the viability of active control for aircraft engine combustors. This involved development of high-speed actuator technology, testing and analysis of how the actuation system was integrated with the combustion system, control algorithm development, and demonstration testing in the single-nozzle test rig. A 30 percent reduction in the amplitude of the high-frequency (570 Hz) instability was achieved using actuation systems and control algorithms developed within this effort. Even larger reductions were shown with a low-frequency (270 Hz) instability. This represents a unique achievement in the development and practical demonstration of active combustion control systems for gas turbine applications.
Simulation of empty container logistic management at depot
NASA Astrophysics Data System (ADS)
Sze, San-Nah; Sek, Siaw-Ying Doreen; Chiew, Kang-Leng; Tiong, Wei-King
2017-07-01
This study focuses on the empty container management problem in a deficit regional area. A deficit area is an area with more export activity than import activity, and therefore a persistent shortage of empty containers. This environment challenges trading companies' decisions on how to distribute empty containers. A simulation model fitted to this environment is developed. In addition, a simple heuristic algorithm that considers both hard and soft constraints is proposed to plan the logistics of empty container supply. The feasible route with the minimum cost is then determined by applying the proposed heuristic algorithm. The heuristic algorithm can be divided into three main phases: data sorting, data assignment, and time-window updating.
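The abstract names the three heuristic phases without detail, so the following greedy interpretation is purely schematic; the record fields, feasibility test, and post-assignment update rule are all invented for illustration.

```python
def assign_empty_containers(demands, supplies, cost):
    """Schematic greedy heuristic: serve the most urgent demands first,
    assigning each to the cheapest supply whose time window permits it."""
    plan = []
    for d in sorted(demands, key=lambda d: d["due"]):          # phase 1: sort
        feasible = [s for s in supplies
                    if s["qty"] > 0 and s["ready"] <= d["due"]]
        if not feasible:
            continue                                           # unmet demand
        best = min(feasible, key=lambda s: cost(s, d))         # phase 2: assign
        moved = min(best["qty"], d["qty"])
        plan.append((best["depot"], d["customer"], moved))
        best["qty"] -= moved                                   # phase 3: update
        best["ready"] += 1   # illustrative handling delay after a dispatch
    return plan
```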
Automated assessment of cognitive health using smart home technologies.
Dawadi, Prafulla N; Cook, Diane J; Schmitter-Edgecombe, Maureen; Parsey, Carolyn
2013-01-01
The goal of this work is to develop intelligent systems to monitor the wellbeing of individuals in their home environments. This paper introduces a machine learning-based method to automatically predict activity quality in smart homes and automatically assess cognitive health based on activity quality. This paper describes an automated framework to extract a set of features from smart home sensor data that reflects the activity performance, or ability of an individual to complete an activity, which can be input to machine learning algorithms. Output from learning algorithms, including principal component analysis, support vector machine, and logistic regression algorithms, is used to quantify activity quality for a complex set of smart home activities and predict the cognitive health of participants. Smart home activity data were gathered from volunteer participants (n=263) who performed a complex set of activities in our smart home testbed. We compare our automated activity quality prediction and cognitive health prediction with direct observation scores and health assessments obtained from neuropsychologists. With all samples included, we obtained a statistically significant correlation (r=0.54) between direct observation scores and predicted activity quality. Similarly, using a support vector machine classifier, we obtained reasonable classification accuracy (area under the ROC curve=0.80, g-mean=0.73) in classifying participants into two different cognitive classes, dementia and cognitively healthy. The results suggest that it is possible to automatically quantify the task quality of smart home activities and perform limited assessment of the cognitive health of an individual if smart home activities are properly chosen and learning algorithms are appropriately trained.
Automated Assessment of Cognitive Health Using Smart Home Technologies
Dawadi, Prafulla N.; Cook, Diane J.; Schmitter-Edgecombe, Maureen; Parsey, Carolyn
2014-01-01
BACKGROUND The goal of this work is to develop intelligent systems to monitor the wellbeing of individuals in their home environments. OBJECTIVE This paper introduces a machine learning-based method to automatically predict activity quality in smart homes and automatically assess cognitive health based on activity quality. METHODS This paper describes an automated framework to extract a set of features from smart home sensor data that reflects the activity performance, or ability of an individual to complete an activity, which can be input to machine learning algorithms. Output from learning algorithms, including principal component analysis, support vector machine, and logistic regression algorithms, is used to quantify activity quality for a complex set of smart home activities and predict the cognitive health of participants. RESULTS Smart home activity data were gathered from volunteer participants (n=263) who performed a complex set of activities in our smart home testbed. We compare our automated activity quality prediction and cognitive health prediction with direct observation scores and health assessments obtained from neuropsychologists. With all samples included, we obtained a statistically significant correlation (r=0.54) between direct observation scores and predicted activity quality. Similarly, using a support vector machine classifier, we obtained reasonable classification accuracy (area under the ROC curve = 0.80, g-mean = 0.73) in classifying participants into two different cognitive classes, dementia and cognitively healthy. CONCLUSIONS The results suggest that it is possible to automatically quantify the task quality of smart home activities and perform limited assessment of the cognitive health of an individual if smart home activities are properly chosen and learning algorithms are appropriately trained. PMID:23949177
Bourke, Alan K; van de Ven, Pepijn W J; Chaya, Amy E; OLaighin, Gearóid M; Nelson, John
2008-01-01
A fall detection system and algorithm, incorporated into a custom-designed garment, has been developed. The fall detection system uses a tri-axial accelerometer, microcontroller, battery, and Bluetooth module. This sensor is attached to a custom-designed vest, designed to be worn by an elderly person under clothing. The fall detection algorithm incorporates both impact and posture detection capability. The vest and fall algorithm were tested on young healthy subjects performing normal activities of daily living (ADL) and falls onto crash mats while wearing the vest and sensor. Results show that falls can be distinguished from normal activities with a sensitivity >90% and a specificity >99%, from a total data set of 264 falls and 165 normal ADL. By incorporating the fall-detection sensor into a custom-designed garment, it is anticipated that greater compliance in wearing a fall-detection system can be achieved, which will help reduce the incidence of the long-lie when falls occur in the elderly population. However, further long-term testing using elderly subjects is required to validate the system's performance.
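The impact-plus-posture logic described above can be sketched in a few lines. The thresholds and the choice of which axis is vertical are purely illustrative; the study's actual parameters are not given in the abstract.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def detect_fall(acc, fs, impact_g=2.5, lying_z=0.5, check_s=2.0):
    """Two-stage sketch: (1) acceleration magnitude exceeds an impact
    threshold; (2) shortly afterwards the (assumed) vertical axis reads
    near zero, i.e., the trunk is horizontal."""
    mag = np.linalg.norm(acc, axis=1) / G            # magnitude in g
    for i in np.where(mag > impact_g)[0]:            # candidate impacts
        j = i + int(check_s * fs)
        if j + int(fs) < len(acc):
            # column 2 assumed to be the trunk's vertical axis when upright
            if abs(acc[j:j + int(fs), 2].mean()) / G < lying_z:
                return True                          # impact then lying
    return False
```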
NASA Technical Reports Server (NTRS)
Colliander, Andreas; Chan, Steven; Yueh, Simon; Cosh, Michael; Bindlish, Rajat; Jackson, Tom; Njoku, Eni
2010-01-01
Field experiment data sets that include coincident remote sensing measurements and in situ sampling will be valuable in the development and validation of the soil moisture algorithms of NASA's future SMAP (Soil Moisture Active and Passive) mission. This paper presents an overview of the field experiment data collected from the SGP99, SMEX02, CLASIC, and SMAPVEX08 campaigns. Common to these campaigns were observations of the airborne PALS (Passive and Active L- and S-band) instrument, which was developed to acquire radar and radiometer measurements at low frequencies. The combined set of PALS measurements and ground truth obtained from all of these campaigns was studied. The investigation shows that the data set contains a range of soil moisture values collected under a limited number of conditions. The quality of both the PALS and ground truth data meets the needs of SMAP algorithm development and validation. The data set has already made a significant impact on the science behind the SMAP mission. The areas where complementing the data would be most beneficial are also discussed.
Robotic space simulation integration of vision algorithms into an orbital operations simulation
NASA Technical Reports Server (NTRS)
Bochsler, Daniel C.
1987-01-01
In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.
EOS Laser Atmosphere Wind Sounder (LAWS) investigation
NASA Technical Reports Server (NTRS)
Emmitt, George D.
1991-01-01
The related activities of the contract are outlined for the first year. These include: (1) attend team member meetings; (2) support the EOS Project with science-related activities; (3) prepare an Execution Phase plan; and (4) support LAWS and EOSDIS related work. Attached to the report is an appendix, 'LAWS Algorithm Development and Evaluation Laboratory (LADEL)'. Also attached is a copy of a proposal to the NASA EOS for 'LAWS Sampling Strategies and Wind Computation Algorithms -- Storm-Top Divergence Studies. Volume I: Investigation and Technical Plan, Data Plan, Computer Facilities Plan, Management Plan.'
Test of the Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1997-01-01
The algorithm-development activities at USF during the second half of 1997 have concentrated on data collection and theoretical modeling. Six abstracts were submitted for presentation at the AGU conference in San Diego, California during February 9-13, 1998. Four papers were submitted to JGR and Applied Optics for publication.
NASA Astrophysics Data System (ADS)
Garnier, Romain; Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier
2016-05-01
A new variational algorithm called adaptive vibrational configuration interaction (A-VCI), intended for the resolution of the vibrational Schrödinger equation, was developed. The main advantage of this approach is that it efficiently reduces the dimension of the active space generated in the configuration interaction (CI) process. Here, we assume that the Hamiltonian is written as a sum of products of operators. The adaptive algorithm was developed with the use of three correlated conditions, i.e., a suitable starting space, a criterion for convergence, and a procedure to expand the approximate space. The speed of the algorithm was increased by the use of an a posteriori error estimator (residual) to select the most relevant directions in which to expand the space. Two examples were selected for benchmarking. In the case of H2CO, we mainly study the performance of the A-VCI algorithm: comparison with the variation-perturbation method, choice of the initial space, and residual contributions. For CH3CN, we compare the A-VCI results with a reference spectrum computed using the same potential energy surface, for an active space reduced by about 90%.
Hynes, Martin; Wang, Han; Kilmartin, Liam
2009-01-01
Over the last decade, there has been substantial research interest in the application of accelerometry data to many forms of automated gait and activity analysis algorithms. This paper presents a summary of new "off-the-shelf" mobile phone handset platforms containing embedded accelerometers which support the development of custom software to implement real-time analysis of the accelerometer data. An overview of the main software programming environments which support the development of such software, including the Java ME based JSR 256 API, the C++ based Motion Sensor API and the Python based "aXYZ" module, is provided. Finally, a sample application is introduced and its performance evaluated in order to illustrate how a standard mobile phone can be used to detect gait activity using such a non-intrusive and easily accepted sensing platform.
Teaching and Learning Activity Sequencing System using Distributed Genetic Algorithms
NASA Astrophysics Data System (ADS)
Matsui, Tatsunori; Ishikawa, Tomotake; Okamoto, Toshio
The purpose of this study is the development of a system to support teachers in designing lesson plans, in particular lesson plans for the new subject "Information Study". We developed a system which generates teaching and learning activity sequences by interlinking a lesson's activities corresponding to various conditions according to the user's input. Because the user's input comprises multiple pieces of information, contradictions can arise that the system must resolve. This multiobjective optimization problem is solved by Distributed Genetic Algorithms, in which several fitness functions are defined with reference models on lesson, thinking, and teaching style. Results of various experiments verified the effectiveness and validity of the proposed methods and reference models, while also pointing to future work on reference models and evaluation functions.
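As a rough illustration of the distributed (island-model) genetic algorithm idea above, here is a toy sketch in which each island evolves activity orderings under its own fitness emphasis and islands periodically exchange their best individual. The representation, operators, and fitness functions are all invented for illustration.

```python
import random

def island_ga(activities, fitness_fns, n_islands=3, pop=20, gens=50):
    """Toy island-model GA over activity orderings: each island evolves
    under one fitness emphasis; islands periodically share their best."""
    def total(seq):
        return sum(f(seq) for f in fitness_fns)
    islands = [[random.sample(activities, len(activities)) for _ in range(pop)]
               for _ in range(n_islands)]
    for g in range(gens):
        for i, isl in enumerate(islands):
            fit = fitness_fns[i % len(fitness_fns)]   # island-specific emphasis
            isl.sort(key=fit, reverse=True)
            survivors = isl[:pop // 2]
            children = []
            while len(survivors) + len(children) < pop:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, len(a))     # order crossover
                child = a[:cut] + [x for x in b if x not in a[:cut]]
                if random.random() < 0.2:             # swap mutation
                    p, q = random.sample(range(len(child)), 2)
                    child[p], child[q] = child[q], child[p]
                children.append(child)
            islands[i] = survivors + children
        if g % 10 == 9:                               # migration step
            best = max((isl[0] for isl in islands), key=total)
            for isl in islands:
                isl[-1] = list(best)
    return max((isl[0] for isl in islands), key=total)
```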
Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2014-01-01
Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well-established and based on Bird's 1994 algorithms written in Fortran 77, and it has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.
Development of a Multi-Biomarker Disease Activity Test for Rheumatoid Arthritis
Shen, Yijing; Ramanujan, Saroja; Knowlton, Nicholas; Swan, Kathryn A.; Turner, Mary; Sutton, Chris; Smith, Dustin R.; Haney, Douglas J.; Chernoff, David; Hesterberg, Lyndal K.; Carulli, John P.; Taylor, Peter C.; Shadick, Nancy A.; Weinblatt, Michael E.; Curtis, Jeffrey R.
2013-01-01
Background Disease activity measurement is a key component of rheumatoid arthritis (RA) management. Biomarkers that capture the complex and heterogeneous biology of RA have the potential to complement clinical disease activity assessment. Objectives To develop a multi-biomarker disease activity (MBDA) test for rheumatoid arthritis. Methods Candidate serum protein biomarkers were selected from extensive literature screens, bioinformatics databases, mRNA expression and protein microarray data. Quantitative assays were identified and optimized for measuring candidate biomarkers in RA patient sera. Biomarkers with qualifying assays were prioritized in a series of studies based on their correlations to RA clinical disease activity (e.g. the Disease Activity Score 28-C-Reactive Protein [DAS28-CRP], a validated metric commonly used in clinical trials) and their contributions to multivariate models. Prioritized biomarkers were used to train an algorithm to measure disease activity, assessed by correlation to DAS and area under the receiver operating characteristic curve for classification of low vs. moderate/high disease activity. The effect of comorbidities on the MBDA score was evaluated using linear models with adjustment for multiple hypothesis testing. Results 130 candidate biomarkers were tested in feasibility studies and 25 were selected for algorithm training. Multi-biomarker statistical models outperformed individual biomarkers at estimating disease activity. Biomarker-based scores were significantly correlated with DAS28-CRP and could discriminate patients with low vs. moderate/high clinical disease activity. Such scores were also able to track changes in DAS28-CRP and were significantly associated with both joint inflammation measured by ultrasound and damage progression measured by radiography. The final MBDA algorithm uses 12 biomarkers to generate an MBDA score between 1 and 100. No significant effects on the MBDA score were found for common comorbidities. Conclusion We followed a stepwise approach to develop a quantitative serum-based measure of RA disease activity, based on 12 biomarkers, which was consistently associated with clinical disease activity levels. PMID:23585841
Development of a multi-biomarker disease activity test for rheumatoid arthritis.
Centola, Michael; Cavet, Guy; Shen, Yijing; Ramanujan, Saroja; Knowlton, Nicholas; Swan, Kathryn A; Turner, Mary; Sutton, Chris; Smith, Dustin R; Haney, Douglas J; Chernoff, David; Hesterberg, Lyndal K; Carulli, John P; Taylor, Peter C; Shadick, Nancy A; Weinblatt, Michael E; Curtis, Jeffrey R
2013-01-01
Disease activity measurement is a key component of rheumatoid arthritis (RA) management. Biomarkers that capture the complex and heterogeneous biology of RA have the potential to complement clinical disease activity assessment. To develop a multi-biomarker disease activity (MBDA) test for rheumatoid arthritis. Candidate serum protein biomarkers were selected from extensive literature screens, bioinformatics databases, mRNA expression and protein microarray data. Quantitative assays were identified and optimized for measuring candidate biomarkers in RA patient sera. Biomarkers with qualifying assays were prioritized in a series of studies based on their correlations to RA clinical disease activity (e.g. the Disease Activity Score 28-C-Reactive Protein [DAS28-CRP], a validated metric commonly used in clinical trials) and their contributions to multivariate models. Prioritized biomarkers were used to train an algorithm to measure disease activity, assessed by correlation to DAS and area under the receiver operating characteristic curve for classification of low vs. moderate/high disease activity. The effect of comorbidities on the MBDA score was evaluated using linear models with adjustment for multiple hypothesis testing. 130 candidate biomarkers were tested in feasibility studies and 25 were selected for algorithm training. Multi-biomarker statistical models outperformed individual biomarkers at estimating disease activity. Biomarker-based scores were significantly correlated with DAS28-CRP and could discriminate patients with low vs. moderate/high clinical disease activity. Such scores were also able to track changes in DAS28-CRP and were significantly associated with both joint inflammation measured by ultrasound and damage progression measured by radiography. The final MBDA algorithm uses 12 biomarkers to generate an MBDA score between 1 and 100. No significant effects on the MBDA score were found for common comorbidities. We followed a stepwise approach to develop a quantitative serum-based measure of RA disease activity, based on 12 biomarkers, which was consistently associated with clinical disease activity levels.
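The two records above describe training a multi-biomarker algorithm against DAS28-CRP and mapping it to a 1-100 score. The published abstracts do not give the formula, so the sketch below shows only the generic idea of fitting biomarker levels to a clinical index and rescaling; every modeling choice (log transform, linear model, min-max rescaling) is an assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def train_activity_score(biomarkers, das28crp):
    """Illustrative stand-in: fit log-transformed biomarker levels to
    DAS28-CRP, then rescale predictions onto a 1-100 score."""
    X = np.log1p(biomarkers)                    # (n_patients, 12) levels
    model = LinearRegression().fit(X, das28crp)
    fitted = model.predict(X)
    lo, hi = fitted.min(), fitted.max()

    def score(new_levels):
        s = model.predict(np.log1p(new_levels))
        return np.clip(1 + 99 * (s - lo) / (hi - lo), 1, 100)

    return score
```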
Biffi, E; Menegon, A; Regalia, G; Maida, S; Ferrigno, G; Pedrocchi, A
2011-08-15
Modern drug discovery for Central Nervous System pathologies has recently focused its attention on in vitro neuronal networks as models for the study of neuronal activities. Micro Electrode Arrays (MEAs), a widely recognized tool for pharmacological investigations, enable the simultaneous study of the spiking activity of discrete regions of a neuronal culture, providing insight into the dynamics of networks. Taking advantage of the features of MEAs and making the most of cross-correlation analysis to assess internal parameters of a neuronal system, we provide an efficient method for the evaluation of comprehensive neuronal network activity. We developed an intra-network burst correlation algorithm, evaluated its sensitivity, and explored its potential use in pharmacological studies. Our results demonstrate the high sensitivity of this algorithm and the efficacy of this methodology in pharmacological dose-response studies, with the advantage of analyzing the effect of drugs on the comprehensive correlative properties of integrated neuronal networks. Copyright © 2011 Elsevier B.V. All rights reserved.
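To illustrate the kind of cross-correlation analysis described above, here is a sketch that bins two spike trains and reports their peak normalized cross-correlation within a lag window. The bin width and lag range are arbitrary choices, not those of the published algorithm.

```python
import numpy as np

def burst_correlation(spikes_a, spikes_b, duration, bin_s=0.01, max_lag_s=0.1):
    """Peak normalized cross-correlation between two binned spike trains
    (circular shifts are used for brevity; edge effects are ignored)."""
    edges = np.arange(0.0, duration + bin_s, bin_s)
    a, _ = np.histogram(spikes_a, edges)
    b, _ = np.histogram(spikes_b, edges)
    a = (a - a.mean()) / (a.std() + 1e-12)      # z-score each train
    b = (b - b.mean()) / (b.std() + 1e-12)
    max_lag = int(max_lag_s / bin_s)
    cc = [np.dot(np.roll(a, k), b) / len(a)     # correlation at each lag
          for k in range(-max_lag, max_lag + 1)]
    return max(cc)
```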
The Goes-R Geostationary Lightning Mapper (GLM): Algorithm and Instrument Status
NASA Technical Reports Server (NTRS)
Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Mach, Douglas
2010-01-01
The Geostationary Operational Environmental Satellite (GOES-R) is the next series to follow the existing GOES system currently operating over the Western Hemisphere. Superior spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. Advancements over current GOES capabilities include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), and improved capability for the Advanced Baseline Imager (ABI). The GLM will map total lightning activity (in-cloud and cloud-to-ground lightning flashes) continuously day and night with near-uniform spatial resolution of 8 km and a product refresh rate of less than 20 sec over the Americas and adjacent oceanic regions. This will aid in forecasting severe storms and tornado activity, and convective weather impacts on aviation safety and efficiency. In parallel with the instrument development (a prototype and 4 flight models), a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms, cal/val performance monitoring tools, and new applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds are being used to develop the pre-launch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution. A joint field campaign with Brazilian researchers in 2010-2011 will produce concurrent observations from a VHF lightning mapping array, Meteosat multi-band imagery, Tropical Rainfall Measuring Mission (TRMM) Lightning Imaging Sensor (LIS) overpasses, and related ground and in-situ lightning and meteorological measurements in the vicinity of Sao Paulo. These data will provide a new comprehensive proxy data set for algorithm and application development.
The center for causal discovery of biomedical knowledge from big data
Bahar, Ivet; Becich, Michael J; Benos, Panayiotis V; Berg, Jeremy; Espino, Jeremy U; Glymour, Clark; Jacobson, Rebecca Crowley; Kienholz, Michelle; Lee, Adrian V; Lu, Xinghua; Scheines, Richard
2015-01-01
The Big Data to Knowledge (BD2K) Center for Causal Discovery is developing and disseminating an integrated set of open source tools that support causal modeling and discovery of biomedical knowledge from large and complex biomedical datasets. The Center integrates teams of biomedical and data scientists focused on the refinement of existing and the development of new constraint-based and Bayesian algorithms based on causal Bayesian networks, the optimization of software for efficient operation in a supercomputing environment, and the testing of algorithms and software developed using real data from 3 representative driving biomedical projects: cancer driver mutations, lung disease, and the functional connectome of the human brain. Associated training activities provide both biomedical and data scientists with the knowledge and skills needed to apply and extend these tools. Collaborative activities with the BD2K Consortium further advance causal discovery tools and integrate tools and resources developed by other centers. PMID:26138794
NASA Technical Reports Server (NTRS)
Wheeler, Kevin; Timucin, Dogan; Rabbette, Maura; Curry, Charles; Allan, Mark; Lvov, Nikolay; Clanton, Sam; Pilewskie, Peter
2002-01-01
The goal of visual inference programming is to develop a software framework for data analysis and to provide machine learning algorithms for interactive data exploration and visualization. The topics include: 1) Intelligent Data Understanding (IDU) framework; 2) Challenge problems; 3) What's new here; 4) Framework features; 5) Wiring diagram; 6) Generated script; 7) Results of script; 8) Initial algorithms; 9) Independent Component Analysis for instrument diagnosis; 10) Output sensory mapping virtual joystick; 11) Output sensory mapping typing; 12) Closed-loop feedback mu-rhythm control; 13) Closed-loop training; 14) Data sources; and 15) Algorithms. This paper is in viewgraph form.
Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's Mini-Sentinel pilot program aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of hypersensitivity reactions. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the hypersensitivity reactions of health outcomes of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles that used administrative and claims data to identify hypersensitivity reactions and that included validation estimates of the coding algorithms. We identified five studies that provided validated hypersensitivity-reaction algorithms. Algorithm positive predictive values (PPVs) for various definitions of hypersensitivity reactions ranged from 3% to 95%. PPVs were high (i.e. 90%-95%) when both exposures and diagnoses were very specific. PPV generally decreased when the definition of hypersensitivity was expanded, except in one study that used data mining methodology for algorithm development. The ability of coding algorithms to identify hypersensitivity reactions varied, with decreasing performance occurring with expanded outcome definitions. This examination of hypersensitivity-reaction coding algorithms provides an example of surveillance bias resulting from outcome definitions that include mild cases. Data mining may provide tools for algorithm development for hypersensitivity and other health outcomes. Research needs to be conducted on designing validation studies to test hypersensitivity-reaction algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
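The review above turns on validation metrics for coding algorithms. For reference, a small helper computing positive predictive value, sensitivity, and specificity from chart-review counts; the counts in the usage comment are made up.

```python
def validation_metrics(tp, fp, fn, tn):
    """Standard 2x2 validation metrics for an outcome-coding algorithm."""
    return {
        "ppv": tp / (tp + fp),          # confirmed cases among flagged
        "sensitivity": tp / (tp + fn),  # flagged among true cases
        "specificity": tn / (tn + fp),  # correctly unflagged non-cases
    }

# e.g. validation_metrics(tp=90, fp=10, fn=40, tn=860)["ppv"]  -> 0.9
```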
NASA Technical Reports Server (NTRS)
Gat, N.; Subramanian, S.; Barhen, J.; Toomarian, N.
1996-01-01
This paper reviews the activities at OKSI related to imaging spectroscopy presenting current and future applications of the technology. The authors discuss the development of several systems including hardware, signal processing, data classification algorithms and benchmarking techniques to determine algorithm performance. Signal processing for each application is tailored by incorporating the phenomenology appropriate to the process, into the algorithms. Pixel signatures are classified using techniques such as principal component analyses, generalized eigenvalue analysis and novel very fast neural network methods. The major hyperspectral imaging systems developed at OKSI include the Intelligent Missile Seeker (IMS) demonstration project for real-time target/decoy discrimination, and the Thermal InfraRed Imaging Spectrometer (TIRIS) for detection and tracking of toxic plumes and gases. In addition, systems for applications in medical photodiagnosis, manufacturing technology, and for crop monitoring are also under development.
Linear regression models and k-means clustering for statistical analysis of fNIRS data.
Bonomini, Viola; Zucchelli, Lucia; Re, Rebecca; Ieva, Francesca; Spinelli, Lorenzo; Contini, Davide; Paganoni, Anna; Torricelli, Alessandro
2015-02-01
We propose a new algorithm, based on a linear regression model, to statistically estimate the hemodynamic activations in fNIRS data sets. The main concern guiding the algorithm development was the minimization of assumptions and approximations made on the data set for the application of statistical tests. Further, we propose a K-means method to cluster fNIRS data (i.e. channels) as activated or not activated. The methods were validated both on simulated and in vivo fNIRS data. A time domain (TD) fNIRS technique was preferred because of its high performance in discriminating cortical activation and superficial physiological changes. However, the proposed method is also applicable to continuous wave or frequency domain fNIRS data sets.
Linear regression models and k-means clustering for statistical analysis of fNIRS data
Bonomini, Viola; Zucchelli, Lucia; Re, Rebecca; Ieva, Francesca; Spinelli, Lorenzo; Contini, Davide; Paganoni, Anna; Torricelli, Alessandro
2015-01-01
We propose a new algorithm, based on a linear regression model, to statistically estimate the hemodynamic activations in fNIRS data sets. The main concern guiding the algorithm development was the minimization of assumptions and approximations made on the data set for the application of statistical tests. Further, we propose a K-means method to cluster fNIRS data (i.e. channels) as activated or not activated. The methods were validated both on simulated and in vivo fNIRS data. A time domain (TD) fNIRS technique was preferred because of its high performance in discriminating cortical activation and superficial physiological changes. However, the proposed method is also applicable to continuous wave or frequency domain fNIRS data sets. PMID:25780751
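A bare-bones sketch of the two-step approach in these records — per-channel linear regression against a task regressor, then k-means on the fitted coefficients to split channels into activated vs. not activated. The design matrix and the two-cluster choice are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_channels(hbo, task_regressor):
    """hbo: (n_timepoints, n_channels) hemodynamic signal. Fit, per channel,
    y = b0 + b1 * task, then split channels by k-means (k=2) on the b1's."""
    X = np.column_stack([np.ones_like(task_regressor), task_regressor])
    betas, *_ = np.linalg.lstsq(X, hbo, rcond=None)   # shape (2, n_channels)
    b1 = betas[1]
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(b1.reshape(-1, 1))
    activated = labels == labels[np.argmax(b1)]   # cluster with largest beta
    return activated, b1
```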
MODELING MERCURY CONTROL WITH POWDERED ACTIVATED CARBON
The paper presents a mathematical model of total mercury removed from the flue gas at coal-fired plants equipped with powdered activated carbon (PAC) injection for mercury control. The developed algorithms account for mercury removal by both existing equipment and an added PAC in...
Tymur Sydor; Richard A. Kluender; Rodney L. Busby; Matthew Pelkki
2004-01-01
An activity algorithm was developed for standard marking methods for natural pine stands in Arkansas. For the two types of marking methods examined, thinning (selection from below) and single-tree selection (selection from above), cycle time and cost models were developed. Basal area (BA) removed was the major influencing factor in both models. Marking method was...
Automated Ecological Assessment of Physical Activity: Advancing Direct Observation.
Carlson, Jordan A; Liu, Bo; Sallis, James F; Kerr, Jacqueline; Hipp, J Aaron; Staggs, Vincent S; Papa, Amy; Dean, Kelsey; Vasconcelos, Nuno M
2017-12-01
Technological advances provide opportunities for automating direct observations of physical activity, which allow for continuous monitoring and feedback. This pilot study evaluated the initial validity of computer vision algorithms for ecological assessment of physical activity. The sample comprised 6630 seconds per camera (three cameras in total) of video capturing up to nine participants engaged in sitting, standing, walking, and jogging in an open outdoor space while wearing accelerometers. Computer vision algorithms were developed to assess the number and proportion of people in sedentary, light, moderate, and vigorous activity, and group-based metabolic equivalents of tasks (MET)-minutes. Means and standard deviations (SD) of bias/difference values, and intraclass correlation coefficients (ICC) assessed the criterion validity compared to accelerometry separately for each camera. The number and proportion of participants sedentary and in moderate-to-vigorous physical activity (MVPA) had small biases (within 20% of the criterion mean) and the ICCs were excellent (0.82-0.98). Total MET-minutes were slightly underestimated by 9.3-17.1% and the ICCs were good (0.68-0.79). The standard deviations of the bias estimates were moderate-to-large relative to the means. The computer vision algorithms appeared to have acceptable sample-level validity (i.e., across a sample of time intervals) and are promising for automated ecological assessment of activity in open outdoor settings, but further development and testing is needed before such tools can be used in a diverse range of settings.
Automated Ecological Assessment of Physical Activity: Advancing Direct Observation
Carlson, Jordan A.; Liu, Bo; Sallis, James F.; Kerr, Jacqueline; Papa, Amy; Dean, Kelsey; Vasconcelos, Nuno M.
2017-01-01
Technological advances provide opportunities for automating direct observations of physical activity, which allow for continuous monitoring and feedback. This pilot study evaluated the initial validity of computer vision algorithms for ecological assessment of physical activity. The sample comprised 6630 seconds per camera (three cameras in total) of video capturing up to nine participants engaged in sitting, standing, walking, and jogging in an open outdoor space while wearing accelerometers. Computer vision algorithms were developed to assess the number and proportion of people in sedentary, light, moderate, and vigorous activity, and group-based metabolic equivalents of tasks (MET)-minutes. Means and standard deviations (SD) of bias/difference values, and intraclass correlation coefficients (ICC) assessed the criterion validity compared to accelerometry separately for each camera. The number and proportion of participants sedentary and in moderate-to-vigorous physical activity (MVPA) had small biases (within 20% of the criterion mean) and the ICCs were excellent (0.82–0.98). Total MET-minutes were slightly underestimated by 9.3–17.1% and the ICCs were good (0.68–0.79). The standard deviations of the bias estimates were moderate-to-large relative to the means. The computer vision algorithms appeared to have acceptable sample-level validity (i.e., across a sample of time intervals) and are promising for automated ecological assessment of activity in open outdoor settings, but further development and testing is needed before such tools can be used in a diverse range of settings. PMID:29194358
Preliminary study on activity monitoring using an android smart-watch
Ahanathapillai, Vijayalakshmi; Goodwin, Zoe; James, Christopher J.
2015-01-01
The global trend for increasing life expectancy is resulting in aging populations in a number of countries. This brings to bear a pressure to provide effective care for the older population with increasing constraints on available resources. Providing care for and maintaining the independence of an older person in their own home is one way that this problem can be addressed. The EU Funded Unobtrusive Smart Environments for Independent Living (USEFIL) project is an assistive technology tool being developed to enhance independent living. As part of USEFIL, a wrist wearable unit (WWU) is being developed to monitor the physical activity (PA) of the user and integrate with the USEFIL system. The WWU is a novel application of an existing technology to the assisted living problem domain. It combines existing technologies and new algorithms to extract PA parameters for activity monitoring. The parameters that are extracted include: activity level, step count and worn state. The WWU, the algorithms that have been developed and a preliminary validation are presented. The results show that activity level can be successfully extracted, that worn state can be correctly identified and that step counts in walking data can be estimated within 3% error, using the controlled dataset. PMID:26609402
Preliminary study on activity monitoring using an android smart-watch.
Ahanathapillai, Vijayalakshmi; Amor, James D; Goodwin, Zoe; James, Christopher J
2015-02-01
The global trend for increasing life expectancy is resulting in aging populations in a number of countries. This brings to bear a pressure to provide effective care for the older population with increasing constraints on available resources. Providing care for and maintaining the independence of an older person in their own home is one way that this problem can be addressed. The EU Funded Unobtrusive Smart Environments for Independent Living (USEFIL) project is an assistive technology tool being developed to enhance independent living. As part of USEFIL, a wrist wearable unit (WWU) is being developed to monitor the physical activity (PA) of the user and integrate with the USEFIL system. The WWU is a novel application of an existing technology to the assisted living problem domain. It combines existing technologies and new algorithms to extract PA parameters for activity monitoring. The parameters that are extracted include: activity level, step count and worn state. The WWU, the algorithms that have been developed and a preliminary validation are presented. The results show that activity level can be successfully extracted, that worn state can be correctly identified and that step counts in walking data can be estimated within 3% error, using the controlled dataset.
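As an illustration of the step-count extraction mentioned in the two records above, a simple peak-detection sketch over the wrist acceleration magnitude; the smoothing window, peak height, and minimum step spacing are placeholder values, not the WWU's actual parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def count_steps(acc, fs):
    """Count steps as peaks in the smoothed acceleration magnitude."""
    mag = np.linalg.norm(acc, axis=1)
    mag = mag - mag.mean()
    width = int(0.2 * fs)                            # ~0.2 s moving average
    smooth = np.convolve(mag, np.ones(width) / width, mode="same")
    peaks, _ = find_peaks(smooth,
                          height=0.5,                # m/s^2 above mean (assumed)
                          distance=int(0.3 * fs))    # >= 0.3 s between steps
    return len(peaks)
```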
NASA Astrophysics Data System (ADS)
Cai, Zhonglun; Chen, Peng; Angland, David; Zhang, Xin
2014-03-01
A novel iterative learning control (ILC) algorithm was developed and applied to an active flow control problem. The technique uses pulsed air jets to delay flow separation on a two-element high-lift wing. The ILC algorithm uses position-based pressure measurements to update the actuation. The method was experimentally tested on a wing model in a 0.9 m × 0.6 m low-speed wind tunnel at the University of Southampton. Compressed air and fast switching solenoid valves were used as actuators to excite the flow, and the pressure distribution around the chord of the wing was measured as a feedback control signal for the ILC controller. Experimental results showed that the actuation was able to delay the separation and increase the lift by approximately 10%-15%. By using the ILC algorithm, the controller was able to find the optimum control input and maintain the improvement despite sudden changes of the separation position.
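The entry above applies iterative learning control (ILC) to pulsed-jet actuation. The core trial-to-trial update — adjust the next trial's input using the previous trial's error — can be sketched as below with a simple proportional learning law; the learning gain and the error definition (target vs. measured pressure distribution) are assumptions, not the paper's controller.

```python
import numpy as np

def ilc_update(u_prev, error_prev, gamma=0.3, u_min=0.0, u_max=1.0):
    """One ILC trial-to-trial update: u_{k+1}(i) = u_k(i) + gamma * e_k(i),
    saturated to the actuators' duty-cycle limits."""
    return np.clip(u_prev + gamma * error_prev, u_min, u_max)

# Hypothetical trial loop: after each wind-tunnel run, form the error between
# the target pressure distribution and the measured one, then update.
# for k in range(n_trials):
#     p_measured = run_trial(u)            # placeholder measurement call
#     u = ilc_update(u, p_target - p_measured)
```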
Reynier, Frédéric; Petit, Fabien; Paye, Malick; Turrel-Davin, Fanny; Imbert, Pierre-Emmanuel; Hot, Arnaud; Mougin, Bruno; Miossec, Pierre
2011-01-01
The analysis of gene expression data shows that many genes display similarity in their expression profiles, suggesting some co-regulation. Here, we investigated the co-expression patterns in gene expression data and proposed a correlation-based research method to stratify individuals. Using blood from rheumatoid arthritis (RA) patients, we investigated whole-blood gene expression profiles using Affymetrix microarray technology. Co-expressed genes were analyzed by a biclustering method, followed by gene ontology analysis of the relevant biclusters. Taking the type I interferon (IFN) pathway as an example, a classification algorithm was developed from the 102 RA patients and extended to 10 systemic lupus erythematosus (SLE) patients and 100 healthy volunteers to further characterize individuals. We developed a correlation-based algorithm referred to as Classification Algorithm Based on a Biological Signature (CABS), an alternative to other approaches focused specifically on expression levels. This algorithm, applied to the expression of 35 IFN-related genes, showed that the IFN signature presented a heterogeneous expression between RA, SLE, and healthy controls, which could reflect the level of global IFN signature activation. Moreover, the monitoring of the IFN-related genes during anti-TNF treatment identified changes in type I IFN gene activity induced in RA patients. In conclusion, we have proposed an original method to analyze genes sharing an expression pattern and a biological function, showing that the activation levels of a biological signature could be characterized by its overall state of correlation.
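A schematic of the correlation-based stratification idea described above, assuming each individual is represented by a vector of the 35 IFN-related gene expressions and compared, via correlation, to a reference activation profile; the function names, threshold, and reference are illustrative, not the published CABS procedure.

```python
import numpy as np

def cabs_classify(expr, reference, threshold=0.5):
    """expr: (n_individuals, n_genes) IFN-related expression matrix.
    Classify by Pearson correlation with a reference activation profile."""
    def pearson(v, w):
        v, w = v - v.mean(), w - w.mean()
        return (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w) + 1e-12)
    r = np.array([pearson(row, reference) for row in expr])
    return r, r > threshold    # correlation level and activated/not call
```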
Koa-Wing, Michael; Nakagawa, Hiroshi; Luther, Vishal; Jamil-Copley, Shahnaz; Linton, Nick; Sandler, Belinda; Qureshi, Norman; Peters, Nicholas S; Davies, D Wyn; Francis, Darrel P; Jackman, Warren; Kanagaratnam, Prapa
2015-11-15
Ripple Mapping (RM) is designed to overcome the limitations of existing isochronal 3D mapping systems by representing the intracardiac electrogram as a dynamic bar on a surface bipolar voltage map that changes in height according to the electrogram voltage-time relationship, relative to a fiduciary point. We tested the hypothesis that standard approaches to atrial tachycardia CARTO™ activation maps were inadequate for RM creation and interpretation. From the results, we aimed to develop an algorithm to optimize RMs for future prospective testing on a clinical RM platform. CARTO-XP™ activation maps from atrial tachycardia ablations were reviewed by two blinded assessors on an off-line RM workstation. Ripple Maps were graded according to a diagnostic confidence scale (Grade I - high confidence with clear pattern of activation, through to Grade IV - non-diagnostic). The RM-based diagnoses were corroborated against the clinical diagnoses. 43 RMs from 14 patients were classified as Grade I (5 [11.5%]); Grade II (17 [39.5%]); Grade III (9 [21%]) and Grade IV (12 [28%]). Causes of low gradings/errors included the following: insufficient chamber point density; a window of interest < 100% of the cycle length (CL); < 95% of the tachycardia CL mapped; variability of CL and/or an unstable fiducial reference marker; and suboptimal bar height and scar settings. A data collection and map interpretation algorithm has been developed to optimize Ripple Maps in atrial tachycardias. This algorithm requires prospective testing on a real-time clinical platform. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
On-line, adaptive state estimator for active noise control
NASA Technical Reports Server (NTRS)
Lim, Tae W.
1994-01-01
Dynamic characteristics of airframe structures are expected to vary as aircraft flight conditions change. Accurate knowledge of the changing dynamic characteristics is crucial to enhancing the performance of an active noise control system using feedback control. This research investigates the development of an adaptive, on-line state estimator using a neural network concept to conduct active noise control. An algorithm has been developed that can be used to estimate displacement and velocity responses at any location on the structure from a limited number of acceleration measurements and input force information. The algorithm employs band-pass filters to extract from the measurement signal the frequency content corresponding to a desired mode. The filtered signal is then used to train a neural network which consists of a linear neuron with three weights. The structure of the neural network is kept as simple as possible to maximize the sampling frequency. The weights obtained through neural network training are then used to construct the transfer function of a mode in the z-domain and to identify the modal properties of each mode. By using the identified transfer function and interpolating the mode shape obtained at sensor locations, the displacement and velocity responses are estimated with reasonable accuracy at any location on the structure. The accuracy of the response estimates depends on the number of modes incorporated in the estimates and the number of sensors employed to conduct mode shape interpolation. Computer simulation demonstrates that the algorithm is capable of adapting to the varying dynamic characteristics of structural properties. Experimental implementation of the algorithm on a DSP (digital signal processing) board for a plate structure is underway. The algorithm is expected to reach a sampling frequency range of about 10 kHz to 20 kHz, which needs to be maintained for a typical active noise control application.
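A sketch of the per-mode identification step described above: band-pass filter the measurement to isolate one mode, then adapt a three-weight linear neuron with an LMS rule. The filter design, learning rate, and the simplification of dropping the input-force term are assumptions, and the mapping from weights to a z-domain transfer function is omitted.

```python
import numpy as np
from scipy.signal import butter, lfilter

def train_modal_neuron(y, fs, f_lo, f_hi, mu=1e-3):
    """Band-pass filter the measurement to isolate one mode, then adapt a
    three-weight linear predictor with an LMS rule (simplified: the input
    force term used in the paper is omitted here)."""
    b, a = butter(2, [f_lo, f_hi], btype="band", fs=fs)
    yf = lfilter(b, a, y)
    w = np.zeros(3)
    for n in range(3, len(yf)):
        x = yf[n - 3:n][::-1]      # three most recent samples, newest first
        e = yf[n] - w @ x          # one-step prediction error
        w += mu * e * x            # LMS weight update
    return w                       # weights define a z-domain model of the mode
```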
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.
2009-01-01
Previous studies have demonstrated that rapid increases in total lightning activity (intracloud + cloud-to-ground) are often observed tens of minutes in advance of the occurrence of severe weather at the ground. These rapid increases in lightning activity have been termed "lightning jumps." Herein, we document a positive correlation between lightning jumps and the manifestation of severe weather in thunderstorms occurring across the Tennessee Valley and Washington, DC. A total of 107 thunderstorms were examined in this study, with 69 of the 107 thunderstorms falling into the category of non-severe, and 38 into the category of severe. From the dataset of 69 isolated non-severe thunderstorms, an average peak 1-minute flash rate of 10 flashes/min was determined. A variety of severe thunderstorm types were examined for this study, including an MCS, an MCV, tornadic outer rainbands of tropical remnants, supercells, and pulse severe thunderstorms. Of the 107 thunderstorms, 85 (47 non-severe, 38 severe) from the Tennessee Valley and Washington, DC were used to test six lightning jump algorithm configurations (Gatlin, Gatlin 45, 2σ, 3σ, Threshold 10, and Threshold 8). Performance metrics for each algorithm were then calculated, yielding encouraging results from the limited sample of 85 thunderstorms. The 2σ lightning jump algorithm had a high probability of detection (POD; 87%), a modest false alarm rate (FAR; 33%), and a solid Heidke Skill Score (HSS; 0.75). A second and more simplistic configuration, the Threshold 8 lightning jump algorithm, also shows promise, with a POD of 81% and a FAR of 41%. Average lead times to severe weather occurrence for these two algorithms were 23 minutes and 20 minutes, respectively. The overall goal of this study is to advance the development of an operationally applicable jump algorithm that can be used with total lightning observations made from the ground, or in the near future from space using the GOES-R Geostationary Lightning Mapper.
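The 2σ configuration evaluated above flags a jump when the rate of change of the total flash rate exceeds twice the standard deviation of recent rate changes. A compact sketch of that test, assuming 1-minute flash rates and a trailing window; the window length and minimum-activity threshold here are illustrative.

```python
import numpy as np

def two_sigma_jumps(flash_rates, window=6, min_rate=10.0):
    """flash_rates: 1-min total flash rates for one storm. Returns indices
    where the latest rate change exceeds 2 sigma of the preceding changes."""
    dfrdt = np.diff(flash_rates)
    jumps = []
    for t in range(window, len(dfrdt)):
        if flash_rates[t] < min_rate:        # require an active storm
            continue
        sigma = dfrdt[t - window:t].std()
        if sigma > 0 and dfrdt[t] > 2.0 * sigma:
            jumps.append(t + 1)              # minute index of the jump
    return jumps
```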
Path planning on cellular nonlinear network using active wave computing technique
NASA Astrophysics Data System (ADS)
Yeniçeri, Ramazan; Yalçın, Müstak E.
2009-05-01
This paper introduces a simple algorithm to solve robot path finding problem using active wave computing techniques. A two-dimensional Cellular Neural/Nonlinear Network (CNN), consist of relaxation oscillators, has been used to generate active waves and to process the visual information. The network, which has been implemented on a Field Programmable Gate Array (FPGA) chip, has the feature of being programmed, controlled and observed by a host computer. The arena of the robot is modelled as the medium of the active waves on the network. Active waves are employed to cover the whole medium with their own dynamics, by starting from an initial point. The proposed algorithm is achieved by observing the motion of the wave-front of the active waves. Host program first loads the arena model onto the active wave generator network and command to start the generation. Then periodically pulls the network image from the generator hardware to analyze evolution of the active waves. When the algorithm is completed, vectorial data image is generated. The path from any of the pixel on this image to the active wave generating pixel is drawn by the vectors on this image. The robot arena may be a complicated labyrinth or may have a simple geometry. But, the arena surface always must be flat. Our Autowave Generator CNN implementation which is settled on the Xilinx University Program Virtex-II Pro Development System is operated by a MATLAB program running on the host computer. As the active wave generator hardware has 16, 384 neurons, an arena with 128 × 128 pixels can be modeled and solved by the algorithm. The system also has a monitor and network image is depicted on the monitor simultaneously.
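The wave-front idea above — launch a wave from the goal, record the direction each cell was first reached from, then follow those vectors back — is essentially a grid wavefront (breadth-first) expansion. Below is a software analogue of what the CNN computes in hardware, on an illustrative occupancy grid.

```python
from collections import deque

def wavefront_vectors(grid, goal):
    """grid: 2D list, 0 = free, 1 = obstacle. Returns, for each reached
    cell, the neighbor it was reached from (pointing back toward goal)."""
    rows, cols = len(grid), len(grid[0])
    parent = {goal: goal}
    q = deque([goal])
    while q:                                   # wave-front expansion
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                    and grid[nb[0]][nb[1]] == 0 and nb not in parent):
                parent[nb] = (r, c)
                q.append(nb)
    return parent

def path_from(start, parent):
    """Follow the stored vectors from start back to the wave source."""
    path = [start]
    while parent.get(path[-1], path[-1]) != path[-1]:
        path.append(parent[path[-1]])
    return path
```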
Study of onboard expert systems to augment space shuttle and space station autonomy
NASA Technical Reports Server (NTRS)
Kurtzman, C. R.; Akin, D. L.; Kranzler, D.; Erlanson, E.
1986-01-01
The feasibility of onboard crew activity planning was examined. The use of expert systems technology to aid crewmembers in locating stowed equipment was also investigated. The crew activity planning problem, along with a summary of past and current research efforts, was discussed in detail. The requirements and specifications used to develop the crew activity planning system were also defined. The guidelines used to create, develop, and operate the MFIVE Crew Scheduler and Logistics Clerk were discussed. Also discussed is the mathematical algorithm, used by the MFIVE Scheduler, which was developed to aid in optimal crew activity planning.
Yang, Zheng Rong; Thomson, Rebecca; Hodgman, T Charles; Dry, Jonathan; Doyle, Austin K; Narayanan, Ajit; Wu, XiKun
2003-11-01
This paper presents an algorithm which is able to extract discriminant rules from oligopeptides for protease proteolytic cleavage activity prediction. The algorithm is developed using genetic programming. Three important components in the algorithm are a min-max scoring function, the reverse Polish notation (RPN) and the use of minimum description length. The min-max scoring function is developed using amino acid similarity matrices for measuring the similarity between an oligopeptide and a rule, which is a complex algebraic equation of amino acids rather than a simple pattern sequence. The Fisher ratio is then calculated on the scoring values using the class label associated with the oligopeptides. The discriminant ability of each rule can therefore be evaluated. The use of RPN makes the evolutionary operations simpler and therefore reduces the computational cost. To prevent overfitting, the concept of minimum description length is used to penalize over-complicated rules. A fitness function is therefore composed of the Fisher ratio and the use of minimum description length for an efficient evolutionary process. In the application to four protease datasets (Trypsin, Factor Xa, Hepatitis C Virus and HIV protease cleavage site prediction), our algorithm is superior to C5, a conventional method for deriving decision trees.
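The fitness function described above combines a Fisher ratio on rule scores with a minimum-description-length penalty. The Fisher-ratio part is straightforward to state in code; the penalty weight below is an assumed placeholder, not the paper's value.

```python
import numpy as np

def rule_fitness(scores, labels, rule_len, penalty=0.01):
    """Fisher ratio of a rule's min-max scores between the two classes,
    minus a description-length-style complexity penalty."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    fisher = (pos.mean() - neg.mean()) ** 2 / (pos.var() + neg.var() + 1e-12)
    return fisher - penalty * rule_len
```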
NASA Astrophysics Data System (ADS)
Andersen, Mie; Plaisance, Craig P.; Reuter, Karsten
2017-10-01
First-principles screening studies aimed at predicting the catalytic activity of transition metal (TM) catalysts have traditionally been based on mean-field (MF) microkinetic models, which neglect the effect of spatial correlations in the adsorbate layer. Here we critically assess the accuracy of such models for the specific case of CO methanation over stepped metals by comparing to spatially resolved kinetic Monte Carlo (kMC) simulations. We find that the typical low diffusion barriers offered by metal surfaces can be significantly increased at step sites, which results in persisting correlations in the adsorbate layer. As a consequence, MF models may overestimate the catalytic activity of TM catalysts by several orders of magnitude. The potential higher accuracy of kMC models comes at a higher computational cost, which can be especially challenging for surface reactions on metals due to a large disparity in the time scales of different processes. In order to overcome this issue, we implement and test a recently developed algorithm for achieving temporal acceleration of kMC simulations. While the algorithm overall performs quite well, we identify some challenging cases which may lead to a breakdown of acceleration algorithms and discuss possible directions for future algorithm development.
Extreme Trust Region Policy Optimization for Active Object Recognition.
Liu, Huaping; Wu, Yupei; Sun, Fuchun
2018-06-01
In this brief, we develop a deep reinforcement learning method to actively recognize objects by choosing a sequence of actions for an active camera that helps to discriminate between the objects. The method is realized using trust region policy optimization, in which the policy is realized by an extreme learning machine and therefore leads to an efficient optimization algorithm. The experimental results on the publicly available data set show the advantages of the developed extreme trust region optimization method.
Data mining and visualization from planetary missions: the VESPA-Europlanet2020 activity
NASA Astrophysics Data System (ADS)
Longobardo, Andrea; Capria, Maria Teresa; Zinzi, Angelo; Ivanovski, Stavro; Giardino, Marco; di Persio, Giuseppe; Fonte, Sergio; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo; Europlanet VESPA 2020 Team
2017-06-01
This paper presents the VESPA (Virtual European Solar and Planetary Access) activity, developed in the context of the Europlanet 2020 Horizon project, aimed at providing tools for the analysis and visualization of planetary data provided by space missions. In particular, the activity is focused on minor bodies of the Solar System. The structure of the computation node, the algorithms developed for the analysis of planetary surfaces and cometary comae, and the tools for data visualization are presented.
Adaptive signal processing at NOSC
NASA Astrophysics Data System (ADS)
Albert, T. R.
1992-03-01
Adaptive signal processing work at the Naval Ocean Systems Center (NOSC) dates back to the late 1960s. It began as an IR/IED project by John McCool, who made use of an adaptive algorithm that had been developed by Professor Bernard Widrow of Stanford University. In 1972, a team led by McCool built the first hardware implementation of the algorithm that could process in real time at acoustic bandwidths. Early tests with the two units that were built were extremely successful and attracted much attention. Sponsors from different commands provided funding to develop hardware for submarine, surface ship, airborne, and other systems. In addition, an effort was initiated to analyze the performance and behavior of the algorithm. Most of the hardware development and analysis efforts were active through the 1970s, and a few into the 1980s. One of the original programs continues to this date.
NASA GPM GV Science Implementation
NASA Technical Reports Server (NTRS)
Petersen, W. A.
2009-01-01
Pre-launch algorithm development & post-launch product evaluation: The GPM GV paradigm moves beyond traditional direct validation/comparison activities by incorporating improved algorithm physics & model applications (end-to-end validation) in the validation process. Three approaches: 1) National Network (surface): Operational networks to identify and resolve first order discrepancies (e.g., bias) between satellite and ground-based precipitation estimates. 2) Physical Process (vertical column): Cloud system and microphysical studies geared toward testing and refinement of physically-based retrieval algorithms. 3) Integrated (4-dimensional): Integration of satellite precipitation products into coupled prediction models to evaluate strengths/limitations of satellite precipitation products.
Performance Evaluation of Multichannel Adaptive Algorithms for Local Active Noise Control
NASA Astrophysics Data System (ADS)
DE DIEGO, M.; GONZALEZ, A.
2001-07-01
This paper deals with the development of a multichannel active noise control (ANC) system inside an enclosed space. The purpose is to design a practical system which works well in local ANC applications. Moreover, the algorithm implemented in the adaptive controller should be robust and of low computational complexity, and it should manage to generate a uniform, useful-size zone of quiet that allows for the head motion of a person seated inside a car. Experiments were carried out under semi-anechoic and listening room conditions to verify the successful implementation of the multichannel system. The developed prototype consists of an array of up to four microphones used as error sensors mounted on the headrest of a seat placed inside the enclosure. One loudspeaker was used as the single primary source and two secondary sources were placed facing the seat. The aim of this multichannel system is to reduce the sound pressure levels in an area around the error sensors, following a local control strategy. When using this technique, the cancellation points are not only the error sensor positions but an area around them, which is measured by using a monitoring microphone. Different multichannel adaptive algorithms for ANC have been analyzed and their performance verified. Multiple error algorithms are used in order to cancel out different types of primary noise (engine noise and random noise) with several configurations (up to a four-channel system). As alternatives to the multiple error LMS algorithm (the multichannel version of the filtered-X LMS algorithm, MELMS), the least maximum mean squares (LMMS) and the scanning error-LMS algorithms have been developed in this work in order to reduce computational complexity and achieve a more uniform residual field. The ANC algorithms were programmed on a digital signal processing board equipped with a TMS320C40 floating point DSP processor. Measurements concerning real-time experiments on local noise reduction in two environments and at frequencies below 230 Hz are presented. Better noise-level attenuation is obtained in the semi-anechoic chamber due to the simplicity of the acoustic field. The size of the zone of quiet makes the system useful at relatively low frequencies and is large enough to cover a listener's head movements. The spatial extent of the zones of quiet is generally observed to increase as the error sensors are moved away from the secondary source, placed closer together, or increased in number. In summary, the performance of the different algorithms and the viability of the multichannel system for local active noise control in real listening conditions are evaluated, and some guidelines for designing such systems are proposed.
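The MELMS controller mentioned above is built from the filtered-x LMS update, in which the reference is filtered through an estimate of the secondary path before the weight update. A single-channel sketch, assuming a known secondary-path impulse response s (a deliberate simplification of the paper's multichannel configurations):

```python
import numpy as np
from scipy.signal import lfilter

def fxlms(x, d, s, n_taps=64, mu=1e-3):
    """Single-channel filtered-x LMS, the building block of MELMS.
    x: reference noise, d: disturbance at the error microphone,
    s: secondary-path impulse response (assumed known here)."""
    w = np.zeros(n_taps)
    xf = lfilter(s, 1.0, x)           # reference filtered through s
    y_hist = np.zeros(len(s))         # recent anti-noise samples
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]     # reference tap vector
        y = w @ u                     # anti-noise output sample
        y_hist = np.roll(y_hist, 1)
        y_hist[0] = y
        e[n] = d[n] - s @ y_hist      # residual at the error mic
        uf = xf[n - n_taps:n][::-1]   # filtered-reference taps
        w += mu * e[n] * uf           # LMS update on filtered x
    return e, w
```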
Aerocapture Guidance Algorithm Comparison Campaign
NASA Technical Reports Server (NTRS)
Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric
2002-01-01
Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return was initially based on an insertion by aerocapture, and a CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, the aerocapture was cancelled for the French orbiter. Many studies were carried out over the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA, with a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, ability to limit loads, and complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared to the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC appears to be a good compromise.
Autonomous Wheeled Robot Platform Testbed for Navigation and Mapping Using Low-Cost Sensors
NASA Astrophysics Data System (ADS)
Calero, D.; Fernandez, E.; Parés, M. E.
2017-11-01
This paper presents the concept of an architecture for a wheeled robot system that helps researchers in the field of geomatics to speed up their daily research in the kinematic geodesy, indoor navigation, and indoor positioning fields. The presented ideas correspond to an extensible and modular hardware and software system aimed at the development of new low-cost mapping algorithms as well as at the evaluation of sensor performance. The concept, already implemented in CTTC's system ARAS (Autonomous Rover for Automatic Surveying), is generic and extensible. This means that it is possible to incorporate new navigation algorithms or sensors at no maintenance cost; only the effort related to developing such algorithms needs to be taken into account. As a consequence, change poses a much smaller problem for research activities in this specific area. The system includes several standalone sensors that may be combined in different ways to accomplish several goals; that is, the system may be used to perform a variety of tasks, for instance evaluating the performance of positioning or mapping algorithms.
Liu, Y; Sareen, J; Bolton, J M; Wang, J L
2016-03-15
Suicidal ideation is one of the strongest predictors of recent and future suicide attempt. This study aimed to develop and validate a risk prediction algorithm for the recurrence of suicidal ideation among a population with low mood. 3035 participants from the U.S. National Epidemiologic Survey on Alcohol and Related Conditions with suicidal ideation at their lowest mood at baseline were included. The Alcohol Use Disorder and Associated Disabilities Interview Schedule, based on the DSM-IV criteria, was used. Logistic regression modeling was conducted to derive the algorithm. Discrimination and calibration were assessed in the development and validation cohorts. In the development data, the proportion of recurrent suicidal ideation over 3 years was 19.5% (95% CI: 17.7, 21.5). The developed algorithm consisted of six predictors: age, feelings of emptiness, sudden mood changes, self-harm history, depressed mood in the past 4 weeks, and interference with social activities in the past 4 weeks because of physical health or emotional problems; emptiness was the most important risk factor. The model had good discriminative power (C statistic = 0.8273, 95% CI: 0.8027, 0.8520). The C statistic was 0.8091 (95% CI: 0.7786, 0.8395) in the external validation dataset and 0.8193 (95% CI: 0.8001, 0.8385) in the combined dataset. This study does not apply to people with suicidal ideation who are not depressed. The developed risk algorithm for predicting the recurrence of suicidal ideation has good discrimination and excellent calibration. Clinicians can use this algorithm to stratify the risk of recurrence in patients and thus improve personalized treatment approaches, provide advice, and arrange further intensive monitoring. Copyright © 2016 Elsevier B.V. All rights reserved.
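A minimal sketch of this kind of risk algorithm: fit a logistic regression on the six predictors and report the C statistic, which for a binary outcome equals the area under the ROC curve. The data below are synthetic stand-ins, not the NESARC cohort:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# synthetic stand-ins for the six predictors (age, emptiness, mood
# swings, self-harm history, recent depressed mood, social interference)
X = rng.normal(size=(500, 6))
y = (X[:, 1] + 0.5 * rng.normal(size=500) > 0).astype(int)  # outcome

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]        # predicted recurrence risk
# for a binary outcome, the C statistic equals the ROC AUC
print("C statistic:", roc_auc_score(y, risk))
```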
Control algorithm implementation for a redundant degree of freedom manipulator
NASA Technical Reports Server (NTRS)
Cohan, Steve
1991-01-01
This project's purpose is to develop and implement control algorithms for a kinematically redundant robotic manipulator. The manipulator is being developed concurrently by Odetics Inc., under internal research and development funding. This SBIR contract supports algorithm conception, development, and simulation, as well as software implementation and integration with the manipulator hardware. The Odetics Dexterous Manipulator is a lightweight, high strength, modular manipulator being developed for space and commercial applications. It has seven fully active degrees of freedom, is electrically powered, and is fully operational in 1 G. The manipulator consists of five self-contained modules. These modules join via simple quick-disconnect couplings and self-mating connectors which allow rapid assembly/disassembly for reconfiguration, transport, or servicing. Each joint incorporates a unique drive train design which provides zero backlash operation, is insensitive to wear, and is single fault tolerant to motor or servo amplifier failure. The sensing system is also designed to be single fault tolerant. Although the initial prototype is not space qualified, the design is well-suited to meeting space qualification requirements. The control algorithm design approach is to develop a hierarchical system with well defined access and interfaces at each level. The high level endpoint/configuration control algorithm transforms manipulator endpoint position/orientation commands to joint angle commands, providing task space motion. At the same time, the kinematic redundancy is resolved by controlling the configuration (pose) of the manipulator, using several different optimizing criteria. The center level of the hierarchy servos the joints to their commanded trajectories using both linear feedback and model-based nonlinear control techniques. The lowest control level uses sensed joint torque to close torque servo loops, with the goal of improving the manipulator dynamic behavior. The control algorithms are subjected to a dynamic simulation before implementation.
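One standard way to resolve kinematic redundancy of the kind described above is sketched below: track the commanded endpoint velocity with the Jacobian pseudoinverse, and use the null-space projector to optimize a secondary criterion (here, closeness to a preferred posture; the Odetics controller may use other optimizing criteria). This is an illustrative sketch, not the project's implementation:

```python
import numpy as np

def redundancy_resolved_step(J, dx, q, q_pref, k_null=0.1):
    """One velocity-level control step for a redundant arm: the
    pseudoinverse term tracks the commanded endpoint twist dx, and
    the null-space term drifts the joints toward a preferred posture
    q_pref without disturbing the end effector."""
    J_pinv = np.linalg.pinv(J)                  # Moore-Penrose inverse
    primary = J_pinv @ dx                       # task-space tracking
    N = np.eye(J.shape[1]) - J_pinv @ J         # null-space projector
    secondary = N @ (k_null * (q_pref - q))     # posture optimization
    return primary + secondary                  # joint-rate command

# example: 7-joint arm (like the Odetics manipulator), 6-DOF task
J = np.random.default_rng(1).normal(size=(6, 7))
dq = redundancy_resolved_step(J, np.ones(6), np.zeros(7), np.zeros(7))
```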
Deriving novel relationships from the scientific literature is an important adjunct to data-mining activities for complex datasets in genomics and high-throughput screening activities. Automated text-mining algorithms can be used to extract relevant content from the literature and...
Nguyen, Hung P; Ayachi, Fouaz; Lavigne-Pelletier, Catherine; Blamoutier, Margaux; Rahimi, Fariborz; Boissy, Patrick; Jog, Mandar; Duval, Christian
2015-04-11
Recently, much attention has been given to the use of inertial sensors for remote monitoring of individuals with limited mobility. However, the focus has been mostly on the detection of symptoms, not specific activities. The objective of the present study was to develop an automated recognition and segmentation algorithm based on inertial sensor data to identify common gross motor patterns during activities of daily living. A modified Timed-Up-and-Go (TUG) task was used since it comprises four common daily living activities, Standing, Walking, Turning, and Sitting, all performed in a continuous fashion, resulting in six different segments during the task. Sixteen healthy older adults performed two trials of a 5 and 10 meter TUG task. They were outfitted with 17 inertial motion sensors covering each body segment. Data from the 10 meter TUG were used to identify pertinent sensors on the trunk, head, hip, knee, and thigh that provided suitable data for detecting and segmenting activities associated with the TUG. Raw data from sensors were detrended to remove sensor drift, normalized, and band-pass filtered with optimal frequencies to reveal kinematic peaks that corresponded to different activities. Segmentation was accomplished by identifying the time stamps of the first minimum or maximum to the right and the left of these peaks. Segmentation time stamps were compared to results from two examiners visually segmenting the activities of the TUG. We were able to detect these activities with 100% sensitivity and specificity (n = 192) during the 10 meter TUG. The rate of success was subsequently confirmed in the 5 meter TUG (n = 192) without altering the parameters of the algorithm. When applying the segmentation algorithms to the 10 meter TUG, we were able to parse 100% of the transition points (n = 224) between different segments, with results that were as reliable as and less variable than visual segmentation performed by two independent examiners. The present study lays the foundation for the development of a comprehensive algorithm to detect and segment naturalistic activities using inertial sensors, in the hope of automatically evaluating motor performance within the detected tasks.
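A schematic version of the described detrend/filter/peak-segmentation pipeline is given below; the filter band and prominence thresholds are illustrative assumptions, since the paper's optimal frequencies are sensor- and activity-specific:

```python
import numpy as np
from scipy.signal import butter, filtfilt, detrend, find_peaks

def segment_activity(acc, fs=100.0, band=(0.5, 3.0)):
    """Detrend to remove sensor drift, normalize, band-pass to expose
    kinematic peaks, then time-stamp the minima flanking each peak
    as segment boundaries (in seconds)."""
    sig = detrend(acc)
    sig = (sig - sig.mean()) / sig.std()
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], "bandpass")
    sig = filtfilt(b, a, sig)
    peaks, _ = find_peaks(sig, prominence=1.0)
    troughs, _ = find_peaks(-sig, prominence=0.5)
    segments = []
    for p in peaks:
        left = troughs[troughs < p]
        right = troughs[troughs > p]
        if len(left) and len(right):
            segments.append((left[-1] / fs, right[0] / fs))
    return segments
```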
NASA Astrophysics Data System (ADS)
Meier, Sebastian; Glinka, Katrin
2018-05-01
Personal and subjective perceptions of urban space have been a focus of various research projects in the areas of cartography, geography, and related fields such as urban planning. This paper illustrates how personal georeferenced activity data can be used in algorithmic modelling of certain aspects of mental maps and customised spatial visualisations. The technical implementation of the algorithm is accompanied by a preliminary study which evaluates the performance of the algorithm. As a linking element between personal perception, interpretation, and depiction of space and the fields of cartography and geography, we include perspectives from artistic practice and cultural theory. By developing novel visualisation concepts based on personal data, the paper in part mitigates the challenges presented by user modelling as used, amongst others, in LBS applications.
NASA Technical Reports Server (NTRS)
Wang, Menghua
2003-01-01
The primary focus of this proposed research is atmospheric correction algorithm evaluation and development, and satellite sensor calibration and characterization. It is well known that atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the derived water-leaving signals from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has been devoted to (a) understanding and correcting the artifacts appearing in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.
Testing of a long-term fall detection system incorporated into a custom vest for the elderly.
Bourke, Alan K; van de Ven, Pepijn W J; Chaya, Amy E; OLaighin, Gearóid M; Nelson, John
2008-01-01
A fall detection system and algorithm, incorporated into a custom designed garment, has been developed. The fall detection system uses a tri-axial accelerometer to detect impacts and monitor posture. This sensor is attached to a custom designed vest, intended to be worn by the elderly person under clothing. The fall detection algorithm incorporates both impact and posture detection capability. The vest and fall algorithm were tested by two teams of 5 elderly subjects, who wore the sensor system in turn for 2 weeks each and were monitored for 8 hours a day. The system previously achieved a sensitivity of >90% and a specificity of >99% using young healthy subjects performing falls and normal activities of daily living (ADL). In this study, over 833 hours of monitoring was performed from the elderly subjects over the course of the four weeks during normal daily activity. In this time no actual falls were recorded; however, the system registered a total of 42 fall-alerts, of which only 9 were received at the caretaker site. A fall detection system incorporated into a custom designed garment has been developed which will help reduce the incidence of the long-lie when falls occur in the elderly population. However, further development is required to reduce the number of false positives and improve the transmission of messages.
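The impact-plus-posture logic can be sketched as follows; the thresholds, axis convention (z vertical in upright stance), and confirmation delay are illustrative assumptions, not the values used in the vest system:

```python
import numpy as np

def fall_alert(acc, fs=100.0, impact_g=2.5, lying_thresh=0.5,
               check_delay_s=2.0):
    """Threshold sketch of impact-plus-posture fall detection: flag a
    high-magnitude impact, then confirm a lying posture shortly after.
    acc: (n, 3) acceleration in g, with z vertical when standing."""
    mag = np.linalg.norm(acc, axis=1)         # acceleration magnitude
    impacts = np.where(mag > impact_g)[0]
    for i in impacts:
        j = i + int(check_delay_s * fs)
        if j < len(acc):
            # vertical axis near zero gravity component => lying down
            if abs(acc[j, 2]) < lying_thresh:
                return True, i / fs           # alert with impact time
    return False, None
```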
Algorithm for evaluating the effectiveness of a high-rise development project based on current yield
NASA Astrophysics Data System (ADS)
Soboleva, Elena
2018-03-01
The article addresses the operational evaluation of development project efficiency in high-rise construction under the current economic conditions in Russia. The author touches on the following issues: problems of implementing development projects, the influence of the quality of operational evaluation of high-rise construction projects on overall efficiency, assessment of the influence of a project's external environment on the effectiveness of project activities under crisis conditions, and the quality of project management. The article proposes an algorithm and a methodological approach to quality management of developer project efficiency based on operational evaluation of current yield. The methodology for calculating the current efficiency of a development project for high-rise construction has been updated.
NASA Astrophysics Data System (ADS)
Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka
Instability of the calculation process and growth in calculation time caused by the increasing size of continuous optimization problems remain the major issues to be solved in applying the technique to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables getting closer to their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe some numerical results on the commonly used benchmark problems called "CUTEr" to show the effectiveness of the proposed method. Furthermore, test results on large-sized ELD problems (Economic Load Dispatching problems in electric power supply scheduling) are also described as a practical industrial application.
Kim, Heung Soo; Sohn, Jung Woo; Jeon, Juncheol; Choi, Seung-Bok
2013-01-01
In this work, active vibration control of an underwater cylindrical shell structure was investigated, to suppress structural vibration and structure-borne noise in water. Finite element modeling of the submerged cylindrical shell structure was developed, and experimentally evaluated. Modal reduction was conducted to obtain the reduced system equation for the active feedback control algorithm. Three Macro Fiber Composites (MFCs) were used as actuators and sensors. One MFC was used as an exciter. The optimum control algorithm was designed based on the reduced system equations. The active control performance was then evaluated using the lab scale underwater cylindrical shell structure. Structural vibration and structure-borne noise of the underwater cylindrical shell structure were reduced significantly by activating the optimal controller associated with the MFC actuators. The results provide that active vibration control of the underwater structure is a useful means to reduce structure-borne noise in water. PMID:23389344
Cloud and aerosol studies using combined CPL and MAS data
NASA Astrophysics Data System (ADS)
Vaughan, Mark A.; Rodier, Sharon; Hu, Yongxiang; McGill, Matthew J.; Holz, Robert E.
2004-11-01
Current uncertainties in the role of aerosols and clouds in the Earth's climate system limit our ability to model the climate system and predict climate change. These limitations are due primarily to the difficulty of adequately measuring aerosols and clouds on a global scale. The A-train satellites (Aqua, CALIPSO, CloudSat, PARASOL, and Aura) will provide an unprecedented opportunity to address these uncertainties. The various active and passive sensors of the A-train will use a variety of measurement techniques to provide comprehensive observations of the multi-dimensional properties of clouds and aerosols. However, to fully achieve the potential of this ensemble requires a robust data analysis framework to optimally and efficiently map these individual measurements into a comprehensive set of cloud and aerosol physical properties. In this work we introduce the Multi-Instrument Data Analysis and Synthesis (MIDAS) project, whose goal is to develop a suite of physically sound and computationally efficient algorithms that will combine active and passive remote sensing data in order to produce improved assessments of aerosol and cloud radiative and microphysical properties. These algorithms include (a) an intelligent feature detection algorithm that combines inputs from both active and passive sensors, and (b) the identification of recognizable multi-instrument signatures related to aerosol and cloud type, derived from clusters of image pixels and the associated vertical profile information. Classification of these signatures will lead to the automated identification of aerosol and cloud types. Testing of these new algorithms is done using currently existing and readily available active and passive measurements from the Cloud Physics Lidar and the MODIS Airborne Simulator, which simulate, respectively, the CALIPSO and MODIS A-train instruments.
McKinney, Mark C; Riley, Jeffrey B
2007-12-01
The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with review of the patient history to identify predictors for heparin resistance. The definition for heparin resistance contained in the algorithm is an activated clotting time < 450 seconds with > 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is anti-thrombin III supplement. The algorithm seems to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm vs. current standard clinical practice.
NASA Astrophysics Data System (ADS)
Minnis, P.; Sun-Mack, S.; Chang, F.; Huang, J.; Nguyen, L.; Ayers, J. K.; Spangenberg, D. A.; Yi, Y.; Trepte, C. R.
2006-12-01
During the last few years, several algorithms have been developed to detect and retrieve multilayered clouds using passive satellite data. Assessing these techniques has been difficult due to the need for active sensors such as cloud radars and lidars that can "see" through different layers of clouds. Such sensors have been available only at a few surface sites and on aircraft during field programs. With the launch of the CALIPSO and CloudSat satellites on April 28, 2006, it is now possible to observe multilayered systems all over the globe using collocated cloud radar and lidar data. As part of the A-Train, these new active sensors are also matched in time and space with passive measurements from the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer - EOS (AMSR-E). The Clouds and the Earth's Radiant Energy System (CERES) has been developing and testing algorithms to detect ice-over-water overlapping cloud systems and to retrieve the cloud liquid water path (LWP) and ice water path (IWP) for those systems. One technique uses a combination of the CERES cloud retrieval algorithm applied to MODIS data and a microwave retrieval method applied to AMSR-E data. The combination of a CO2-slicing cloud retrieval technique with the CERES algorithms applied to MODIS data (Chang et al., 2005) is used to detect and analyze such overlapped systems that contain thin ice clouds. A third technique uses brightness temperature differences and the CERES algorithms to detect similar overlapped systems. This paper uses preliminary CloudSat and CALIPSO data to begin a global-scale assessment of these different methods. The long-term goals are to assess and refine the algorithms to aid the development of an optimal combination of the techniques to better monitor ice and liquid water clouds in overlapped conditions.
Atmospheric electricity/meteorology analysis
NASA Technical Reports Server (NTRS)
Goodman, Steven J.; Blakeslee, Richard; Buechler, Dennis
1993-01-01
This activity focuses on Lightning Imaging Sensor (LIS)/Lightning Mapper Sensor (LMS) algorithm development and applied research. Specifically we are exploring the relationships between (1) global and regional lightning activity and rainfall, and (2) storm electrical development, physics, and the role of the environment. U.S. composite radar-rainfall maps and ground strike lightning maps are used to understand lightning-rainfall relationships at the regional scale. These observations are then compared to SSM/I brightness temperatures to simulate LIS/TRMM multi-sensor algorithm data sets. These data sets are supplied to the WETNET project archive. WSR-88D (NEXRAD) data are also used as they become available. The results of this study allow us to examine the information content from lightning imaging sensors in low-Earth and geostationary orbits. Analysis of tropical and U.S. data sets continues. A neural network/sensor fusion algorithm is being refined for objectively associating lightning and rainfall with their parent storm systems. Total lightning data from interferometers are being used in conjunction with data from the national lightning network. A 6-year lightning/rainfall climatology has been assembled for LIS sampling studies.
A Functional-Genetic Scheme for Seizure Forecasting in Canine Epilepsy.
Bou Assi, Elie; Nguyen, Dang K; Rihana, Sandy; Sawan, Mohamad
2018-06-01
The objective of this work is the development of an accurate seizure forecasting algorithm that considers the brain's functional connectivity for electrode selection. We start by proposing the Kmeans-directed transfer function, an adaptive functional connectivity method intended for seizure onset zone localization in bilateral intracranial EEG recordings. Electrodes identified as seizure activity sources and sinks are then used to implement a seizure-forecasting algorithm on long-term continuous recordings in dogs with naturally occurring epilepsy. A precision-recall genetic algorithm is proposed for feature selection in line with a probabilistic support vector machine classifier. Epileptic activity generators were focal in all dogs, confirming the diagnosis of focal epilepsy in these animals, while sinks spanned both hemispheres in 2 of 3 dogs. Seizure forecasting results show performance improvement compared to previous studies, achieving an average sensitivity of 84.82% and time in warning of 0.1. The achieved performances highlight the feasibility of seizure forecasting in canine epilepsy. The ability to improve seizure forecasting provides promise for the development of EEG-triggered closed-loop seizure intervention systems for ambulatory implantation in patients with refractory epilepsy.
Operational algorithm development and refinement approaches
NASA Astrophysics Data System (ADS)
Ardanuy, Philip E.
2003-11-01
Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved spatial resolution multi-spectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and to best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data, e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and the Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE). In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provide (well-characterized, known sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offer. By using the best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change while managing change. This approach combines a "known-risk" frozen baseline and preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that takes into account the specific maturities of each system's (sensor and algorithm) technology to provide for a program that contains continuous improvement while retaining its manageability.
Dissolved Organic Carbon along the Louisiana coast from MODIS and MERIS satellite data
NASA Astrophysics Data System (ADS)
Chaichi Tehrani, N.; D'Sa, E. J.
2012-12-01
Dissolved organic carbon (DOC) plays a critical role in the coastal and ocean carbon cycle. Hence, it is important to monitor and investigate its distribution and fate in coastal waters. Since DOC cannot be measured directly by satellite remote sensors, chromophoric dissolved organic matter (CDOM), an optically active fraction of DOC, can be used as a proxy to trace DOC concentrations. Here, satellite ocean color data from MODIS and MERIS and field measurements of CDOM and DOC were used to develop and assess CDOM and DOC ocean color algorithms for coastal waters. To develop a CDOM retrieval algorithm, empirical relationships between the CDOM absorption coefficient at 412 nm (aCDOM(412)) and the reflectance ratios Rrs(488)/Rrs(555) for MODIS and Rrs(510)/Rrs(560) for MERIS were established. The performance of the two CDOM empirical algorithms was evaluated for retrieval of aCDOM(412) from MODIS and MERIS in the northern Gulf of Mexico. Further, empirical algorithms were developed to estimate DOC concentration using the relationship between in situ aCDOM(412) and DOC, as well as the newly developed CDOM empirical algorithms. Our results revealed that DOC concentration was strongly correlated to aCDOM(412) for the summer and spring-winter periods (r2 = 0.9 for both periods). Then, using the aCDOM(412)-Rrs and aCDOM(412)-DOC relationships derived from field measurements, DOC-Rrs relationships were established for MODIS and MERIS data. The DOC empirical algorithms performed well, as indicated by match-up comparisons between satellite estimates and field data (R2 = 0.52 and 0.58 for MODIS and MERIS, respectively, for the summer period). These algorithms were then used to examine DOC distribution along the Louisiana coast.
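A sketch of how such an empirical band-ratio algorithm is typically built: fit a power law between the reflectance ratio and aCDOM(412) in log-log space, then apply it to satellite reflectances. The values below are invented placeholders, not the study's measurements:

```python
import numpy as np

# paired field measurements (illustrative values only)
ratio = np.array([0.6, 0.8, 1.0, 1.3, 1.7])       # Rrs(488)/Rrs(555)
acdom = np.array([0.90, 0.55, 0.38, 0.22, 0.12])  # aCDOM(412), m^-1

# power-law model aCDOM(412) = A * ratio^B, fitted in log-log space
B, lnA = np.polyfit(np.log(ratio), np.log(acdom), 1)
A = np.exp(lnA)

def acdom_from_modis(rrs488, rrs555):
    """Retrieve aCDOM(412) from a MODIS band ratio."""
    return A * (rrs488 / rrs555) ** B

# DOC would then follow from a seasonal aCDOM(412)-DOC regression
print(acdom_from_modis(0.004, 0.005))
```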
HEAVY DUTY DIESEL VEHICLE LOAD ESTIMATION: DEVELOPMENT OF VEHICLE ACTIVITY OPTIMIZATION ALGORITHM
The Heavy-Duty Vehicle Modal Emission Model (HDDV-MEM) developed by the Georgia Institute of Technology (Georgia Tech) has a capability to model link-specific second-by-second emissions using speed/acceleration matrices. To estimate emissions, engine power demand calculated usin...
NASA Astrophysics Data System (ADS)
Mugnai, A.; Smith, E. A.; Tripoli, G. J.; Bizzarri, B.; Casella, D.; Dietrich, S.; Di Paola, F.; Panegrossi, G.; Sanò, P.
2013-04-01
Satellite Application Facility on Support to Operational Hydrology and Water Management (H-SAF) is a EUMETSAT (European Organisation for the Exploitation of Meteorological Satellites) program, designed to deliver satellite products of hydrological interest (precipitation, soil moisture and snow parameters) over the European and Mediterranean region to research and operations users worldwide. Six satellite precipitation algorithms and concomitant precipitation products are the responsibility of various agencies in Italy. Two of these algorithms have been designed for maximum accuracy by restricting their inputs to measurements from conical and cross-track scanning passive microwave (PMW) radiometers mounted on various low Earth orbiting satellites. They have been developed at the Italian National Research Council/Institute of Atmospheric Sciences and Climate in Rome (CNR/ISAC-Rome), and are providing operational retrievals of surface rain rate and its phase properties. Each of these algorithms is physically based; however, the first, referred to as the Cloud Dynamics and Radiation Database (CDRD) algorithm, uses a Bayesian-based solution solver, while the second, referred to as the PMW Neural-net Precipitation Retrieval (PNPR) algorithm, uses a neural network-based solution solver. Herein we first provide an overview of the two initial EU research and applications programs that motivated their development, EuroTRMM and EURAINSAT (European Satellite Rainfall Analysis and Monitoring at the Geostationary Scale), and the current H-SAF program that provides the framework for their operational use and continued development. We stress the relevance of the CDRD and PNPR algorithms and their precipitation products in helping secure the goals of H-SAF's scientific and operations agenda, the former helpful as a secondary calibration reference to other algorithms in H-SAF's complete mix of algorithms. Descriptions of the algorithms' designs are provided, including a few examples of their performance. This aspect of the development of the two algorithms is placed in the context of what we refer to as the TRMM era, which is the era denoting the active and ongoing period of the Tropical Rainfall Measuring Mission (TRMM) that helped inspire their original development. In 2015, the ISAC-Rome precipitation algorithms will undergo a transformation beginning with the upcoming Global Precipitation Measurement (GPM) mission, particularly the GPM Core Satellite technologies. A few years afterward, the first pair of imaging and sounding Meteosat Third Generation (MTG) satellites will be launched, providing additional technological advances. Various opportunities presented by the GPM Core and MTG satellites for improving the current CDRD and PNPR precipitation retrieval algorithms, as well as extending their product capability, are discussed.
NASA Astrophysics Data System (ADS)
Ma, Xunjun; Lu, Yang; Wang, Fengjiao
2017-09-01
This paper presents recent advances in the reduction of multifrequency noise inside a helicopter cabin using an active structural acoustic control system based on the active gearbox struts technical approach. To attenuate the multifrequency gearbox vibrations and resulting noise, a new scheme of discrete model predictive sliding mode control is proposed based on a controlled auto-regressive moving average model. Its implementation only needs input/output data; hence a broader frequency range of the controlled system is modelled and the burden of state observer design is relieved. Furthermore, a new iteration form of the algorithm is designed, improving development efficiency and run speed. To verify the algorithm's effectiveness and self-adaptability, real-time active control experiments are performed on a newly developed helicopter model system. The helicopter model can generate gear meshing vibration/noise similar to a real helicopter, with a specially designed gearbox and active struts. The algorithm's control abilities are checked progressively by single-input single-output and multiple-input multiple-output experiments via different feedback strategies: (1) controlling gear meshing noise by attenuating vibrations at key points on the transmission path, and (2) directly controlling the gear meshing noise in the cabin using the actuators. Results confirm that the active control system is practical for cancelling multifrequency helicopter interior noise and also weakens the frequency modulation of the tones. For many cases, the attenuation of the measured noise exceeds 15 dB, with maximum reduction reaching 31 dB. The control process is also demonstrated to be smoother and faster.
Quantifying Aluminum Crystal Size Part 2: The Model-Development Sequence
ERIC Educational Resources Information Center
Hjalmarson, Margret; Diefes-Dux, Heidi A.; Bowman, Keith; Zawojewski, Judith S.
2006-01-01
We have designed model-development sequences using a common context to provide authentic problem-solving experiences for first-year students. The model-development sequence takes a model-eliciting activity a step further by engaging students in the exploration and adaptation of a mathematical model (e.g., procedure, algorithm, method) for solving…
Test Platforms for Model-Based Flight Research
NASA Astrophysics Data System (ADS)
Dorobantu, Andrei
Demonstrating the reliability of flight control algorithms is critical to integrating unmanned aircraft systems into the civilian airspace. For many potential applications, design and certification of these algorithms will rely heavily on mathematical models of the aircraft dynamics. Therefore, the aerospace community must develop flight test platforms to support the advancement of model-based techniques. The University of Minnesota has developed a test platform dedicated to model-based flight research for unmanned aircraft systems. This thesis provides an overview of the test platform and its research activities in the areas of system identification, model validation, and closed-loop control for small unmanned aircraft.
Deducing chemical structure from crystallographically determined atomic coordinates
Bruno, Ian J.; Shields, Gregory P.; Taylor, Robin
2011-01-01
An improved algorithm has been developed for assigning chemical structures to incoming entries to the Cambridge Structural Database, using only the information available in the deposited CIF. Steps in the algorithm include detection of bonds, selection of polymer unit, resolution of disorder, and assignment of bond types and formal charges. The chief difficulty is posed by the large number of metallo-organic crystal structures that must be processed, given our aspiration that assigned chemical structures should accurately reflect properties such as the oxidation states of metals and redox-active ligands, metal coordination numbers and hapticities, and the aromaticity or otherwise of metal ligands. Other complications arise from disorder, especially when it is symmetry imposed or modelled with the SQUEEZE algorithm. Each assigned structure is accompanied by an estimate of reliability and, where necessary, diagnostic information indicating probable points of error. Although the algorithm was written to aid building of the Cambridge Structural Database, it has the potential to develop into a general-purpose tool for adding chemical information to newly determined crystal structures. PMID:21775812
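The bond-detection step can be illustrated with the standard covalent-radius heuristic: two atoms are considered bonded when their separation falls below the sum of their covalent radii plus a tolerance. A minimal sketch follows (the radii subset and tolerance are illustrative; the CSD pipeline layers bond typing, charge assignment, and disorder handling on top of this):

```python
import numpy as np

# covalent radii in angstroms (subset; commonly tabulated estimates)
COV_RADII = {"H": 0.31, "C": 0.76, "N": 0.71, "O": 0.66, "Fe": 1.32}

def detect_bonds(symbols, coords, tol=0.4):
    """Distance-based bond perception from atomic coordinates:
    bond (i, j) is emitted when the interatomic distance is below
    the sum of covalent radii plus a tolerance."""
    bonds = []
    n = len(symbols)
    for i in range(n):
        for j in range(i + 1, n):
            cutoff = COV_RADII[symbols[i]] + COV_RADII[symbols[j]] + tol
            if np.linalg.norm(coords[i] - coords[j]) < cutoff:
                bonds.append((i, j))
    return bonds

coords = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
print(detect_bonds(["C", "O"], coords))   # [(0, 1)]
```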
NASA Technical Reports Server (NTRS)
Njoku, Eni; Entekhabi, Dara; O'Neill, Peggy; Jackson, Tom; Kellogg, Kent; Entin, Jared
2011-01-01
NASA's Soil Moisture Active Passive (SMAP) mission, planned for launch in late 2014, has as its key measurement objective the frequent, global mapping of near-surface soil moisture and its freeze-thaw state. SMAP soil moisture and freeze/thaw measurements at 10 km and 3 km resolutions, respectively, would enable significantly improved estimates of water, energy, and carbon transfers between the land and atmosphere. Soil moisture control of these fluxes is a key factor in the performance of atmospheric models used for weather forecasts and climate projections. Soil moisture measurements are also of great importance in assessing floods and for monitoring drought. In addition, observations of soil moisture and freeze/thaw timing over the boreal latitudes can help reduce uncertainties in quantifying the global carbon balance. The SMAP measurement concept utilizes an L-band radar and radiometer sharing a rotating 6-meter mesh reflector antenna. The SMAP radiometer and radar flight hardware and ground processing designs incorporate approaches to identify and mitigate potential terrestrial radio frequency interference (RFI). The radar and radiometer instruments are planned to operate in a 680 km polar orbit, viewing the surface at a constant 40-degree incidence angle with a 1000-km swath width, providing 3-day global coverage. Data from the instruments would yield global maps of soil moisture and freeze/thaw state at 10 km and 3 km resolutions, respectively, every two to three days. Plans are also to provide a radiometer-only soil moisture product at 40-km spatial resolution. This product and the underlying brightness temperatures have characteristics similar to those provided by the Soil Moisture and Ocean Salinity (SMOS) mission. As a result, there are unique opportunities for common data product development and continuity between the two missions. SMAP also has commonalities with other satellite missions having L-band radiometer and/or radar sensors applicable to soil moisture measurement, such as Aquarius, SAOCOM, and ALOS-2. The algorithms and data products for SMAP are being developed in the SMAP Science Data System (SDS) Testbed. The algorithms are developed and evaluated in the SDS Testbed using simulated SMAP observations as well as observational data from current airborne and spaceborne L-band sensors including SMOS. The SMAP project is developing a Calibration and Validation (Cal/Val) Plan designed to support algorithm development (pre-launch) and data product validation (post-launch). A key component of the Cal/Val Plan is the identification, characterization, and instrumentation of sites that can be used to calibrate and validate the sensor data (Level 1) and derived geophysical products (Level 2 and higher). In this presentation we report on the development status of the SMAP data product algorithms and the planning and implementation of the SMAP Cal/Val program. Several components of the SMAP algorithm development and Cal/Val plans have commonality with those of SMOS, and for this reason there are shared activities and resources that can be utilized between the missions, including in situ networks, ancillary data sets, and long-term monitoring sites.
NASA Astrophysics Data System (ADS)
Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Murray, J. R.
2016-12-01
Finite-fault source algorithms can greatly benefit earthquake early warning (EEW) systems. Estimates of finite-fault parameters provide spatial information, which can significantly improve real-time shaking calculations and help with disaster response. In this project, we have focused on integrating a finite-fault seismic-geodetic algorithm into the West Coast ShakeAlert framework. The seismic part is FinDer 2, a C++ version of the algorithm developed by Böse et al. (2012). It interpolates peak ground accelerations and calculates the best fault length and strike from template matching. The geodetic part is a C++ version of BEFORES, the algorithm developed by Minson et al. (2014) that uses a Bayesian methodology to search for the most probable slip distribution on a fault of unknown orientation. Ultimately, these two will be used together where FinDer generates a Bayesian prior for BEFORES via the methodology of Minson et al. (2015), and the joint solution will generate estimates of finite-fault extent, strike, dip, best slip distribution, and magnitude. We have created C++ versions of both FinDer and BEFORES using open source libraries and have developed a C++ Application Protocol Interface (API) for them both. Their APIs allow FinDer and BEFORES to contribute to the ShakeAlert system via an open source messaging system, ActiveMQ. FinDer has been receiving real-time data, detecting earthquakes, and reporting messages on the development system for several months. We are also testing FinDer extensively with Earthworm tankplayer files. BEFORES has been tested with ActiveMQ messaging in the ShakeAlert framework, and works off a FinDer trigger. We are finishing the FinDer-BEFORES connections in this framework, and testing this system via seismic-geodetic tankplayer files. This will include actual and simulated data.
Collegial Activity Learning between Heterogeneous Sensors.
Feuz, Kyle D; Cook, Diane J
2017-11-01
Activity recognition algorithms have matured and become more ubiquitous in recent years. However, these algorithms are typically customized for a particular sensor platform. In this paper we introduce PECO, a Personalized activity ECOsystem, that transfers learned activity information seamlessly between sensor platforms in real time so that any available sensor can continue to track activities without requiring its own extensive labeled training data. We introduce a multi-view transfer learning algorithm that facilitates this information handoff between sensor platforms and provide theoretical performance bounds for the algorithm. In addition, we empirically evaluate PECO using datasets that utilize heterogeneous sensor platforms to perform activity recognition. These results indicate that not only can activity recognition algorithms transfer important information to new sensor platforms, but any number of platforms can work together as colleagues to boost performance.
Integrated Solution for Physical Activity Monitoring Based on Mobile Phone and PC.
Lee, Mi Hee; Kim, Jungchae; Jee, Sun Ha; Yoo, Sun Kook
2011-03-01
This study is part of the ongoing development of treatment methods for the metabolic syndrome (MS) project, which involves monitoring daily physical activity. In this study, we focused on detecting walking activity among many other physical activities such as standing, sitting, lying, walking, running, and falling. In particular, we implemented an integrated solution for monitoring various physical activities using a mobile phone and a PC. We placed an iPod touch, which has a built-in tri-axial accelerometer, on the waist of the subjects and measured changes in the acceleration signal according to changes in ambulatory movement and physical activities. First, we developed iPod-based programs that compute step counts, walking velocity, energy consumption, and metabolic equivalents. Second, we developed a PC-based activity recognition program; the iPod synchronizes with the PC to transmit the measured data using the iPhoneBrowser program. Using the implemented system, we analyzed changes in the acceleration signal according to changes in six activity patterns. We compared results of the step counting algorithm at different positions; the mean accuracy across these tests was 99.6 ± 0.61% (right waist location) and 99.1 ± 0.87% (right pants pocket). Moreover, the six activities were recognized with over 98% accuracy using a fuzzy c-means classification algorithm. In addition, we developed programs for data synchronization between the PC and iPod for long-term physical activity monitoring. This study provides evidence on using a mobile phone and PC for monitoring various activities in everyday life. The next step for our system will be the addition of standard values for various physical activities in everyday life, such as household duties, and a health guideline on how to select and plan exercise considering one's physical characteristics and condition.
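A minimal sketch of the kind of peak-counting step detector such a program implements, assuming a 50 Hz tri-axial signal; the cadence band and thresholds are illustrative, not the study's calibrated values:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def count_steps(acc_xyz, fs=50.0):
    """Band-pass the acceleration magnitude around typical walking
    cadence (~0.8-3 Hz) and count prominent peaks as steps."""
    mag = np.linalg.norm(acc_xyz, axis=1)       # magnitude of (n, 3)
    b, a = butter(2, [0.8 / (fs / 2), 3.0 / (fs / 2)], "bandpass")
    walk = filtfilt(b, a, mag)
    peaks, _ = find_peaks(walk, prominence=0.3,
                          distance=int(0.3 * fs))  # steps >= 0.3 s apart
    return len(peaks)
```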
Personalized Physical Activity Coaching: A Machine Learning Approach
Dijkhuis, Talko B.; van Ittersum, Miriam W.; Velthuijsen, Hugo
2018-01-01
Living a sedentary lifestyle is one of the major causes of numerous health problems. To encourage employees to lead a less sedentary life, the Hanze University started a health promotion program. One of the interventions in the program was the use of an activity tracker to record participants' daily step count. The daily step count served as input for a fortnightly coaching session. In this paper, we investigate the possibility of automating part of the coaching procedure on physical activity by providing personalized feedback throughout the day on a participant's progress in achieving a personal step goal. The gathered step count data was used to train eight different machine learning algorithms to make hourly estimations of the probability of achieving a personalized, daily steps threshold. In 80% of the individual cases, the Random Forest algorithm was the best performing algorithm (mean accuracy = 0.93, range = 0.88–0.99, and mean F1-score = 0.90, range = 0.87–0.94). To demonstrate the practical usefulness of these models, we developed a proof-of-concept Web application that provides personalized feedback about whether a participant is expected to reach his or her daily threshold. We argue that the use of machine learning could become an invaluable asset in the process of automated personalized coaching. The individualized algorithms allow for predicting physical activity during the day and provide the possibility to intervene in time. PMID:29463052
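A compact sketch of the modelling step: train a Random Forest on per-hour cumulative step counts and read off the hourly probability of reaching the daily goal. The data and goal threshold below are synthetic stand-ins for the program's tracker data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# rows: participant-days; columns: cumulative steps at each hour
X = rng.integers(0, 1500, size=(300, 10)).cumsum(axis=1)
# label: whether the daily goal was eventually met (synthetic)
y = (X[:, -1] + rng.integers(0, 2000, 300) > 8000).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:200], y[:200])
prob = clf.predict_proba(X[200:])[:, 1]   # hourly achievement probability
print("F1:", f1_score(y[200:], prob > 0.5))
```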
On-Board Cryospheric Change Detection By The Autonomous Sciencecraft Experiment
NASA Astrophysics Data System (ADS)
Doggett, T.; Greeley, R.; Castano, R.; Cichy, B.; Chien, S.; Davies, A.; Baker, V.; Dohm, J.; Ip, F.
2004-12-01
The Autonomous Sciencecraft Experiment (ASE) is operating on board Earth Observing-1 (EO-1) with the Hyperion hyperspectral visible/near-IR spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected and filtering of null data such as no change or cloud cover. This has application to the study of cryospheres on Earth, Mars, and the icy moons of the outer solar system. A cryosphere classification algorithm, in combination with a previously developed cloud algorithm [1], was tested on board ten times from March through August 2004. The cloud algorithm correctly screened out three scenes with total cloud cover, while the cryosphere algorithm detected alpine snow cover in the Rocky Mountains, lake thaw near Madison, Wisconsin, and the presence and subsequent break-up of sea ice in the Barrow Strait of the Canadian Arctic. Hyperion has 220 bands ranging from 400 to 2400 nm, with a spatial resolution of 30 m/pixel and a spectral resolution of 10 nm. Limited on-board memory and processing speed imposed the constraint that only partially processed Level 0.5 data, with dark image subtraction and gain factors applied but not full radiometric calibration, could be used. In addition, a maximum of 12 bands could be used for any stacked sequence of algorithms run for a scene on board. The cryosphere algorithm was developed to classify snow, water, ice, and land using six Hyperion bands at 427, 559, 661, 864, 1245, and 1649 nm. Of these, only 427 nm overlaps with the cloud algorithm. The cloud algorithm was developed with Level 1 data, which introduces complications because of the incomplete calibration of the SWIR in Level 0.5 data, including a high level of noise in the 1377 nm band used by the cloud algorithm. Development of a more robust cryosphere classifier, including cloud classification specifically adapted to Level 0.5 data, is in progress for deployment on EO-1 as part of continued ASE operations. [1] Griffin, M.K. et al., Cloud Cover Detection Algorithm For EO-1 Hyperion Imagery, SPIE 17, 2003.
Motion artifact removal algorithm by ICA for e-bra: a women ECG measurement system
NASA Astrophysics Data System (ADS)
Kwon, Hyeokjun; Oh, Sechang; Varadan, Vijay K.
2013-04-01
Wearable ECG (electrocardiogram) measurement systems have increasingly been developed for people who suffer from CVD (cardiovascular disease) and have very active lifestyles. In the case of female CVD patients especially, several abnormal symptoms accompany CVDs. Therefore, monitoring women's ECG signals is a significant diagnostic method for preventing sudden heart attack. The e-bra ECG measurement system from our previous work provides a more convenient option for women than a Holter monitor system. The e-bra system was developed with a motion artifact removal algorithm using an adaptive filter with LMS (least mean square) and a wandering-noise-baseline detection algorithm. In this paper, ICA (independent component analysis) algorithms are suggested to remove the motion artifact factor for the e-bra system. First, the ICA algorithms are developed with two kinds of statistical measures, kurtosis and entropy, and evaluated by performing simulations with an ECG signal created by the sgolayfilt function of MATLAB, a noise signal including 0.4 Hz, 1.1 Hz and 1.9 Hz components, and a weighting vector W estimated by kurtosis or entropy. A correlation value is used as the degree of similarity between the created ECG signal and the estimated new ECG signal. In the real-time e-bra system, two pseudo signals are extracted by multiplying a random weighting vector W with the measured ECG signal from the e-bra system and the noise component signal obtained by the noise extraction algorithm from our previous work. The suggested ICA algorithm, based on kurtosis or entropy, is used to estimate the new ECG signal Y without the noise component.
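As a hedged illustration of the ICA step (the paper's own MATLAB implementation is not given), the sketch below separates a spiky "ECG" from a low-frequency artifact with scikit-learn's FastICA, which offers both kurtosis-like ('cube') and negentropy-based ('logcosh') contrast functions. All signals here are synthetic stand-ins.

```python
# Illustrative ICA unmixing, not the e-bra system's code.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 10, 4000)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 15          # crude spiky "ECG" surrogate
noise = np.sin(2 * np.pi * 0.4 * t) + 0.5 * np.sin(2 * np.pi * 1.9 * t)
X = np.column_stack([ecg + 0.8 * noise, noise + 0.3 * ecg])  # two mixtures

ica = FastICA(n_components=2, fun='logcosh', random_state=0)
S = ica.fit_transform(X)   # columns are the estimated independent sources
# Keep the source most correlated with the corrupted channel as the clean ECG.
corr = [abs(np.corrcoef(S[:, i], ecg)[0, 1]) for i in range(2)]
clean_ecg = S[:, int(np.argmax(corr))]
```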
A development of intelligent entertainment robot for home life
NASA Astrophysics Data System (ADS)
Kim, Cheoltaek; Lee, Ju-Jang
2005-12-01
The purpose of this paper is to present the study and design idea for an intelligent entertainment robot with educational purpose (IRFEE). The robot has been designed for home life with dependability and interaction in mind. The developed robot has three objectives: 1. develop an autonomous robot; 2. design the robot considering mobility and robustness; 3. develop the robot interface and software considering entertainment and education functionalities. Autonomous navigation was implemented by active-vision-based SLAM and a modified EPF algorithm. The two differential wheels and the pan-tilt unit were designed for mobility and robustness, and the exterior was designed considering aesthetic elements and minimizing interference. The speech and tracking algorithms provide a good interface with humans. Image transfer and Internet site connection are needed for the remote-connection service and the educational purpose.
Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I
2017-08-15
Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying the surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provides an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm has been shown to be one of the most promising algorithms among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits to delineate the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.
Automatic Near-Real-Time Detection of CMEs in Mauna Loa K-Cor Coronagraph Images
NASA Astrophysics Data System (ADS)
Thompson, W. T.; St. Cyr, O. C.; Burkepile, J. T.; Posner, A.
2017-10-01
A simple algorithm has been developed to detect the onset of coronal mass ejections (CMEs), together with speed estimates, in near-real time using linearly polarized white-light solar coronal images from the Mauna Loa Solar Observatory K-Cor telescope. Ground observations in the low corona can warn of CMEs well before they appear in space coronagraphs. The algorithm used is a variation on the Solar Eruptive Event Detection System developed at George Mason University. It was tested against K-Cor data taken between 29 April 2014 and 20 February 2017, on days identified as containing CMEs. This resulted in testing of 139 days' worth of data containing 171 CMEs. The detection rate varied from close to 80% when solar activity was high down to as low as 20-30% when activity was low. The difference in effectiveness with solar cycle is attributed to the relative prevalence of strong CMEs between active and quiet periods. There were also 12 false detections, leading to an average false detection rate of 8.6%. The K-Cor data were also compared with major solar energetic particle (SEP) storms during this time period. There were three SEP events detected either at Earth or at one of the two STEREO spacecraft when K-Cor was observing during the relevant time period. The algorithm successfully generated alerts for two of these events, with lead times of 1-3 h before the SEP onset at 1 AU. The third event was not detected by the automatic algorithm because of the unusually broad width in position angle.
Non-US data compression and coding research. FASAC Technical Assessment Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, R.M.; Cohn, M.; Craver, L.W.
1993-11-01
This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in the development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years, outside or inside the United States, there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.
Extravehicular Activity System Sizing Analysis Tool (EVAS_SAT)
NASA Technical Reports Server (NTRS)
Brown, Cheryl B.; Conger, Bruce C.; Miranda, Bruno M.; Bue, Grant C.; Rouen, Michael N.
2007-01-01
An effort was initiated by NASA/JSC in 2001 to develop an Extravehicular Activity System Sizing Analysis Tool (EVAS_SAT) for the sizing of Extravehicular Activity System (EVAS) architectures and studies. Its intent was to support space suit development efforts and to aid in conceptual designs for future human exploration missions. Its basis was the Life Support Options Performance Program (LSOPP), a spacesuit and portable life support system (PLSS) sizing program developed for NASA/JSC circa 1990. EVAS_SAT estimates the mass, power, and volume characteristics for user-defined EVAS architectures, including Suit Systems, Airlock Systems, Tools and Translation Aids, and Vehicle Support equipment. The tool has undergone annual changes and has been updated as new data have become available. Certain sizing algorithms have been developed based on industry standards, while others are based on the LSOPP sizing routines. The sizing algorithms used by EVAS_SAT are preliminary. Because EVAS_SAT was designed for use by members of the EVA community, familiarity with the subsystems and with the analysis of results is assumed on the part of the intended user group. The current EVAS_SAT is operated within Microsoft Excel 2003 using a Visual Basic interface system.
Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor
2012-01-01
A methodology for the solution of Painlevé equation-I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed with the help of a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using the particle swarm optimization algorithm, a viable global search method, hybridized with the active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on a large number of independent runs and their comprehensive statistical analysis. Comparative studies of the results obtained are made with MATHEMATICA solutions, as well as with the variational iteration method and the homotopy perturbation method. PMID:22919371
Active control of flexible structures using a fuzzy logic algorithm
NASA Astrophysics Data System (ADS)
Cohen, Kelly; Weller, Tanchum; Ben-Asher, Joseph Z.
2002-08-01
This study deals with the development and application of an active control law for the vibration suppression of beam-like flexible structures experiencing transient disturbances. Collocated pairs of sensors/actuators provide active control of the structure. A design methodology for the closed-loop control algorithm based on fuzzy logic is proposed. First, the behavior of the open-loop system is observed. Then, the number and locations of collocated actuator/sensor pairs are selected. The proposed control law, which is based on the principles of passivity, commands the actuator to emulate the behavior of a dynamic vibration absorber. The absorber is tuned to a targeted frequency, whereas the damping coefficient of the dashpot is varied in a closed loop using a fuzzy-logic-based algorithm. This approach not only ensures the inherent stability associated with passive absorbers, but also circumvents the phenomenon of modal spillover. The developed controller is applied to the AFWAL/FIB 10-bar truss. Simulated results using MATLAB show that the closed-loop system exhibits fairly quick settling times and desirable performance, as well as robustness characteristics. To demonstrate the robustness of the control system to changes in the temporal dynamics of the flexible structure, the transient response to a considerably perturbed plant is simulated. The modal frequencies of the 10-bar truss were both raised and lowered substantially, thereby significantly perturbing the natural frequencies of vibration. For these cases, too, the developed control law provides adequate settling times and rates of vibrational energy dissipation.
NASA Astrophysics Data System (ADS)
O'Shea, Daniel J.; Shenoy, Krishna V.
2018-04-01
Objective. Electrical stimulation is a widely used and effective tool in systems neuroscience, neural prosthetics, and clinical neurostimulation. However, electrical artifacts evoked by stimulation prevent the detection of spiking activity on nearby recording electrodes, which obscures the neural population response evoked by stimulation. We sought to develop a method to clean artifact-corrupted electrode signals recorded on multielectrode arrays in order to recover the underlying neural spiking activity. Approach. We created an algorithm that performs estimation and removal of array artifacts via sequential principal components regression (ERAASR). This approach leverages the similar structure of artifact transients, but not spiking activity, across simultaneously recorded channels on the array, across pulses within a train, and across trials. The ERAASR algorithm requires no special hardware, imposes no requirements on the shape of the artifact or the multielectrode array geometry, and comprises sequential application of straightforward linear methods with intuitive parameters. The approach should be readily applicable to most datasets where stimulation does not saturate the recording amplifier. Main results. The effectiveness of the algorithm is demonstrated in macaque dorsal premotor cortex using acute linear multielectrode array recordings and single electrode stimulation. Large electrical artifacts appeared on all channels during stimulation. After application of ERAASR, the cleaned signals were quiescent on channels with no spontaneous spiking activity, whereas spontaneously active channels exhibited evoked spikes which closely resembled spontaneously occurring spiking waveforms. Significance. We hope that enabling simultaneous electrical stimulation and multielectrode array recording will help elucidate the causal links between neural activity and cognition and facilitate naturalistic sensory prostheses.
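A rough sketch of the core idea (not the published ERAASR code): estimate the artifact as the leading principal components shared across channels and regress it out of each channel. The published method applies this sequentially across channels, pulses, and trials, and excludes the channel being cleaned when building the basis; both refinements are omitted here for brevity.

```python
# Shared-artifact removal via principal components regression (simplified).
import numpy as np

def remove_shared_artifact(X, n_pc=4):
    """X: (samples, channels) array of artifact-corrupted recordings."""
    Xc = X - X.mean(axis=0)
    # Leading PCs across channels capture the stereotyped artifact transient.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    artifact_basis = U[:, :n_pc] * s[:n_pc]            # (samples, n_pc)
    cleaned = np.empty_like(Xc)
    for ch in range(Xc.shape[1]):
        # Least-squares fit of the artifact basis to this channel, subtracted.
        coef, *_ = np.linalg.lstsq(artifact_basis, Xc[:, ch], rcond=None)
        cleaned[:, ch] = Xc[:, ch] - artifact_basis @ coef
    return cleaned
```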
Research of the multimodal brain-tumor segmentation algorithm
NASA Astrophysics Data System (ADS)
Lu, Yisu; Chen, Wufan
2015-12-01
It is well known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initialization of the number of clusters. A new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Beyond the segmentation of single-modal brain tumor images, we extended the algorithm to segment multimodal brain tumor images using multimodal magnetic resonance (MR) features, obtaining the active tumor and the edema at the same time. The proposed algorithm is evaluated and compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance.
Evaluation of SMAP Level 2 Soil Moisture Algorithms Using SMOS Data
NASA Technical Reports Server (NTRS)
Bindlish, Rajat; Jackson, Thomas J.; Zhao, Tianjie; Cosh, Michael; Chan, Steven; O'Neill, Peggy; Njoku, Eni; Colliander, Andreas; Kerr, Yann; Shi, J. C.
2011-01-01
The objectives of the SMAP (Soil Moisture Active Passive) mission are global measurements of soil moisture and land freeze/thaw state at 10 km and 3 km resolution, respectively. SMAP will provide soil moisture with a spatial resolution of 10 km with a 3-day revisit time at an accuracy of 0.04 m3/m3 [1]. In this paper we contribute to the development of the Level 2 soil moisture algorithm that is based on passive microwave observations by exploiting Soil Moisture Ocean Salinity (SMOS) satellite observations and products. SMOS brightness temperatures provide a global real-world, rather than simulated, test input for the SMAP radiometer-only soil moisture algorithm. Output of the potential SMAP algorithms will be compared to both in situ measurements and SMOS soil moisture products. The investigation will result in enhanced SMAP pre-launch algorithms for soil moisture.
Time series analysis of infrared satellite data for detecting thermal anomalies: a hybrid approach
NASA Astrophysics Data System (ADS)
Koeppen, W. C.; Pilger, E.; Wright, R.
2011-07-01
We developed and tested an automated algorithm that analyzes thermal infrared satellite time series data to detect and quantify the excess energy radiated from thermal anomalies such as active volcanoes. Our algorithm enhances the previously developed MODVOLC approach, a simple point operation, by adding a more complex time series component based on the methods of the Robust Satellite Techniques (RST) algorithm. Using test sites at Anatahan and Kīlauea volcanoes, the hybrid time series approach detected ~15% more thermal anomalies than MODVOLC with very few, if any, known false detections. We also tested gas flares in the Cantarell oil field in the Gulf of Mexico as an end-member scenario representing very persistent thermal anomalies. At Cantarell, the hybrid algorithm showed only a slight improvement, but it did identify flares that were undetected by MODVOLC. We estimate that at least 80 MODIS images for each calendar month are required to create the good reference images necessary for the time series analysis of the hybrid algorithm. The improved performance of the new algorithm over MODVOLC will result in the detection of low-temperature thermal anomalies that will be useful in improving our ability to document Earth's volcanic eruptions, as well as in detecting low-temperature thermal precursors to larger eruptions.
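For orientation, the MODVOLC point operation that the hybrid algorithm builds on is a normalized thermal index (NTI) computed from two MODIS radiances and compared against a fixed threshold (about -0.8 in the published MODVOLC work). The sketch below is illustrative; band names and the threshold are taken from the MODVOLC literature, not from this abstract.

```python
# MODVOLC-style hotspot flagging from MODIS band 22 (~3.96 um) and
# band 32 (~12 um) radiances; arrays are per-pixel radiance values.
import numpy as np

def modvolc_flags(rad_b22, rad_b32, threshold=-0.8):
    nti = (rad_b22 - rad_b32) / (rad_b22 + rad_b32)    # normalized thermal index
    return nti > threshold                              # boolean hotspot mask

# The hybrid approach described above adds a time-series test: each pixel is
# also compared against a per-month reference built from many scenes
# (>= ~80 images per calendar month) to flag statistically unusual excursions.
```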
Algorithms of walking and stability for an anthropomorphic robot
NASA Astrophysics Data System (ADS)
Sirazetdinov, R. T.; Devaev, V. M.; Nikitina, D. V.; Fadeev, A. Y.; Kamalov, A. R.
2017-09-01
Autonomous movement of an anthropomorphic robot is considered as a superposition of a set of typical elements of movement, so-called patterns, each of which can be considered an agent of a multi-agent system [1]. To control the AP-601 robot, an information and communication infrastructure has been created that constitutes a multi-agent system, allowing algorithms for individual movement patterns to be developed and run in the system as a set of independently executed, interacting agents. Algorithms for lateral movement of the AP-601 series anthropomorphic robot with active stability, provided by the stability pattern, are presented.
Multimodal Neurodiagnostic Tool for Exploration Missions
NASA Technical Reports Server (NTRS)
Lee, Yong Jin
2015-01-01
Linea Research Corporation has developed a neurodiagnostic tool that detects behavioral stress markers for astronauts on long-duration space missions. Lightweight and compact, the device is unobtrusive and requires minimal time and effort for the crew to use. The system provides real-time functional imaging of cortical activity during normal activities. In Phase I of the project, Linea Research successfully monitored cortical activity using multiparameter sensor modules. Alongside electroencephalography (EEG) and functional near-infrared spectroscopy signals, the company obtained photoplethysmography and electrooculography signals to compute heart rate and the frequency of eye movements. The company also demonstrated the functionality of an algorithm that automatically classifies varying degrees of cognitive loading based on physiological parameters. In Phase II, Linea Research developed the flight-capable neurodiagnostic device. Worn unobtrusively on the head, the device detects and classifies neurophysiological markers associated with decrements in behavior state and cognition. An automated algorithm identifies key decrements and provides meaningful and actionable feedback to the crew and ground-based medical staff.
Spatial and Social Diffusion of Information and Influence: Models and Algorithms
ERIC Educational Resources Information Center
Doo, Myungcheol
2012-01-01
In this dissertation research, we argue that spatial alarms and activity-based social networks are two fundamentally new types of information and influence diffusion channels. Such new channels have the potential of enriching our professional experiences and our personal life quality in many unprecedented ways. First, we develop an activity driven…
Control Algorithms For Liquid-Cooled Garments
NASA Technical Reports Server (NTRS)
Drew, B.; Harner, K.; Hodgson, E.; Homa, J.; Jennings, D.; Yanosy, J.
1988-01-01
Three algorithms developed for control of cooling in protective garments. Metabolic rate inferred from temperatures of cooling liquid at outlet and inlet, suitably filtered to account for thermal lag of human body. Temperature at inlet adjusted to value giving maximum comfort at inferred metabolic rate. Applicable to space suits and to automatic control of cooling in suits worn by workers in radioactive, polluted, or otherwise hazardous environments. More effective than manual control, which is subject to frequent, overcompensated adjustments as level of activity varies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frenkel, G.; Paterson, T.S.; Smith, M.E.
The Institute for Defense Analyses (IDA) has collected and analyzed information on battle management algorithm technology that is relevant to Battle Management/Command, Control and Communications (BM/C3). This Memorandum Report represents a program plan that will provide the BM/C3 Directorate of the Strategic Defense Initiative Organization (SDIO) with administrative and technical insight into algorithm technology. This program plan focuses on current activity in algorithm development and provides information and analysis to the SDIO to be used in formulating budget requirements for FY 1988 and beyond. Based upon analysis of algorithm requirements and ongoing programs, recommendations have been made for research areas that should be pursued, including both the continuation of current work and the initiation of new tasks. This final report includes all relevant material from interim reports as well as new results.
Banda, Jorge A; Haydel, K Farish; Davila, Tania; Desai, Manisha; Bryson, Susan; Haskell, William L; Matheson, Donna; Robinson, Thomas N
2016-01-01
To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA), 268 children aged 7-11 years with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4-7 days. Data were processed and analyzed at epoch lengths of 1, 5, 10, 15, 30, and 60 seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common sets of activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm (p < .0001), but did not vary significantly by epoch length when using the ≥ 20 minute consecutive zero or Choi WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms (all p < .0001). Across all epoch lengths, minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA also varied significantly across all sets of activity cut-points with all three WT algorithms (all p < .0001). The common practice of converting WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy.
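The epoch-length effect is easy to see in code. The following hedged illustration reintegrates 1-s counts to the study's epoch lengths and applies a single naively rescaled cut-point; the counts and the cut-point value are synthetic placeholders, not any of the five published sets.

```python
# Why epoch length matters: the same data classified at different epochs.
import numpy as np

rng = np.random.default_rng(1)
counts_1s = rng.poisson(1.5, size=3600 * 4)            # 4 h of 1-s counts

def reintegrate(counts, epoch_s):
    n = len(counts) // epoch_s * epoch_s
    return counts[:n].reshape(-1, epoch_s).sum(axis=1)

for epoch in (1, 5, 10, 15, 30, 60):
    c = reintegrate(counts_1s, epoch)
    cut = 100 * epoch / 60                             # naively rescaled cut-point
    mvpa_min = (c >= cut).sum() * epoch / 60
    print(f"{epoch:>2}-s epochs: {mvpa_min:6.1f} MVPA minutes")
# The rescaled cut-point yields different MVPA estimates at each epoch length,
# which is exactly the comparability problem the study quantifies.
```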
A New Model for Solving Time-Cost-Quality Trade-Off Problems in Construction
Fu, Fang; Zhang, Tao
2016-01-01
Poor quality affects project makespan and total cost negatively, but it can be recovered by repair works during construction. We construct a new non-linear programming model based on the classic multi-mode resource-constrained project scheduling problem, considering repair works. In order to obtain satisfactory quality without a large increase in project cost, the objective is to minimize the total quality cost, which consists of the prevention cost and the failure cost according to quality-cost analysis. A binary dependent normal distribution function is adopted to describe activity quality; cumulative quality is defined to determine whether to initiate repair works, according to the different relationships among activity qualities, namely the coordinative and precedence relationships. Furthermore, a shuffled frog-leaping algorithm is developed to solve this discrete trade-off problem, based on an adaptive serial schedule generation scheme and an adjusted activity list. In the algorithm, the frog-leaping step combines the crossover operator of a genetic algorithm with a permutation-based local search. Finally, an example of a construction project for a framed railway overpass is provided to examine the algorithm's performance, and the model assists in decision making by searching for the appropriate makespan and quality threshold with minimal cost. PMID:27911939
NASA Astrophysics Data System (ADS)
Mori, Taketoshi; Ishino, Takahito; Noguchi, Hiroshi; Shimosaka, Masamichi; Sato, Tomomasa
2011-06-01
We propose a life pattern estimation method and an anomaly detection method for elderly people living alone. In our observation system for such people, pyroelectric sensors are deployed in the house and the person's activities are measured continuously in order to capture the person's life pattern. The data are transferred successively to the operation center and displayed there to the nurses in a precise way. The nurses then decide whether the data represent an anomaly. In the system, people whose life features resemble each other are categorized into the same group. Anomalies that occurred in the past are shared within the group and utilized in the anomaly detection algorithm. This algorithm is based on an "anomaly score." The anomaly score is computed from the activeness of the person, which is approximately proportional to the frequency of the sensor response in a minute. The score is calculated from the difference between the present activeness and the long-term average of past activeness. Thus, the score is positive if the present activeness is higher than the past average, and negative if it is lower. If the score exceeds a certain threshold, an anomaly event has occurred. Moreover, we developed an activity estimation algorithm that estimates the residents' basic activities such as getting up, going out, and so on. The estimate is shown to the nurses along with the resident's anomaly score. The nurses can understand the residents' health conditions by combining these two pieces of information.
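The paper describes the score only in prose, so the following is an assumed formalization: activeness is the per-minute sensor response frequency, and the score is its deviation from a long-term baseline (here normalized by the historical spread, which is one plausible choice). All values are synthetic.

```python
# Hedged sketch of an "anomaly score" from pyroelectric sensor activeness.
import numpy as np

def anomaly_score(today, history, eps=1e-9):
    """Positive when the person is more active than usual, negative when less.

    today:   (1440,) sensor responses per minute for the current day
    history: (days, 1440) responses per minute for past days
    """
    baseline = history.mean(axis=0)         # long-term average per minute-of-day
    scale = history.std(axis=0) + eps
    return (today - baseline) / scale

history = np.random.poisson(3.0, size=(60, 1440))   # 60 past days
today = np.random.poisson(1.0, size=1440)           # an unusually quiet day
score = anomaly_score(today, history)
alert = np.abs(score).mean() > 1.0                  # threshold is an assumption
```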
Rapid classification of hippocampal replay content for real-time applications
Liu, Daniel F.; Karlsson, Mattias P.; Frank, Loren M.; Eden, Uri T.
2016-01-01
Sharp-wave ripple (SWR) events in the hippocampus replay millisecond-timescale patterns of place cell activity related to the past experience of an animal. Interrupting SWR events leads to learning and memory impairments, but how the specific patterns of place cell spiking seen during SWRs contribute to learning and memory remains unclear. A deeper understanding of this issue will require the ability to manipulate SWR events based on their content. Accurate real-time decoding of SWR replay events requires new algorithms that are able to estimate replay content and the associated uncertainty, along with software and hardware that can execute these algorithms for biological interventions on a millisecond timescale. Here we develop an efficient estimation algorithm to categorize the content of replay from multiunit spiking activity. Specifically, we apply real-time decoding methods to each SWR event and then compute the posterior probability of the replay feature. We illustrate this approach by classifying SWR events from data recorded in the hippocampus of a rat performing a spatial memory task into four categories: whether they represent outbound or inbound trajectories and whether the activity is replayed forward or backward in time. We show that our algorithm can classify the majority of SWR events in a recording epoch within 20 ms of the replay onset with high certainty, which makes the algorithm suitable for a real-time implementation with short latencies to incorporate into content-based feedback experiments. PMID:27535369
Ferentinos, Konstantinos P
2005-09-01
Two neural network (NN) applications in the field of biological engineering are developed, designed and parameterized by an evolutionary method based on genetic algorithms (GAs). The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect or 'weak specification' representation was used for encoding NN topologies and training parameters into the genes of the genetic algorithm (GA). Some a priori knowledge of the demands on network topology for specific application cases is required by this approach, so that the infinite search space of the problem is limited to a reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA encoded the type of activation functions in both the hidden and output nodes of the NN and the type of minimization algorithm used by the backpropagation algorithm for training the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach usually used for these tasks.
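A loose sketch of the encoding idea follows: each gene specifies a topology, activation functions, and a training algorithm rather than the weights themselves. The specific option lists are assumptions for illustration, not the paper's actual search space.

```python
# 'Weak specification' gene for a GA over NN configurations (illustrative).
import random

HIDDEN_SIZES = [2, 4, 8, 16]           # nodes per hidden layer, 1 or 2 layers
ACTIVATIONS = ["logistic", "tanh"]
TRAINERS = ["gradient-descent", "conjugate-gradient", "levenberg-marquardt"]

def random_gene():
    n_layers = random.choice([1, 2])
    return {
        "layers": [random.choice(HIDDEN_SIZES) for _ in range(n_layers)],
        "hidden_act": random.choice(ACTIVATIONS),
        "output_act": random.choice(ACTIVATIONS),
        "trainer": random.choice(TRAINERS),
    }

def mutate(gene):
    g = dict(gene)
    g["layers"] = [random.choice(HIDDEN_SIZES) for _ in g["layers"]]
    return g

# A GA loop would train one network per gene, use validation error as fitness,
# then select, cross over and mutate, replacing manual trial-and-error tuning.
```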
de Araujo Furtado, Marcio; Zheng, Andy; Sedigh-Sarvestani, Madineh; Lumley, Lucille; Lichtenstein, Spencer; Yourick, Debra
2009-10-30
The organophosphorus compound soman is an acetylcholinesterase inhibitor that causes damage to the brain. Exposure to soman causes neuropathology as a result of prolonged and recurrent seizures. In the present study, long-term recordings of cortical EEG were used to develop an unbiased means of quantifying measures of seizure activity in a large data set while excluding other signal types. Rats were implanted with telemetry transmitters and exposed to soman, followed by treatment with therapeutics similar to those administered in the field after nerve agent exposure. EEG, activity and temperature were recorded continuously for a minimum of 2 days pre-exposure and 15 days post-exposure. A set of automatic MATLAB algorithms has been developed to remove artifacts and measure the characteristics of long-term EEG recordings. The algorithms use short-time Fourier transforms to compute the power spectrum of the signal for 2-s intervals. The spectrum is then divided into the delta, theta, alpha, and beta frequency bands. A linear fit to the power spectrum is used to distinguish normal EEG activity from artifacts and high-amplitude spike-wave activity. Changes in time spent in seizure over a prolonged period are a powerful indicator of the effects of novel therapeutics against seizures. A graphical user interface has been created that simultaneously plots the raw EEG in the time domain, the power spectrum, and the wavelet transform. Motor activity and temperature are associated with EEG changes. The accuracy of this algorithm has also been verified against visual inspection of video recordings up to 3 days after exposure.
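The pipeline described (2-s spectra, band powers, linear spectral fit) can be reconstructed in a few lines. The sketch below is a hedged stand-in, not the authors' MATLAB code; the sampling rate and any thresholds are assumptions.

```python
# 2-s EEG window -> band powers + linear fit to the (log) spectrum.
import numpy as np
from scipy.signal import welch

FS = 256                      # Hz; assumed sampling rate
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def segment_features(seg):
    f, pxx = welch(seg, fs=FS, nperseg=len(seg))
    powers = {name: pxx[(f >= lo) & (f < hi)].sum()
              for name, (lo, hi) in BANDS.items()}
    # Linear fit to log-power vs frequency; a large residual flags segments
    # whose spectral shape is not EEG-like (artifact or spike-wave activity).
    mask = (f >= 0.5) & (f <= 30)
    logp = np.log(pxx[mask] + 1e-20)
    slope, intercept = np.polyfit(f[mask], logp, 1)
    residual = np.mean((logp - (slope * f[mask] + intercept)) ** 2)
    return powers, slope, residual

eeg_2s = np.random.randn(2 * FS)          # stand-in for one 2-s EEG window
powers, slope, residual = segment_features(eeg_2s)
```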
NASA Technical Reports Server (NTRS)
Nicholson, Shaun R.
1994-01-01
Improved measurements of precipitation will aid our understanding of the role of latent heating in global circulations. Spaceborne meteorological sensors such as the planned precipitation radar and microwave radiometers on the Tropical Rainfall Measurement Mission (TRMM) provide for the first time a comprehensive means of making these global measurements. Pre-TRMM activities include development of precipitation algorithms using existing satellite data, computer simulations, and measurements from limited aircraft campaigns. Since the TRMM radar will be the first spaceborne precipitation radar, there is limited experience with such measurements, and only recently have airborne radars become available that can attempt to address the limitations of a spaceborne radar. There are many questions regarding how much attenuation occurs in various cloud types and the effect of cloud vertical motions on the estimation of precipitation rates. The EDOP program being developed by NASA GSFC will provide data useful both for testing rain-retrieval algorithms and for assessing the importance of vertical motions in rain measurements. The purpose of this report is to describe the design and development of the real-time embedded parallel algorithms used by EDOP to extract reflectivity and Doppler products (velocity, spectrum width, and signal-to-noise ratio) as a first step toward the aforementioned goals.
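The report summary does not spell out the estimators; a standard way to obtain the three Doppler products named above from complex (I/Q) samples is the pulse-pair method, sketched here under a Gaussian-spectrum assumption. Sign conventions and the noise handling are assumptions.

```python
# Pulse-pair estimates of velocity, spectrum width and SNR at one range gate.
import numpy as np

def pulse_pair(iq, prf, wavelength, noise_power):
    """iq: complex samples from successive pulses at one range gate."""
    r0 = np.mean(np.abs(iq) ** 2)                      # lag-0 power
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))            # lag-1 autocorrelation
    velocity = -wavelength * prf / (4 * np.pi) * np.angle(r1)
    # Spectrum width from the |R1|/R0 ratio (Gaussian spectrum assumption).
    ratio = min(np.abs(r1) / r0, 0.9999)
    width = wavelength * prf / (2 * np.pi * np.sqrt(2)) * np.sqrt(-np.log(ratio))
    snr_db = 10 * np.log10(max(r0 - noise_power, 1e-12) / noise_power)
    return velocity, width, snr_db
```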
Intelligent Medical Systems for Aerospace Emergency Medical Services
NASA Technical Reports Server (NTRS)
Epler, John; Zimmer, Gary
2004-01-01
The purpose of this project is to develop a portable, hands-free device for emergency medical decision support to be used in remote or confined settings by non-physician providers. Phase I of the project will entail the development of a voice-activated device that will utilize an intelligent algorithm to provide guidance in establishing an airway in an emergency situation. The interactive, hands-free software will process requests for assistance based on verbal prompts and algorithmic decision-making. The device will allow the crew medical officer (CMO) to attend to the patient while receiving verbal instruction. The software will also feature graphic representations where these are felt to be helpful in aiding procedures. We will also develop a training program to orient users to the algorithmic approach, the use of the hardware, and specific procedural considerations. We will validate the efficacy of this mode of technology application by testing in the Johns Hopkins Department of Emergency Medicine. Phase I of the project will focus on the validation of the proposed algorithm, testing and validation of the decision-making tool, and modifications of medical equipment. In Phase II, we will produce the first-generation software for hands-free, interactive medical decision making for use in acute care environments.
Deleu, Dirk; Mesraoua, Boulenouar; El Khider, Hisham; Canibano, Beatriz; Melikyan, Gayane; Al Hail, Hassan; Mhjob, Noha; Bhagat, Anjushri; Ibrahim, Faiza; Hanssens, Yolande
2017-03-01
The introduction of disease-modifying therapies (DMTs) - with varying degrees of efficacy for reducing annual relapse rate and disability progression - has considerably transformed the therapeutic landscape of relapsing-remitting multiple sclerosis (RRMS). We aim to develop rational evidence-based treatment recommendations and algorithms for the management of clinically isolated syndrome (CIS) and RRMS that conform to the healthcare system in a fast-developing economic country such as Qatar. We conducted a systematic review using a comprehensive search of MEDLINE, PubMed, and Cochrane Database of Systematic Reviews (1 January 1990 through 30 September 2016). Additional searches of the American Academy of Neurology and European Committee for Treatment and Research in Multiple Sclerosis abstracts from 2012 through 2016 were performed, in addition to searches of the Food and Drug Administration and European Medicines Agency websites to obtain relevant safety information on these DMTs. For each of the DMTs, the mode of action, efficacy, safety and tolerability are briefly discussed. To facilitate the interpretation, the efficacy data of the pivotal phase III trials are expressed by their most clinically useful measure of therapeutic efficacy, the number needed to treat (NNT). In addition, an overview of head-to-head trials in RRMS is provided as well as a summary of the several different RRMS management strategies (lateral switching, escalation, induction, maintenance and combination therapy) and the potential role of each DMT. Finally, algorithms were developed for CIS, active and highly active or rapidly evolving RRMS and subsequent breakthrough disease or suboptimal treatment response while on DMTs. The benefit-to-risk profiles of the DMTs, taking into account patient preference, allowed the provision of rational and safe patient-tailored treatment algorithms. Recommendations and algorithms for the management of CIS and RRMS have been developed relevant to the healthcare system of this fast-developing economic country.
NASA Astrophysics Data System (ADS)
Vickers, H.; Eckerstorfer, M.; Malnes, E.; Larsen, Y.; Hindberg, H.
2016-11-01
Avalanches are a natural hazard that occurs in mountainous regions of Troms County in northern Norway during winter and can cause loss of human life and damage to infrastructure. Knowledge of when and where avalanches occur, especially in remote, high mountain areas, is often lacking due to difficult access. However, complete spatiotemporal avalanche activity data sets are important for accurate avalanche forecasting, as well as for a deeper understanding of the link between avalanche occurrences and the triggering snowpack and meteorological factors. It is therefore desirable to develop a technique that enables active mapping and monitoring of avalanches over an entire winter. Avalanche debris can be observed remotely over large spatial areas, under all weather and light conditions, by synthetic aperture radar (SAR) satellites. The recently launched Sentinel-1A satellite acquires SAR images covering the entire Troms County with frequent updates. Focusing on a case study from New Year 2015, we use Sentinel-1A images to develop an automated avalanche debris detection algorithm that utilizes change detection and unsupervised object classification methods. We compare our results with manually identified avalanche debris and field-based images to quantify the algorithm's accuracy. Our results indicate that a correct detection rate of over 60% can be achieved, although this rate is sensitive to several algorithm parameters that may need revising. With further development and refinement of the algorithm, we believe that this method could play an effective role in future operational monitoring of avalanches within Troms and has potential application in avalanche forecasting areas worldwide.
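A rough sketch of the two stages named above, under assumed thresholds: change detection as a backscatter log-ratio between a reference image and an activity image, followed by grouping of changed pixels into candidate debris objects. The threshold and minimum object size are placeholders, not the study's tuned parameters.

```python
# Log-ratio SAR change detection + connected-component object grouping.
import numpy as np
from scipy import ndimage

def detect_debris(ref_backscatter, new_backscatter, thresh_db=4.0, min_pixels=20):
    # Avalanche debris is rougher than undisturbed snow, so backscatter rises.
    log_ratio_db = 10 * np.log10(new_backscatter / ref_backscatter)
    changed = log_ratio_db > thresh_db
    labels, n = ndimage.label(changed)                  # connected components
    sizes = ndimage.sum(changed, labels, range(1, n + 1))
    keep_labels = 1 + np.flatnonzero(sizes >= min_pixels)
    return np.isin(labels, keep_labels)                 # candidate debris mask
```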
Internet (WWW) based system of ultrasonic image processing tools for remote image analysis.
Zeng, Hong; Fei, Ding-Yu; Fu, Cai-Ting; Kraft, Kenneth A
2003-07-01
Ultrasonic Doppler color imaging can provide anatomic information and simultaneously render flow information within blood vessels for diagnostic purpose. Many researchers are currently developing ultrasound image processing algorithms in order to provide physicians with accurate clinical parameters from the images. Because researchers use a variety of computer languages and work on different computer platforms to implement their algorithms, it is difficult for other researchers and physicians to access those programs. A system has been developed using World Wide Web (WWW) technologies and HTTP communication protocols to publish our ultrasonic Angle Independent Doppler Color Image (AIDCI) processing algorithm and several general measurement tools on the Internet, where authorized researchers and physicians can easily access the program using web browsers to carry out remote analysis of their local ultrasonic images or images provided from the database. In order to overcome potential incompatibility between programs and users' computer platforms, ActiveX technology was used in this project. The technique developed may also be used for other research fields.
Automatic near-real-time detection of CMEs in Mauna Loa K-Cor coronagraph images
NASA Astrophysics Data System (ADS)
Thompson, William T.; St. Cyr, Orville Chris; Burkepile, Joan; Posner, Arik
2017-08-01
A simple algorithm has been developed to detect the onset of coronal mass ejections (CMEs), together with an estimate of their speed, in near-real time using images of the linearly polarized white-light solar corona taken by the K-Cor telescope at the Mauna Loa Solar Observatory (MLSO). The algorithm used is a variation on the Solar Eruptive Event Detection System (SEEDS) developed at George Mason University. The algorithm was tested against K-Cor data taken between 29 April 2014 and 20 February 2017, on days that the MLSO website marked as containing CMEs. This resulted in testing of 139 days' worth of data containing 171 CMEs. The detection rate varied from close to 80% in 2014-2015, when solar activity was high, down to as low as 20-30% in 2017, when activity was low. The difference in effectiveness with solar cycle is attributed to the difference in the relative prevalence of strong CMEs between active and quiet periods. There were also twelve false detections during this time period, leading to an average false detection rate of 8.6% on any given day. However, half of the false detections were clustered into two short periods of a few days each, when special conditions prevailed that increased the false detection rate. The K-Cor data were also compared with major solar energetic particle (SEP) storms during this time period. There were three SEP events detected either at Earth or at one of the two STEREO spacecraft where K-Cor was observing during the relevant time period. The K-Cor CME detection algorithm successfully generated alerts for two of these events, with lead times of 1-3 hours before the SEP onset at 1 AU. The third event was not detected by the automatic algorithm because of the unusually broad width of the CME in position angle.
Advances in Landslide Hazard Forecasting: Evaluation of Global and Regional Modeling Approach
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia B.; Adler, Robert; Hone, Yang; Kumar, Sujay; Peters-Lidard, Christa; Lerner-Lam, Arthur
2010-01-01
A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that exhibit a high potential for landslide activity by combining a calculation of landslide susceptibility with satellite-derived rainfall estimates. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale landslide forecasting efforts, it requires several modifications before it can be fully realized as an operational tool. The evaluation found that landslide forecasting may be more feasible at a regional scale. This study draws upon that work's recommendations to develop a new approach for considering landslide susceptibility and forecasting at the regional scale. The case study uses a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America: Guatemala, Honduras, El Salvador and Nicaragua. A regional susceptibility map is calculated from satellite and surface datasets using a statistical methodology. The susceptibility map is tested with a regional rainfall intensity-duration triggering relationship, and the results are compared to the global algorithm framework for the Hurricane Mitch event. The statistical results suggest that this regional investigation provides one plausible way to approach some of the data and resolution issues identified in the global assessment, providing more realistic landslide forecasts for this case study. Evaluation of landslide hazards for this extreme event helps to identify several potential improvements to the algorithm framework, but also highlights several remaining challenges for algorithm assessment, transferability and performance accuracy. Evaluation challenges include representation errors from comparing susceptibility maps of different spatial resolutions, biases in event-based landslide inventory data, and limited non-landslide event data for more comprehensive evaluation. Additional factors that may improve algorithm performance accuracy include incorporating additional triggering factors such as tectonic activity, anthropogenic impacts and soil moisture into the algorithm calculation. Despite these limitations, the methodology presented in this regional evaluation is both straightforward to calculate and easy to interpret, making results transferable between regions and allowing findings to be placed within an inter-comparison framework. The regional algorithm scenario represents an important step in advancing regional and global-scale landslide hazard assessment and forecasting.
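The structure described, a susceptibility map gated by a rainfall intensity-duration (I-D) trigger, can be illustrated compactly. The coefficients below follow Caine's classic global I-D curve (I = 14.82 * D^-0.39, in mm/h and hours); the regional study would fit its own values, so treat these as placeholders.

```python
# Susceptibility map + rainfall intensity-duration trigger (illustrative).
import numpy as np

def rainfall_trigger(intensity_mm_h, duration_h, alpha=14.82, beta=0.39):
    return intensity_mm_h > alpha * duration_h ** (-beta)

def landslide_nowcast(susceptibility, intensity_mm_h, duration_h, s_thresh=0.7):
    triggered = rainfall_trigger(intensity_mm_h, duration_h)
    return (susceptibility >= s_thresh) & triggered    # per-pixel forecast

suscept = np.random.rand(100, 100)        # stand-in susceptibility grid [0, 1]
rain = np.full((100, 100), 12.0)          # mm/h sustained over a 6-h storm
alert_map = landslide_nowcast(suscept, rain, duration_h=6.0)
```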
NASA Astrophysics Data System (ADS)
He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan
2017-07-01
While the popular thin-layer scanning technology of spiral CT has helped to improve diagnoses of lung diseases, the large volumes of scanning images produced by the technology also dramatically increase the lesion-detection workload of physicians. Computer-aided diagnosis techniques such as lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model, the geodesic active contour model based on similarity (GACBS). Combining a spectral clustering algorithm based on the Nystrom approximation (SCN) with GACBS, the algorithm first extracts key image slices, then uses these slices to generate initial contours of the pulmonary parenchyma for the un-segmented slices with an interpolation algorithm, and finally segments the lung parenchyma in the un-segmented slices. Experimental results show that the segmentation results generated by our method are close to those produced by manual segmentation, with an average volume overlap ratio of 91.48%.
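This is not the paper's GACBS implementation, but a generic geodesic active contour step is available in scikit-image and shows the contour-evolution stage that GACBS extends with similarity-based initialization between slices. Parameter values are assumptions.

```python
# Generic geodesic active contour on one CT slice (scikit-image sketch).
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def segment_slice(ct_slice, init_mask, iterations=150):
    img = gaussian(ct_slice.astype(float), sigma=1.0)
    gimg = inverse_gaussian_gradient(img)      # edges map to small values
    return morphological_geodesic_active_contour(
        gimg, iterations, init_level_set=init_mask,
        smoothing=2, balloon=1, threshold=0.7)

# In the scheme described above, the contour segmented on a key slice would be
# propagated via interpolation as init_mask for the neighboring slices.
```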
Fang, Hongqing; He, Lei; Si, Hao; Liu, Peng; Xie, Xiaolei
2014-09-01
In this paper, the back-propagation (BP) algorithm is used to train a feed-forward neural network for human activity recognition in smart home environments, and an inter-class distance method for feature selection from observed motion sensor events is discussed and tested. The human activity recognition performance of the neural network trained with the BP algorithm is then evaluated and compared with that of other probabilistic algorithms: the Naïve Bayes (NB) classifier and the Hidden Markov Model (HMM). The results show that different feature datasets yield different activity recognition accuracy. Selecting unsuitable feature datasets increases computational complexity and degrades activity recognition accuracy. Furthermore, the neural network trained with the BP algorithm has relatively better human activity recognition performance than the NB classifier and the HMM.
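A small stand-in for the BP-trained network follows (scikit-learn's MLPClassifier trains by back-propagation); the sensor-event features and labels are synthetic placeholders for the smart-home data, so the accuracy printed is near chance and only the pipeline is meaningful.

```python
# Feed-forward network trained by back-propagation for activity classes.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 20))                 # e.g., per-window sensor-event counts
y = rng.integers(0, 5, size=1000)          # five activity classes (synthetic)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(Xtr, ytr)
print("accuracy:", net.score(Xte, yte))
# An inter-class distance selector would rank the 20 features by how far apart
# the class means are relative to within-class spread, keeping the top few.
```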
Bhatt-Mehta, Varsha; MacArthur, Robert B.; Löbenberg, Raimar; Cies, Jeffrey J.; Cernak, Ibolja; Parrish, Richard H.
2015-01-01
The lack of commercially available pediatric drug products and dosage forms is well known. A group of clinicians and scientists with a common interest in pediatric drug development and medicines-use systems developed a practical framework for identifying a list of active pharmaceutical ingredients (APIs) with the greatest market potential for development for use in pediatric patients. Reliable and reproducible evidence-based drug formulations designed for use in pediatric patients are vitally needed; otherwise, safe and consistent clinical practices and outcomes assessments will continue to be difficult to ascertain. Identification of a prioritized list of candidate APIs for oral formulation using the described algorithm provides a broader integrated clinical, scientific, regulatory, and market basis to allow for more reliable dosage forms and safer, more effective medicines use in children of all ages. Group members derived a list of candidate API molecules by factoring a number of pharmacotherapeutic, scientific, manufacturing, and regulatory variables into the selection algorithm that were absent from other rubrics. These additions will assist in identifying and categorizing prime API candidates suitable for oral formulation development. Moreover, the developed algorithm aids in prioritizing useful APIs with finished oral liquid dosage forms available from other countries with direct importation opportunities to North America and beyond. PMID:28975916
YANA – a software tool for analyzing flux modes, gene-expression and enzyme activities
Schwarz, Roland; Musch, Patrick; von Kamp, Axel; Engels, Bernd; Schirmer, Heiner; Schuster, Stefan; Dandekar, Thomas
2005-01-01
Background A number of algorithms for steady-state analysis of metabolic networks have been developed over the years. Of these, Elementary Mode Analysis (EMA) has proven especially useful. Despite its low user-friendliness, METATOOL, as a reliable high-performance implementation of the algorithm, has been the instrument of choice up to now. As reported here, the analysis of metabolic networks has been improved by an editor and analyzer of metabolic flux modes. Analysis routines for expression levels and for the most central, well-connected metabolites and their metabolic connections are of particular interest. Results YANA features a platform-independent, dedicated toolbox for metabolic networks with a graphical user interface to calculate (integrating METATOOL), edit (including support for the SBML format), visualize, centralize, and compare elementary flux modes. Further, YANA calculates expected flux distributions for a given Elementary Mode (EM) activity pattern and vice versa. Moreover, a dissection algorithm, a centralization algorithm, and an average diameter routine can be used to simplify and analyze complex networks. Proteomics or gene expression data give a rough indication of some individual enzyme activities, whereas the complete flux distribution in the network is often not known. As such data are noisy, YANA features a fast evolutionary algorithm (EA) for the prediction of EM activities with minimum error, including alerts for inconsistent experimental data. We offer the possibility of including further known constraints (e.g. growth constraints) in the EA calculation process. The redox metabolism around glutathione reductase serves as an illustrative example. All software and documentation are available for download at . Conclusion A graphical toolbox and an editor for METATOOL, as well as a series of additional routines for metabolic network analyses, constitute a new user-friendly software package for such efforts. PMID:15929789
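The flux-reconstruction relationship described above ("flux distributions for a given EM activity pattern and vice versa") has a simple linear form: for an elementary-mode matrix E (columns are modes) and nonnegative activity vector a, the network flux is v = E a, and recovering activities from fluxes is a nonnegative least-squares problem. The matrix below is a toy example, not a real network or YANA's EA-based solver.

```python
# Fluxes from EM activities and back (nonnegative least squares).
import numpy as np
from scipy.optimize import nnls

E = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.]])        # reactions x elementary modes (toy values)
a_true = np.array([2.0, 1.0, 0.5])
v = E @ a_true                      # expected flux distribution

a_est, residual = nnls(E, v)        # EM activities recovered from fluxes
```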
Investigation of the application of remote sensing technology to environmental monitoring
NASA Technical Reports Server (NTRS)
Rader, M. L. (Principal Investigator)
1980-01-01
Activities and results are reported for a project to investigate the application of remote sensing technology developed for the LACIE, AgRISTARS, Forestry and other NASA remote sensing projects to the environmental monitoring of strip mining, industrial pollution, and acid rain. Following a remote sensing workshop for EPA personnel, the EOD clustering algorithm CLASSY was selected for evaluation by EPA as a possible candidate technology. LANDSAT data acquired for a North Dakota test site were clustered in order to compare CLASSY with other algorithms.
Li, Dong; Pan, Zhisong; Hu, Guyu; Zhu, Zexuan; He, Shan
2017-03-14
Active modules are connected regions in a biological network that show significant changes in expression under particular conditions. The identification of such modules is important since it may reveal the regulatory and signaling mechanisms associated with a given cellular response. In this paper, we propose a novel active module identification algorithm based on a memetic algorithm, with a novel encoding/decoding scheme to ensure the connectedness of the identified active modules. Based on this scheme, we also design and incorporate a local search operator into the memetic algorithm to improve its performance. The effectiveness of the proposed algorithm is validated on both small and large protein interaction networks.
Cornejo-Aragón, Luz G; Santos-Cuevas, Clara L; Ocampo-García, Blanca E; Chairez-Oria, Isaac; Diaz-Nieto, Lorenza; García-Quiroz, Janice
2017-01-01
The aim of this study was to develop a semi-automatic image processing algorithm (AIPA) based on the simultaneous information provided by X-ray and radioisotopic images to determine the biokinetic models of Tc-99m radiopharmaceuticals from quantification of image radiation activity in murine models. The radioisotopic images were obtained by a CCD (charge-coupled device) camera coupled to an ultrathin phosphorous screen in a preclinical multimodal imaging system (Xtreme, Bruker). The AIPA consists of different image processing methods for background, scattering and attenuation correction in the activity quantification. A set of parametric identification algorithms was used to obtain the biokinetic models that characterize the interaction between different tissues and the radiopharmaceuticals considered in the study. The set of biokinetic models corresponded to the Tc-99m biodistribution observed in different ex vivo studies, confirming the contribution of the semi-automatic image processing technique developed in this study.
Bellmunt, Joaquim; Calvo, Emiliano; Castellano, Daniel; Climent, Miguel Angel; Esteban, Emilio; García del Muro, Xavier; González-Larriba, José Luis; Maroto, Pablo; Trigo, José Manuel
2009-03-01
For almost two decades, interleukin-2 and interferon-alpha were the only systemic treatment options available for metastatic renal cell carcinoma. In recent years, however, five new targeted therapies, namely sunitinib, sorafenib, temsirolimus, everolimus and bevacizumab, have demonstrated clinical activity in these patients. With the availability of new targeted agents that are active in this disease, there is a need to continuously update the treatment algorithm for the disease. Given the important advances obtained, the Spanish Oncology Genitourinary Group (SOGUG) considered it useful to review the current status of the disease, including the genetic and molecular biology factors involved, current predictive models for the development of metastases, and the role of surgery, radiotherapy and systemic therapies in the early or late management of the disease. Based on this work, a treatment algorithm was developed.
Incremental update of electrostatic interactions in adaptively restrained particle simulations.
Edorh, Semeho Prince A; Redon, Stéphane
2018-04-06
The computation of long-range potentials is one of the most demanding tasks in molecular dynamics. During the last decades, an inventive panoply of methods was developed to reduce the CPU time of this task. In this work, we propose a fast method dedicated to the computation of the electrostatic potential in adaptively restrained systems. We exploit the fact that, in such systems, only some particles are allowed to move at each timestep. We developed an incremental algorithm derived from a multigrid-based alternative to traditional Fourier-based methods. Our algorithm was implemented inside LAMMPS, a popular molecular dynamics simulation package, and evaluated on different systems. We showed that the new algorithm's computational complexity scales with the number of active particles in the simulated system, and that it is able to outperform the well-established particle-particle particle-mesh (P3M) method for adaptively restrained simulations.
NASA Technical Reports Server (NTRS)
Goodman, Steven J.; Blakeslee, R. J.; Koshak, W.; Petersen, W.; Buechler, D. E.; Krehbiel, P. R.; Gatlin, P.; Zubrick, S.
2008-01-01
The Geostationary Lightning Mapper (GLM) is a single-channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full disk as part of a 3-axis stabilized, geostationary weather satellite system. The next-generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series, with a planned launch in 2014, will carry a GLM that will provide continuous day and night observations of lightning from the west coast of Africa (GOES-E) to New Zealand (GOES-W) when the constellation is fully operational. The mission objectives for the GLM are to 1) provide continuous, full-disk lightning measurements for storm warning and nowcasting, 2) provide early warning of tornadic activity, and 3) accumulate a long-term database to track decadal changes in lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13-year data record of global lightning activity. Instrument formulation studies were completed in March 2007, and the implementation phase to develop a prototype model and up to four flight models is expected to be underway in the latter part of 2007. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 ground processing algorithms and applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama and the Washington DC Metropolitan area) are being used to develop the pre-launch algorithms and applications.
Activation Product Inverse Calculations with NDI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, Mark Girard
NDI-based forward calculations of activation product concentrations can be used systematically to infer structural element concentrations from measured activation product concentrations with an iterative algorithm. The algorithm converges exactly for the basic production-depletion chain with explicit activation product production, and approximately, in the least-squares sense, for the full production-depletion chain with explicit activation product production and no sub production-depletion chains. The algorithm is suitable for automation.
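A minimal sketch of the least-squares flavor of such an inversion, assuming the forward calculation is locally linear so that measured activities relate to element concentrations through a production matrix K (values made up for illustration):

```python
# Schematic of a least-squares inversion: if forward calculations are (locally)
# linear, measured activation-product activities y relate to element
# concentrations c through a production matrix K, and c can be recovered by
# nonnegative least squares. Matrix values are invented for illustration.
import numpy as np
from scipy.optimize import nnls

K = np.array([[2.0, 0.1, 0.0],     # activity of product j per unit element i
              [0.3, 1.5, 0.2],
              [0.0, 0.4, 1.1]])
c_true = np.array([0.8, 0.05, 0.3])
y_meas = K @ c_true

c_est, residual = nnls(K, y_meas)
print(c_est, residual)   # recovers c_true exactly in this noise-free case
```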
Robotic Lunar Lander Development Project Status
NASA Technical Reports Server (NTRS)
Hammond, Monica; Bassler, Julie; Morse, Brian
2010-01-01
This slide presentation reviews the status of the development of a robotic lunar lander. The goal of the project is to perform engineering tests and risk reduction activities to support the development of a small lunar lander for lunar surface science. This includes: (1) risk reduction for the flight of the robotic lander (i.e., testing and analyzing various phases of the project); (2) incremental development of the design of the robotic lander, which is to demonstrate autonomous, controlled descent and landing on airless bodies, and design of a thruster configuration for one-sixth of Earth's gravity; (3) cold-gas test article flight demonstration testing; (4) warm-gas testing of the robotic lander design; (5) development and testing of landing algorithms; (6) validation of the algorithms through analysis and test; and (7) tests of the flight propulsion system.
Physics Based Model for Cryogenic Chilldown and Loading. Part I: Algorithm
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Smelyanskiy, Vadim N.; Brown, Barbara
2014-01-01
We report progress in the development of a physics-based model for cryogenic chilldown and loading. Chilldown and loading are modeled as fully separated, non-equilibrium two-phase flow of cryogenic fluid thermally coupled to the pipe walls. The solution closely follows the nearly-implicit and semi-implicit algorithms developed by Idaho National Laboratory for autonomous control of thermal-hydraulic systems. Special attention is paid to the treatment of instabilities. The model is applied to the analysis of chilldown in the rapid loading system developed at NASA Kennedy Space Center. A nontrivial characteristic feature of the analyzed chilldown regime is its active control by dump valves. The numerical predictions are in reasonable agreement with the experimental time traces. The obtained results pave the way to the development of autonomous loading operations on the ground and in space.
Automated Lead Optimization of MMP-12 Inhibitors Using a Genetic Algorithm.
Pickett, Stephen D; Green, Darren V S; Hunt, David L; Pardoe, David A; Hughes, Ian
2011-01-13
Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure-activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods.
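The following toy genetic algorithm illustrates the kind of loop described: it optimizes over a two-dimensional combinatorial library, tolerates missing assay results (synthesis failures) by re-suggesting those compounds, and carries an elite set across cycles. The fitness surface and parameters are synthetic, not the MMP-12 data.

```python
# A toy genetic algorithm over a 2-D combinatorial library (monomer-A index x
# monomer-B index) that tolerates missing assay results, loosely mirroring the
# workflow described above; the fitness landscape is synthetic.
import random
random.seed(1)

N_A, N_B = 20, 20
def assay(a, b):
    if random.random() < 0.1:                  # 10% synthesis failures -> missing data
        return None
    return -((a - 14) ** 2 + (b - 6) ** 2)     # unknown "activity" surface

pop = [(random.randrange(N_A), random.randrange(N_B)) for _ in range(12)]
results = {}
for cycle in range(10):
    for cand in pop:
        if cand not in results or results[cand] is None:   # retest missing data
            results[cand] = assay(*cand)
    scored = sorted((v, k) for k, v in results.items() if v is not None)
    elite = [k for _, k in scored[-4:]]                    # keep the best 4
    pop = [(random.choice(elite)[0], random.choice(elite)[1]) for _ in range(8)]
    pop = [(min(N_A - 1, max(0, a + random.randint(-2, 2))),   # mutate
            min(N_B - 1, max(0, b + random.randint(-2, 2)))) for a, b in pop]
    pop += elite
print("best compound found:", max((v, k) for k, v in results.items() if v is not None))
```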
NEOPROP: A NEO Propagator for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Zuccarelli, Valentino; Bancelin, David; Weikert, Sven; Thuillot, William; Hestroffer, Daniel; Yabar Valle, Celia; Koschny, Detlef
2013-09-01
The overall aim of the Space Situational Awareness (SSA) Preparatory Programme is to support the European independent utilisation of and access to space for research or services, through providing timely and quality data, information, services and knowledge regarding the environment, the threats and the sustainable exploitation of the outer space surrounding our planet Earth. The SSA system will comprise three main segments:
• Space Weather (SWE) monitoring and forecast
• Near-Earth Objects (NEO) survey and follow-up
• Space Surveillance and Tracking (SST) of man-made space objects
Currently, there are over 600,000 asteroids known in our Solar System, of which more than 9,500 are NEOs. These could potentially hit our planet and, depending on their size, could produce considerable damage. For this reason NEOs deserve active detection and tracking efforts. The role of the SSA programme is to provide warning services against potential asteroid impact hazards, including discovery, identification, orbit prediction and civil alert capabilities. ESA is now working to develop a NEO Coordination Centre which will later evolve into an SSA-NEO Small Bodies Data Centre (SBDC), located at ESA/ESRIN, Italy. The software prototype developed in the frame of this activity may later be implemented as part of the SSA-NEO programme simulators aimed at assessing the trajectories of asteroids. Different algorithms already exist to predict orbits for NEOs. The objective of this activity was to provide a different trajectory prediction algorithm, allowing an independent validation of the current algorithms within the SSA-NEO segment (e.g., NEODyS, the JPL Sentry System). The key objective was to design, develop, test, verify, and validate a trajectory prediction algorithm for NEOs able to compute the minimum orbital intersection distances (MOIDs) both analytically and numerically. The NEOPROP software consists of two separate modules/tools:
1. The Analytical Module makes use of analytical algorithms in order to rapidly assess the impact risk of a NEO and is responsible for the preliminary analysis. Orbit determination algorithms, such as the Gauss and Linear Least Squares (LLS) methods, determine the initial state (from MPC observations), along with its uncertainty, and the MOID of the NEO (analytically).
2. The Numerical Module makes use of numerical algorithms in order to refine and better assess the impact probabilities. The initial state provided by the orbit determination process is used to numerically propagate the trajectory. The numerical propagation can be run in two modes: a faster one ("fast analysis"), to obtain a quick evaluation of the trajectory, and a more precise one ("complete analysis") taking into consideration more detailed perturbation models. Moreover, a configurable number of Virtual Asteroids (VAs) can be numerically propagated in order to determine the Earth closest approach. This numerical "MOID" computation differs from the analytical one in that it takes into consideration the full dynamics of the problem.
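The numerical "MOID" idea can be illustrated with a brute-force check: sample positions along two Keplerian ellipses and take the minimum pairwise distance. Real MOID algorithms are far more refined; the orbital elements below are illustrative, not actual NEO data.

```python
# Brute-force approximation of a MOID: sample positions along two Keplerian
# ellipses (elements in AU/radians, two-body only) and take the minimum
# pairwise distance. Elements below are illustrative, not real NEO data.
import numpy as np

def orbit_points(a, e, i, raan, argp, n=720):
    nu = np.linspace(0.0, 2 * np.pi, n, endpoint=False)   # true anomaly samples
    r = a * (1 - e**2) / (1 + e * np.cos(nu))
    x, y = r * np.cos(nu), r * np.sin(nu)                 # perifocal frame
    cO, sO = np.cos(raan), np.sin(raan)
    ci, si = np.cos(i), np.sin(i)
    cw, sw = np.cos(argp), np.sin(argp)
    R = np.array([[cO*cw - sO*sw*ci, -cO*sw - sO*cw*ci],  # perifocal -> inertial
                  [sO*cw + cO*sw*ci, -sO*sw + cO*cw*ci],
                  [sw*si,             cw*si]])
    return (R @ np.vstack([x, y])).T                      # N x 3 positions

earth = orbit_points(1.000, 0.017, 0.0, 0.0, 1.80)
neo   = orbit_points(1.458, 0.223, np.radians(10.8), np.radians(304.3), np.radians(178.8))
d = np.linalg.norm(earth[:, None, :] - neo[None, :, :], axis=2)
print("approximate MOID: %.4f AU" % d.min())
```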
Assessing the quality of activities in a smart environment.
Cook, Diane J; Schmitter-Edgecombe, M
2009-01-01
Pervasive computing technology can provide valuable health monitoring and assistance technology to help individuals live independent lives in their own homes. As a critical part of this technology, our objective is to design software algorithms that recognize and assess the consistency of activities of daily living that individuals perform in their own homes. We have designed algorithms that automatically learn Markov models for each class of activity. These models are used to recognize activities that are performed in a smart home and to identify errors and inconsistencies in the performed activity. We validate our approach using data collected from 60 volunteers who performed a series of activities in our smart apartment testbed. The results indicate that the algorithms correctly label the activities and successfully assess the completeness and consistency of the performed task. Our results indicate that activity recognition and assessment can be automated using machine learning algorithms and smart home technology. These algorithms will be useful for automating remote health monitoring and interventions.
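A minimal version of the Markov-model approach might learn one first-order transition matrix per activity class from discretized sensor-event sequences and label a new sequence by maximum log-likelihood; the sketch below uses synthetic data, not the smart-apartment dataset.

```python
# Illustrative take on the Markov-model approach: learn one first-order
# transition matrix per activity class, then classify a new sequence by
# maximum log-likelihood. Sensor-event sequences here are synthetic.
import numpy as np

def fit_markov(seqs, n_states):
    T = np.ones((n_states, n_states))            # Laplace smoothing
    for s in seqs:
        for a, b in zip(s[:-1], s[1:]):
            T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

def loglik(seq, T):
    return sum(np.log(T[a, b]) for a, b in zip(seq[:-1], seq[1:]))

train = {"cooking":  [[0, 1, 1, 2, 1, 2, 2], [0, 1, 2, 2, 1, 1]],
         "sleeping": [[3, 3, 3, 3, 0, 3, 3], [3, 3, 0, 3, 3]]}
models = {name: fit_markov(seqs, 4) for name, seqs in train.items()}

test = [0, 1, 2, 1, 2, 2]
print(max(models, key=lambda name: loglik(test, models[name])))   # -> cooking
```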
The center for causal discovery of biomedical knowledge from big data.
Cooper, Gregory F; Bahar, Ivet; Becich, Michael J; Benos, Panayiotis V; Berg, Jeremy; Espino, Jeremy U; Glymour, Clark; Jacobson, Rebecca Crowley; Kienholz, Michelle; Lee, Adrian V; Lu, Xinghua; Scheines, Richard
2015-11-01
The Big Data to Knowledge (BD2K) Center for Causal Discovery is developing and disseminating an integrated set of open source tools that support causal modeling and discovery of biomedical knowledge from large and complex biomedical datasets. The Center integrates teams of biomedical and data scientists focused on the refinement of existing and the development of new constraint-based and Bayesian algorithms based on causal Bayesian networks, the optimization of software for efficient operation in a supercomputing environment, and the testing of algorithms and software developed using real data from 3 representative driving biomedical projects: cancer driver mutations, lung disease, and the functional connectome of the human brain. Associated training activities provide both biomedical and data scientists with the knowledge and skills needed to apply and extend these tools. Collaborative activities with the BD2K Consortium further advance causal discovery tools and integrate tools and resources developed by other centers.
NASA Astrophysics Data System (ADS)
Liu, Peipei; Yang, Suyoung; Lim, Hyung Jin; Park, Hyung Chul; Ko, In Chang; Sohn, Hoon
2014-03-01
Fatigue cracking is one of the main culprits in the failure of metallic structures. Recently, it has been shown that nonlinear wave modulation spectroscopy (NWMS) is effective in detecting the nonlinear mechanisms produced by fatigue cracks. In this study, an active wireless sensor node for fatigue crack detection is developed based on NWMS. Using PZT transducers attached to a target structure, ultrasonic waves at two distinct frequencies are generated, and their modulation due to fatigue crack formation is detected using another PZT transducer. Furthermore, a reference-free NWMS algorithm is developed so that fatigue cracks can be detected without relying on historical data of the structure and with minimal parameter adjustment by the end users. The algorithm is embedded into an FPGA, and the diagnosis is transmitted to a base station using a commercial wireless communication system. The entire sensor node is designed around a low-power operating strategy. Finally, experimental verification has been performed using aluminum plate specimens to show the feasibility of the developed active wireless NWMS sensor node.
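The NWMS principle is easy to demonstrate numerically: a crack-like nonlinearity mixes a low-frequency pump f1 with a high-frequency probe f2 and produces sidebands at f2 ± f1, whose energy relative to the carrier can serve as a damage index. The parameters below are illustrative, not the sensor node's actual settings.

```python
# Sketch of the NWMS principle: a quadratic nonlinearity (standing in for a
# breathing crack) mixes pump f1 and probe f2, producing sidebands at f2 +/- f1;
# a damage index is formed from sideband-to-carrier amplitude.
import numpy as np

fs, f1, f2 = 1_000_000, 20_000, 200_000          # Hz; illustrative values
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + 0.05 * x**2                              # the "crack" nonlinearity

spec = np.abs(np.fft.rfft(y * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
def amp(f):                                      # spectral amplitude nearest f
    return spec[np.argmin(np.abs(freqs - f))]

damage_index = (amp(f2 - f1) + amp(f2 + f1)) / amp(f2)
print("sideband damage index: %.4f" % damage_index)
```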
Blob-level active-passive data fusion for Benthic classification
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Kalluri, Hemanth; Mathur, Abhinav; Ramnath, Vinod; Kim, Minsu; Aitken, Jennifer; Tuell, Grady
2012-06-01
We extend data fusion from the pixel level to the more semantically meaningful blob level, using the mean-shift algorithm to form labeled blobs having high similarity in the feature domain and connectivity in the spatial domain. We have also developed Bhattacharyya Distance (BD) and rule-based classifiers, and have implemented these higher-level data fusion algorithms in the CZMIL Data Processing System. Applying these new algorithms to recent SHOALS and CASI data at Plymouth Harbor, Massachusetts, we achieved improved benthic classification accuracies over those produced with either single-sensor or pixel-level fusion strategies. These results appear to validate the hypothesis that classification accuracy may be generally improved by adopting higher spatial and semantic levels of fusion.
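For reference, a common Gaussian form of the Bhattacharyya distance between two class distributions in feature space is sketched below; this is the kind of measure a BD classifier could use to rank blob labels, not the CZMIL implementation itself.

```python
# Gaussian Bhattacharyya distance between two class distributions; feature
# means/covariances below are invented stand-ins for benthic classes.
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

mu_sand,  cov_sand  = np.array([0.35, 0.60]), np.array([[0.010, 0.002], [0.002, 0.008]])
mu_grass, cov_grass = np.array([0.20, 0.45]), np.array([[0.012, 0.001], [0.001, 0.009]])
print("BD(sand, grass) = %.3f" % bhattacharyya(mu_sand, cov_sand, mu_grass, cov_grass))
```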
Using Machine Learning for Advanced Anomaly Detection and Classification
NASA Astrophysics Data System (ADS)
Lane, B.; Poole, M.; Camp, M.; Murray-Krezan, J.
2016-09-01
Machine Learning (ML) techniques have successfully been used in a wide variety of applications to automatically detect and potentially classify changes in activity, or a series of activities, by utilizing large amounts of data, sometimes even seemingly unrelated data. The amount of data being collected, processed, and stored in the Space Situational Awareness (SSA) domain has grown at an exponential rate and is now better suited for ML. This paper describes the development of advanced algorithms to deliver significant improvements in the characterization of deep space objects and indication and warning (I&W) using a global network of telescopes that are collecting photometric data on a multitude of space-based objects. The Phase II Air Force Research Laboratory (AFRL) Small Business Innovative Research (SBIR) project Autonomous Characterization Algorithms for Change Detection and Characterization (ACDC), contracted to ExoAnalytic Solutions Inc., is providing the ability to detect and identify photometric signature changes due to potential space object changes (e.g., stability, tumble rate, aspect ratio), and to correlate observed changes to potential behavioral changes using a variety of techniques, including supervised learning. Furthermore, these algorithms run in real time on data being collected and processed by the ExoAnalytic Space Operations Center (EspOC), providing timely alerts and warnings while dynamically creating collection requirements for the EspOC for the algorithms that generate higher-fidelity I&W. This paper discusses the recently implemented ACDC algorithms, including the general design approach and results to date. The usage of supervised algorithms, such as Support Vector Machines, Neural Networks, and k-Nearest Neighbors, and unsupervised algorithms, for example k-means, Principal Component Analysis, and Hierarchical Clustering, and the implementations of these algorithms are explored. Results of applying these algorithms to EspOC data, both in an off-line "pattern of life" analysis and on-line in real time (that is, as data are collected), are presented. Finally, future work in applying ML for SSA is discussed.
Detection of algorithmic trading
NASA Astrophysics Data System (ADS)
Bogoev, Dimitar; Karam, Arzé
2017-10-01
We develop a new approach to capture the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. The quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over extremely short periods of time, whereas the price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
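One plausible reading of the quote volatility ratio, sketched on synthetic quotes below, counts direction reversals of the best ask over a short rolling window relative to the number of quote updates; the paper's exact definition may differ.

```python
# Hedged sketch of a quote-volatility measure: fraction of quote updates in a
# short rolling window that reverse the previous direction of the best ask.
# Synthetic quote streams; not the paper's exact ratio definition.
import numpy as np

rng = np.random.default_rng(42)
algo_like = 100 + 0.01 * np.tile([1, -1], 500)            # rapid oscillation
human_like = 100 + np.cumsum(rng.choice([0, 0, 0.01, -0.01], size=1000))

def quote_volatility_ratio(best_ask, window=50):
    moves = np.sign(np.diff(best_ask))
    reversals = (moves[1:] * moves[:-1] < 0).astype(float)
    return np.convolve(reversals, np.ones(window) / window, mode="valid").max()

print("algo-like  QVR: %.2f" % quote_volatility_ratio(algo_like))   # near 1
print("human-like QVR: %.2f" % quote_volatility_ratio(human_like))  # much lower
```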
Murai, Akihiko; Kurosaki, Kosuke; Yamane, Katsu; Nakamura, Yoshihiko
2010-12-01
In this paper, we present a system that estimates and visualizes muscle tensions in real time using optical motion capture and electromyography (EMG). The system overlays a rendered musculoskeletal human model on top of a live video image of the subject. The subject therefore has the impression that he or she sees the muscles, with tension information, through the cloth and skin. The main technical challenge lies in the real-time estimation of muscle tension. Since existing algorithms that use mathematical optimization to distribute joint torques to muscle tensions are too slow for our purpose, we develop a new algorithm that computes a reasonable approximation of muscle tensions based on the internal connections between muscles known as neuronal binding. The algorithm can estimate the tensions of 274 muscles in only 16 ms, and the whole visualization system runs at about 15 fps. The developed system is applied to assisting sport training, and user case studies show its usefulness. Possible applications include interfaces for assisting rehabilitation.
An improved affine projection algorithm for active noise cancellation
NASA Astrophysics Data System (ADS)
Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo
2017-08-01
The affine projection algorithm (APA) is a signal-reuse algorithm with a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect its performance: the step size and the projection order. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA), which dynamically adjusts the step size according to certain rules so as to achieve a smaller steady-state error and faster convergence. Simulation results show that its performance is superior to that of the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm obtains very good results.
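A minimal APA update for system identification is sketched below, with a crude variable step size standing in for the paper's VSS rule (which is not reproduced here):

```python
# Minimal affine projection algorithm (APA) for system identification; the
# step-size rule is a toy stand-in, not the paper's actual VSS-APA rule.
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.6, -0.3, 0.1])                   # unknown plant
N, L, P, delta = 2000, 3, 4, 1e-3                     # samples, filter length, projection order
x = rng.normal(size=N)
d = np.convolve(x, h_true)[:N] + 0.01 * rng.normal(size=N)

w = np.zeros(L)
for n in range(L + P, N):
    # X: the P most recent input vectors (each of length L), newest first
    X = np.array([x[n - p - L + 1:n - p + 1][::-1] for p in range(P)])
    e = d[n - P + 1:n + 1][::-1] - X @ w
    mu = min(1.0, 0.1 + np.linalg.norm(e))            # toy variable step size
    w += mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(P), e)

print("estimated plant:", np.round(w, 3))             # close to h_true
```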
Schultz, Simon R; Copeland, Caroline S; Foust, Amanda J; Quicke, Peter; Schuck, Renaud
2017-01-01
Recent years have seen substantial developments in technology for imaging neural circuits, raising the prospect of large scale imaging studies of neural populations involved in information processing, with the potential to lead to step changes in our understanding of brain function and dysfunction. In this article we will review some key recent advances: improved fluorophores for single cell resolution functional neuroimaging using a two photon microscope; improved approaches to the problem of scanning active circuits; and the prospect of scanless microscopes which overcome some of the bandwidth limitations of current imaging techniques. These advances in technology for experimental neuroscience have in themselves led to technical challenges, such as the need for the development of novel signal processing and data analysis tools in order to make the most of the new experimental tools. We review recent work in some active topics, such as region of interest segmentation algorithms capable of demixing overlapping signals, and new highly accurate algorithms for calcium transient detection. These advances motivate the development of new data analysis tools capable of dealing with spatial or spatiotemporal patterns of neural activity, that scale well with pattern size.
Physical activity in low-income postpartum women.
Wilkinson, Susan; Huang, Chiu-Mieh; Walker, Lorraine O; Sterling, Bobbie Sue; Kim, Minseong
2004-01-01
To validate the 7-day physical activity recall (PAR), including alternative PAR scoring algorithms, using pedometer readings with low-income postpartum women, and to describe physical activity patterns of a low-income population of postpartum women. Forty-four women (13 African American, 19 Hispanic, and 12 White) from the Austin New Mothers Study (ANMS) were interviewed at 3 months postpartum. Data were scored alternatively according to the Blair (sitting treated as light activity) and Welk (sitting excluded from light activity and treated as rest) algorithms. Step counts based on 3 days of wearing pedometers served as the validation measure. Using the Welk algorithm, PAR components significantly correlated with step counts were: minutes spent in light activity, total activity (sum of light to very hard activity), and energy expenditure. Minutes spent in sitting were negatively correlated with step counts. No PAR component activities derived with the Blair algorithm were significantly related to step counts. The largest amount of active time was spent in light activity: 384.4 minutes with the Welk algorithm. Mothers averaged fewer than 16 minutes per day in moderate or high intensity activity. Step counts measured by pedometers averaged 6,262 (SD = 2,712) per day. The findings indicate support for the validity of the PAR as a measure of physical activity with low-income postpartum mothers when scored according to the Welk algorithm. On average, low-income postpartum women in this study did not meet recommendations for amount of moderate or high intensity physical activity.
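The difference between the two scoring rules, as described, reduces to where reported sitting minutes are counted; a hypothetical sketch with illustrative minutes:

```python
# Sketch of the two PAR scoring rules as described above: Blair counts reported
# sitting as light activity, Welk treats it as rest. Minutes are illustrative.
recall = {"sitting": 300, "light": 384, "moderate": 12, "hard": 3, "very_hard": 1}

def score(minutes, algorithm):
    light = minutes["light"] + (minutes["sitting"] if algorithm == "blair" else 0)
    total_active = light + minutes["moderate"] + minutes["hard"] + minutes["very_hard"]
    return {"light": light, "total_active": total_active}

print("Blair:", score(recall, "blair"))
print("Welk: ", score(recall, "welk"))
```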
The Goes-R Geostationary Lightning Mapper (GLM)
NASA Technical Reports Server (NTRS)
Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Mach, Douglas
2011-01-01
The Geostationary Operational Environmental Satellite (GOES-R) is the next series to follow the existing GOES system currently operating over the Western Hemisphere. Superior spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. Advancements over current GOES capabilities include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), and improved storm diagnostic capability with the Advanced Baseline Imager. The GLM will map total lightning activity (in-cloud and cloud-to-ground lightning flashes) continuously day and night with near-uniform spatial resolution of 8 km and a product refresh rate of less than 20 sec over the Americas and adjacent oceanic regions. This will aid in forecasting severe storms and tornado activity, and convective weather impacts on aviation safety and efficiency. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms, cal/val performance monitoring tools, and new applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds are being used to develop the pre-launch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution. In this paper we report on new nowcasting and storm warning applications being developed and evaluated at various NOAA testbeds.
NASA Technical Reports Server (NTRS)
Trevino, Luis; Berg, Peter; England, Dwight; Johnson, Stephen B.
2016-01-01
Analysis methods and testing processes are essential activities in the engineering development and verification of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS). Central to mission success is reliable verification of the Mission and Fault Management (M&FM) algorithms for the SLS launch vehicle (LV) flight software. This is particularly difficult because M&FM algorithms integrate and operate LV subsystems, which consist of diverse forms of hardware and software themselves, with equally diverse integration from the engineering disciplines of LV subsystems. M&FM operation of SLS requires a changing mix of LV automation. During pre-launch, the LV is primarily operated by the Kennedy Space Center (KSC) Ground Systems Development and Operations (GSDO) organization, with some LV automation of time-critical functions, and much more autonomous LV operation during ascent that has crucial interactions with the Orion crew capsule, its astronauts, and with mission controllers at the Johnson Space Center. M&FM algorithms must perform all nominal mission commanding via the flight computer to control LV states from pre-launch through disposal, and must also address failure conditions by initiating autonomous or commanded aborts (crew capsule escape from the failing LV), redundancy management of failing subsystems and components, and safing actions to reduce or prevent threats to ground systems and crew. To address the criticality of the verification testing of these algorithms, the NASA M&FM team has utilized the State Flow environment (SFE) with its existing Vehicle Management End-to-End Testbed (VMET) platform, which also hosts vendor-supplied physics-based LV subsystem models. The human-derived M&FM algorithms are designed and vetted in Integrated Development Teams composed of design and development disciplines such as Systems Engineering, Flight Software (FSW), Safety and Mission Assurance (S&MA), and major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GN&C), Thrust Vector Control (TVC), liquid engines, and the astronaut crew office. Since the algorithms are realized using model-based engineering (MBE) methods from a hybrid of the Unified Modeling Language (UML) and Systems Modeling Language (SysML), SFE methods are a natural fit to provide an in-depth analysis of the interactive behavior of these algorithms with the SLS LV subsystem models. For this, the M&FM algorithms and the SLS LV subsystem models are modeled using constructs provided by Matlab, which also enables modeling of the accompanying interfaces, providing greater flexibility for integrated testing and analysis and helping forecast expected behavior in forthcoming VMET integrated testing activities. In VMET, the M&FM algorithms are prototyped and implemented using the same C++ programming language and similar state machine architectural concepts used by the FSW group. Due to the interactive complexity of the algorithms, VMET testing thus far has verified all the individual M&FM subsystem algorithms with select subsystem vendor models but is steadily progressing to assessing the interactive behavior of these algorithms with LV subsystems, as represented by subsystem models. The novel SFE application has proven useful for quick-look analysis of early integrated system behavior and assessment of the M&FM algorithms with the modeled LV subsystems.
This early MBE analysis generates vital insight into integrated system behaviors, algorithm sensitivities, and design issues, and has aided in debugging the M&FM algorithms well before full testing can begin in more expensive, higher-fidelity, but more arduous environments such as VMET, FSW testing, and the Systems Integration Lab (SIL). SFE has exhibited both expected and unexpected behaviors in nominal and off-nominal test cases prior to full VMET testing. In many findings, these behavioral characteristics were used to correct the M&FM algorithms, enable better test coverage, and develop more effective test cases for each of the LV subsystems. This has improved the fidelity of testing and planning for the next generation of M&FM algorithms as the SLS program evolves from non-crewed to crewed flight, impacting subsystem configurations and the M&FM algorithms that control them. SFE analysis has improved the robustness and reliability of the M&FM algorithms by revealing implementation errors and documentation inconsistencies. It is also improving planning efficiency for future VMET testing of the M&FM algorithms hosted in the LV flight computers, further reducing risk for the SLS launch infrastructure, the SLS LV, and, most importantly, the crew.
Methods to assess an exercise intervention trial based on 3-level functional data.
Li, Haocheng; Kozey Keadle, Sarah; Staudenmayer, John; Assaad, Houssein; Huang, Jianhua Z; Carroll, Raymond J
2015-10-01
Motivated by data recording the effects of an exercise intervention on subjects' physical activity over time, we develop a model to assess the effects of a treatment when the data are functional with 3 levels (subjects, weeks and days in our application) and possibly incomplete. We develop a model with 3-level mean structure effects, all stratified by treatment and subject random effects, including a general subject effect and nested effects for the 3 levels. The mean and random structures are specified as smooth curves measured at various time points. The association structure of the 3-level data is induced through the random curves, which are summarized using a few important principal components. We use penalized splines to model the mean curves and the principal component curves, and cast the proposed model into a mixed effects model framework for model fitting, prediction and inference. We develop an algorithm to fit the model iteratively with the Expectation/Conditional Maximization Either (ECME) version of the EM algorithm and eigenvalue decompositions. Selection of the number of principal components and handling incomplete data issues are incorporated into the algorithm. The performance of the Wald-type hypothesis test is also discussed. The method is applied to the physical activity data and evaluated empirically by a simulation study.
Improvements to a five-phase ABS algorithm for experimental validation
NASA Astrophysics Data System (ADS)
Gerard, Mathieu; Pasillas-Lépine, William; de Vries, Edwin; Verhaegen, Michel
2012-10-01
The anti-lock braking system (ABS) is the most important active safety system for passenger cars. Unfortunately, the literature is imprecise about its description, stability and performance. This research improves a five-phase hybrid ABS control algorithm based on wheel deceleration [W. Pasillas-Lépine, Hybrid modeling and limit cycle analysis for a class of five-phase anti-lock brake algorithms, Veh. Syst. Dyn. 44 (2006), pp. 173-188] and validates it on a tyre-in-the-loop laboratory facility. Five relevant effects are modelled so that the simulation matches reality: oscillations in measurements, wheel acceleration reconstruction, brake pressure dynamics, brake efficiency changes and tyre relaxation. The time delays in measurement and actuation have been identified as the main difficulty for the initial algorithm to work in practice. Three methods are proposed in order to deal with these delays. It is verified that the ABS limit cycles encircle the optimal braking point, without assuming that any tyre parameter is known a priori. The ABS algorithm is compared with the commercial algorithm developed by Bosch.
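The following schematic (not the validated controller) conveys the spirit of deceleration-threshold logic in a five-phase ABS: the brake-pressure command cycles through phases as wheel angular deceleration crosses fixed thresholds.

```python
# Schematic five-phase switching logic driven by wheel angular deceleration.
# Thresholds and phase semantics are illustrative, not the validated algorithm.
def abs_phase_step(phase, wheel_decel, thr_high=40.0, thr_low=10.0):
    """Return (next_phase, pressure_command); decel in rad/s^2 (illustrative)."""
    if phase == 1 and wheel_decel > thr_high:
        return 2, "hold"           # incipient lock detected
    if phase == 2 and wheel_decel > thr_high:
        return 3, "decrease"       # still locking: dump pressure
    if phase == 3 and wheel_decel < thr_low:
        return 4, "hold"           # wheel spinning back up
    if phase == 4 and wheel_decel < -thr_low:
        return 5, "increase-slow"  # wheel recovered: reapply gently
    if phase == 5 and wheel_decel > thr_low:
        return 1, "increase"       # restart the cycle
    return phase, {1: "increase", 2: "hold", 3: "decrease",
                   4: "hold", 5: "increase-slow"}[phase]

phase = 1
for decel in [5, 45, 50, 8, -15, 20, 5]:
    phase, cmd = abs_phase_step(phase, decel)
    print(phase, cmd)
```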
Schiavo, M; Bagnara, M C; Pomposelli, E; Altrinetti, V; Calamia, I; Camerieri, L; Giusti, M; Pesce, G; Reitano, C; Bagnasco, M; Caputo, M
2013-09-01
Radioiodine is a common option for treatment of hyperfunctioning thyroid nodules. Due to the expected selective radioiodine uptake by the adenoma, relatively high "fixed" activities are often used. Alternatively, the activity can be calculated individually from the prescription of a fixed target absorbed dose. We evaluated the use of an algorithm for personalized radioiodine activity calculation, which as a rule allows the administration of lower radioiodine activities. Seventy-five patients with a single hyperfunctioning thyroid nodule eligible for 131I treatment were studied. The activities of 131I to be administered were estimated by the method described by Traino et al., originally developed for Graves' disease, assuming selective and homogeneous 131I uptake by the adenoma. The method takes into account 131I uptake and its effective half-life, the target (adenoma) volume, and its expected volume reduction during treatment. A comparison was performed with the activities calculated by other dosimetric protocols and with the "fixed"-activity method. 131I uptake was measured by external counting, thyroid nodule volume by ultrasonography, and thyroid hormones and TSH by ELISA. Remission of hyperthyroidism was observed in all but one patient; the volume reduction of the adenoma closely matched that assumed by our model. The effective half-life was highly variable between patients and critically affected the dose calculation. The administered activities were clearly lower than the "fixed" activities and those prescribed by other protocols. The proposed algorithm proved effective for the treatment of single hyperfunctioning thyroid nodules as well, allowing a significant reduction of administered 131I activities without loss of clinical efficacy.
Zare-Shahabadi, Vali; Abbasitabar, Fatemeh
2010-09-01
Quantitative structure-activity relationship models were derived for 107 analogs of 1-[(2-hydroxyethoxy)methyl]-6-(phenylthio)thymine, a potent inhibitor of the HIV-1 reverse transcriptase. The activities of these compounds were investigated by means of the multiple linear regression (MLR) technique. An ant colony optimization algorithm, called Memorized_ACS, was applied for selecting relevant descriptors and detecting outliers. This algorithm uses an external memory based upon knowledge incorporated from previous iterations. At first the memory is empty; it is then filled by running several ACS algorithms. In this respect, after each ACS run, the elite ant is stored in the memory and the process is continued until the memory is full. Here, pheromone updating is performed by all elite ants collected in the memory; this results in improvements in both the exploration and exploitation behaviors of the ACS algorithm. The memory is then emptied and filled again by performing several ACS runs using the updated pheromone trails. This process is repeated for several iterations. At the end, the memory contains several top solutions for the problem. The number of appearances of each descriptor in the external memory is a good criterion of its importance. Finally, prediction is performed by the elitist ant, and interpretation is carried out by considering the importance of each descriptor. The best MLR model has a training error of 0.47 log(1/EC50) units (R² = 0.90) and a prediction error of 0.76 log(1/EC50) units (R² = 0.88).
NASA Astrophysics Data System (ADS)
Moneta, Diana; Mora, Paolo; Viganò, Giacomo; Alimonti, Gianluca
2014-12-01
The diffusion of Distributed Generation (DG) based on Renewable Energy Sources (RES) requires new strategies to ensure reliable and economic operation of distribution networks and to support the diffusion of DG itself. An advanced algorithm (DISCoVER - DIStribution Company VoltagE Regulator) is being developed to optimize the operation of active networks by means of advanced voltage control based on several regulations. Starting from forecasted load and generation, real on-field measurements, technical constraints, and costs for each resource, the algorithm generates for each time period a set of commands for controllable resources that achieves the technical goals while minimizing the overall cost. Before integrating the controller into the telecontrol system of real networks, a complete simulation phase has been started in order to validate the proper behaviour of the algorithm and to identify possible critical conditions. The first step concerns the definition of a wide range of "case studies": combinations of network topology, technical constraints and targets, load and generation profiles, and resource "costs" that define a valid context for testing the algorithm, with particular focus on battery and RES management. First results from the simulation activity on test networks (based on real MV grids) and actual battery characteristics are given, together with prospective performance in real-case applications.
Discriminative structural approaches for enzyme active-site prediction.
Kato, Tsuyoshi; Nagano, Nozomi
2011-02-15
Predicting enzyme active-sites in proteins is an important issue not only for protein science but also for a variety of practical applications such as drug design. Because enzyme reaction mechanisms are based on the local structures of enzyme active-sites, various template-based methods that compare local structures in proteins have been developed to date. In comparing such local sites, a simple measurement, RMSD, has been used so far. This paper introduces new machine learning algorithms that refine the similarity/deviation measure for the comparison of local structures. The similarity/deviation is applied to two types of analysis: single-template analysis and multiple-template analysis. In single-template analysis, a single template is used as a query to search proteins for active sites, whereas in multiple-template analysis a protein structure is examined as a query to discover possible active-sites using a set of templates. This paper experimentally illustrates that the machine learning algorithms effectively improve the similarity/deviation measurements for both analyses.
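The RMSD baseline that the learned similarity refines is the standard superposition RMSD after Kabsch alignment; a self-contained sketch with toy four-atom sites:

```python
# Standard superposition RMSD between two local sites (Kabsch alignment), the
# baseline measure mentioned above; coordinates are toy 4-atom sites.
import numpy as np

def kabsch_rmsd(P, Q):
    P, Q = P - P.mean(0), Q - Q.mean(0)               # center both point sets
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # avoid improper rotation
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))

site1 = np.array([[0, 0, 0], [1.5, 0, 0], [0, 1.5, 0], [0, 0, 1.5]], float)
theta = np.radians(30)
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0], [0, 0, 1]])
site2 = site1 @ rot.T + 0.05                          # rotated + translated copy
print("RMSD = %.4f A" % kabsch_rmsd(site1, site2))    # ~0 after alignment
```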
Evaluation of stochastic differential equation approximation of ion channel gating models.
Bruce, Ian C
2009-04-01
Fox and Lu derived an algorithm based on stochastic differential equations for approximating the kinetics of ion channel gating that is simpler and faster than "exact" algorithms for simulating Markov process models of channel gating. However, the approximation may not be sufficiently accurate to predict statistics of action potential generation in some cases. The objective of this study was to develop a framework for analyzing the inaccuracies and determining their origin. Simulations of a patch of membrane with voltage-gated sodium and potassium channels were performed using an exact algorithm for the kinetics of channel gating and the approximate algorithm of Fox & Lu. The Fox & Lu algorithm assumes that channel gating particle dynamics have a stochastic term that is uncorrelated, zero-mean Gaussian noise, whereas the results of this study demonstrate that in many cases the stochastic term in the Fox & Lu algorithm should be correlated and non-Gaussian noise with a non-zero mean. The results indicate that: (i) the source of the inaccuracy is that the Fox & Lu algorithm does not adequately describe the combined behavior of the multiple activation particles in each sodium and potassium channel, and (ii) the accuracy does not improve with increasing numbers of channels.
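The flavor of the Fox and Lu approximation for a single potassium gating variable n is a Langevin equation whose state-dependent Gaussian noise shrinks as the channel count N grows; the sketch below uses standard Hodgkin-Huxley rate constants at a fixed voltage and is purely illustrative.

```python
# Langevin (Fox & Lu style) approximation for a potassium gating variable n:
# dn = (alpha (1-n) - beta n) dt + sqrt((alpha (1-n) + beta n)/N) dW.
# Standard HH rates at fixed voltage; illustrative, not the study's code.
import numpy as np

def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)

rng = np.random.default_rng(0)
V, N, dt, steps = -40.0, 1000, 0.01, 5000          # mV, channels, ms
a, b = alpha_n(V), beta_n(V)
n = a / (a + b)                                     # start at steady state
trace = np.empty(steps)
for k in range(steps):
    drift = a * (1 - n) - b * n
    noise = np.sqrt(max(a * (1 - n) + b * n, 0) / N * dt) * rng.normal()
    n = np.clip(n + drift * dt + noise, 0.0, 1.0)   # clipping: one common fix-up
    trace[k] = n
print("mean n = %.3f (deterministic %.3f), std = %.4f"
      % (trace.mean(), a / (a + b), trace.std()))
```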
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-02-02
This report consists of three separate but related reports: (1) Human Resource Development, (2) Carbon-based Structural Materials Research Cluster, and (3) Data Parallel Algorithms for Scientific Computing. To meet the objectives of the Human Resource Development plan, the plan includes K-12 enrichment activities, undergraduate research opportunities for students at the state's two Historically Black Colleges and Universities, graduate research through cluster assistantships and through a traineeship program targeted specifically to minorities, women and the disabled, and faculty development through participation in research clusters. One research cluster is the chemistry and physics of carbon-based materials. The objective of this cluster is to develop a self-sustaining group of researchers in carbon-based materials research within the institutions of higher education in the state of West Virginia. The projects will involve analysis of cokes, graphites and other carbons in order to understand the properties that provide desirable structural characteristics, including resistance to oxidation, levels of anisotropy and structural characteristics of the carbons themselves. In the proposed cluster on parallel algorithms, the research topics of four WVU faculty and three state liberal arts college faculty are: (1) modeling of self-organized critical systems by cellular automata; (2) multiprefix algorithms and fat-free embeddings; (3) offline and online partitioning of data computation; and (4) manipulating and rendering three-dimensional objects. This cluster furthers the state Experimental Program to Stimulate Competitive Research plan by building on existing strengths at WVU in parallel algorithms.
Du, Yuncheng; Budman, Hector M; Duever, Thomas A
2017-06-01
Accurate and fast quantitative analysis of living cells from fluorescence microscopy images is useful for evaluating experimental outcomes and cell culture protocols. An algorithm is developed in this work to automatically segment and distinguish apoptotic cells from normal cells. The algorithm involves three steps: two segmentation steps and a classification step. The segmentation steps are: (i) a coarse segmentation, combining a range filter with a marching squares method, used as a prefiltering step to provide the approximate positions of cells within a two-dimensional matrix that stores the cell images and to count the number of cells in a given image; and (ii) a fine segmentation step, using the Active Contours Without Edges method, applied to the boundaries of the cells identified in the coarse segmentation step. Although this basic two-step approach provides accurate edges when the cells in a given image are sparsely distributed, the occurrence of clusters of cells in high-cell-density samples requires further processing. Hence, a novel algorithm for clusters is developed to identify the edges of cells within clusters and to approximate their morphological features. Based on the segmentation results, a support vector machine classifier that uses three morphological features (the mean value of pixel intensities in the cellular regions, the variance of pixel intensities in the vicinity of cell boundaries, and the lengths of the boundaries) is developed for distinguishing apoptotic cells from normal cells. The algorithm is shown to be efficient in terms of computational time, quantitative analysis, and differentiation accuracy, as compared with the use of the active contours method without the proposed preliminary coarse segmentation step.
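A compact stand-in for the two-step pipeline is sketched below: a local range (max minus min) filter flags textured, cell-containing regions, and a Chan-Vese-style two-region update then refines the mask by competing mean intensities. The actual algorithm uses marching squares and a full Active Contours Without Edges solver; this only conveys the general idea.

```python
# Stand-in for the two-step pipeline: range-filter coarse mask, then a
# Chan-Vese-like two-region refinement. Pure NumPy/SciPy; synthetic image.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
img = rng.normal(0.2, 0.05, (128, 128))
img[40:80, 50:90] = rng.normal(0.7, 0.08, (40, 40))        # synthetic "cell"

# Step 1: coarse segmentation with a local range (max - min) filter
rng_filt = ndimage.maximum_filter(img, 5) - ndimage.minimum_filter(img, 5)
mask = img > img[rng_filt < rng_filt.mean()].mean() + 0.1  # rough foreground guess

# Step 2: Chan-Vese-like refinement -- alternate mean updates and reassignment
for _ in range(20):
    c_in, c_out = img[mask].mean(), img[~mask].mean()
    mask = (img - c_in) ** 2 < (img - c_out) ** 2
    mask = ndimage.binary_opening(mask, iterations=1)      # cheap smoothing term

print("segmented area:", mask.sum(), "pixels (true cell = 1600)")
```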
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (key components of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near-completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
Modelling of evaporation of a dispersed liquid component in a chemically active gas flow
NASA Astrophysics Data System (ADS)
Kryukov, V. G.; Naumov, V. I.; Kotov, V. Yu.
1994-01-01
A model has been developed to investigate the evaporation of dispersed liquids in a chemically active gas flow. Major efforts have been directed at the development of algorithms for implementing this model. The numerical experiments demonstrate that significant changes in the composition and temperature of combustion products take place in the boundary layer. This makes it possible to model energy-release processes in the combustion chambers of liquid-propellant rocket engines, gas-turbine engines, and other power devices more accurately.
USDA-ARS?s Scientific Manuscript database
The purpose of SMAP (Soil Moisture Active Passive) Validation Experiment 2012 (SMAPVEX12) campaign was to collect data for the pre-launch development and validation of SMAP soil moisture algorithms. SMAP is a National Aeronautics and Space Administration’s (NASA) satellite mission designed for the m...
Prognostics of Power Electronics, Methods and Validation Experiments
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.; Celaya, Jose R.; Biswas, Gautam; Goebel, Kai
2012-01-01
Failure of electronic devices is a concern for future electric aircraft, which will see an increase in electronics used to drive and control safety-critical equipment throughout the aircraft. As a result, investigation of precursors to failure in electronics and prediction of the remaining life of electronic components are of key importance. DC-DC power converters are power electronics systems typically employed as sourcing elements for avionics equipment. Current research efforts in prognostics for these power systems focus on the identification of failure mechanisms and the development of accelerated aging methodologies and systems to accelerate the aging process of test devices, while continuously measuring key electrical and thermal parameters. Preliminary model-based prognostics algorithms have been developed making use of empirical degradation models and physics-inspired degradation models, with focus on key components such as electrolytic capacitors and power MOSFETs (metal-oxide-semiconductor field-effect transistors). This paper presents current results on the development of validation methods for prognostics algorithms for power electrolytic capacitors, particularly the use of accelerated aging systems for algorithm validation. Validation of prognostics algorithms presents difficulties in practice due to the lack of run-to-failure experiments in deployed systems. By using accelerated experiments, we circumvent this problem in order to define initial validation activities.
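The empirical-degradation idea can be illustrated by fitting an exponential capacitance-loss model to accelerated-aging measurements and extrapolating to an end-of-life threshold (commonly around 20% loss) to estimate remaining useful life; the numbers below are synthetic, not NASA test data.

```python
# Sketch of empirical degradation prognostics for an electrolytic capacitor:
# fit C(t) = C0 exp(-k t) to aging data, extrapolate to an end-of-life
# threshold, and report remaining useful life. Synthetic numbers throughout.
import numpy as np
from scipy.optimize import curve_fit

hours = np.array([0, 50, 100, 150, 200, 250.0])
cap_pct = np.array([100, 97.8, 95.9, 94.2, 92.1, 90.4])    # % of nominal C

model = lambda t, c0, k: c0 * np.exp(-k * t)
(c0, k), _ = curve_fit(model, hours, cap_pct, p0=(100.0, 1e-3))

eol_threshold = 80.0                                        # % of nominal
t_eol = np.log(c0 / eol_threshold) / k
print("estimated RUL from t=250 h: %.0f h" % (t_eol - 250))
```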
BOREAS RSS-7 Regional LAI and FPAR Images From 10-Day AVHRR-LAC Composites
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Chen, Jing; Cihlar, Josef
2000-01-01
The BOReal Ecosystem-Atmosphere Study Remote Sensing Science (BOREAS RSS-7) team collected various data sets to develop and validate an algorithm to allow the retrieval of the spatial distribution of Leaf Area Index (LAI) from remotely sensed images. Advanced Very High Resolution Radiometer (AVHRR) level-4c 10-day composite Normalized Difference Vegetation Index (NDVI) images produced at CCRS were used to produce images of LAI and the Fraction of Photosynthetically Active Radiation (FPAR) absorbed by plant canopies for the three summer IFCs in 1994 across the BOREAS region. The algorithms were developed based on ground measurements and Landsat Thematic Mapper (TM) images. The data are stored in binary image format files.
Syndromic Algorithms for Detection of Gambiense Human African Trypanosomiasis in South Sudan
Palmer, Jennifer J.; Surur, Elizeous I.; Goch, Garang W.; Mayen, Mangar A.; Lindner, Andreas K.; Pittet, Anne; Kasparian, Serena; Checchi, Francesco; Whitty, Christopher J. M.
2013-01-01
Background: Active screening by mobile teams is considered the best method for detecting human African trypanosomiasis (HAT) caused by Trypanosoma brucei gambiense, but the current funding context in many post-conflict countries limits this approach. As an alternative, non-specialist health care workers (HCWs) in peripheral health facilities could be trained to identify potential cases who need testing based on their symptoms. We explored the predictive value of syndromic referral algorithms to identify symptomatic cases of HAT among a treatment-seeking population in Nimule, South Sudan.
Methodology/Principal Findings: Symptom data from 462 patients (27 cases) presenting for a HAT test via passive screening over a 7-month period were collected to construct and evaluate over 14,000 four-item syndromic algorithms considered simple enough to be used by peripheral HCWs. For comparison, algorithms developed in other settings were also tested on our data, and a panel of expert HAT clinicians were asked to make referral decisions based on the symptom dataset. The best-performing algorithms consisted of three core symptoms (sleep problems, neurological problems and weight loss), with or without a history of oedema, cervical adenopathy or proximity to livestock. They had a sensitivity of 88.9–92.6%, a negative predictive value of up to 98.8%, and a positive predictive value in this context of 8.4–8.7%. In terms of sensitivity, these out-performed more complex algorithms identified in other studies, as well as the expert panel. The best-performing algorithm is predicted to identify about 9/10 treatment-seeking HAT cases, though only 1/10 patients referred would test positive.
Conclusions/Significance: In the absence of regular active screening, improving referrals of HAT patients through other means is essential. Systematic use of syndromic algorithms by peripheral HCWs has the potential to increase case detection and would increase their participation in HAT programmes. The algorithms proposed here, though promising, should be validated elsewhere. PMID:23350005
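The reported predictive values follow from sensitivity, specificity, and the prevalence of HAT among those tested (27/462), which is why a roughly 90%-sensitive algorithm still yields a PPV below 10%. In the sketch below the specificity is an assumed value, chosen only so the arithmetic reproduces the reported figures.

```python
# Arithmetic behind the reported predictive values: PPV and NPV from
# sensitivity, specificity, and prevalence among those tested (27/462).
# The specificity here is assumed for illustration, not taken from the paper.
sens, spec = 0.926, 0.40
prev = 27 / 462

tp = sens * prev
fp = (1 - spec) * (1 - prev)
fn = (1 - sens) * prev
tn = spec * (1 - prev)
print("PPV = %.3f  NPV = %.3f" % (tp / (tp + fp), tn / (tn + fn)))
```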
Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William; Budzien, Joanne Louise; Ferguson, Jim Michael
This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) develop software tools to support code verification analysis; 2) document standard definitions of code verification test problems; and 3) perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.
Generic Algorithms for Estimating Foliar Pigment Content
NASA Astrophysics Data System (ADS)
Gitelson, Anatoly; Solovchenko, Alexei
2017-09-01
Foliar pigment contents and composition are main factors governing absorbed photosynthetically active radiation, photosynthetic activity, and physiological status of vegetation. In this study the performance of nondestructive techniques based on leaf reflectance was tested for estimating chlorophyll (Chl) and anthocyanin (AnC) contents in species with widely variable leaf structure, pigment content, and composition. Only three spectral bands (green, red edge, and near-infrared) are required for nondestructive Chl and AnC estimation with normalized root-mean-square error (NRMSE) below 4.5% and 6.1%, respectively. The algorithms developed are generic, not requiring reparameterization for each species, and allow accurate nondestructive Chl and AnC estimation using simple handheld field/lab instrumentation. They also have potential for the interpretation of airborne and satellite data.
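Indices built from exactly these three bands plausibly take the forms long associated with this line of work, e.g. a red-edge chlorophyll index and a modified anthocyanin reflectance index; the sketch below computes both under that assumption. The calibration from index value to absolute pigment content is dataset-specific and omitted.

```python
import numpy as np

# Assumed index forms (not taken verbatim from this abstract):
#   CI_red-edge = R_nir / R_rededge - 1
#   mARI        = (1/R_green - 1/R_rededge) * R_nir
def ci_red_edge(r_rededge, r_nir):
    return r_nir / r_rededge - 1.0

def mari(r_green, r_rededge, r_nir):
    return (1.0 / r_green - 1.0 / r_rededge) * r_nir

# Hypothetical leaf reflectances in the three bands (unitless).
r_green, r_rededge, r_nir = np.array([0.12]), np.array([0.30]), np.array([0.55])
print(ci_red_edge(r_rededge, r_nir))   # -> 0.833
print(mari(r_green, r_rededge, r_nir)) # -> 2.75
```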
Awad, Joseph; Owrangi, Amir; Villemaire, Lauren; O'Riordan, Elaine; Parraga, Grace; Fenster, Aaron
2012-02-01
Manual segmentation of lung tumors is observer dependent and time-consuming but an important component of radiology and radiation oncology workflow. The objective of this study was to generate an automated lung tumor measurement tool for segmentation of pulmonary metastatic tumors from x-ray computed tomography (CT) images to improve reproducibility and decrease the time required to segment tumor boundaries. The authors developed an automated lung tumor segmentation algorithm for volumetric image analysis of chest CT images using shape constrained Otsu multithresholding (SCOMT) and sparse field active surface (SFAS) algorithms. The observer was required to select the tumor center and the SCOMT algorithm subsequently created an initial surface that was deformed using level set SFAS to minimize the total energy consisting of mean separation, edge, partial volume, rolling, distribution, background, shape, volume, smoothness, and curvature energies. The proposed segmentation algorithm was compared to manual segmentation whereby 21 tumors were evaluated using one-dimensional (1D) response evaluation criteria in solid tumors (RECIST), two-dimensional (2D) World Health Organization (WHO), and 3D volume measurements. Linear regression goodness-of-fit measures (r² = 0.63, p < 0.0001; r² = 0.87, p < 0.0001; and r² = 0.96, p < 0.0001), and Pearson correlation coefficients (r = 0.79, p < 0.0001; r = 0.93, p < 0.0001; and r = 0.98, p < 0.0001) for 1D, 2D, and 3D measurements, respectively, showed significant correlations between manual and algorithm results. Intra-observer intraclass correlation coefficients (ICC) demonstrated high reproducibility for algorithm (0.989-0.995, 0.996-0.997, and 0.999-0.999) and manual measurements (0.975-0.993, 0.985-0.993, and 0.980-0.992) for 1D, 2D, and 3D measurements, respectively. The intra-observer coefficient of variation (CV%) was low for algorithm (3.09%-4.67%, 4.85%-5.84%, and 5.65%-5.88%) and manual observers (4.20%-6.61%, 8.14%-9.57%, and 14.57%-21.61%) for 1D, 2D, and 3D measurements, respectively. The authors developed an automated segmentation algorithm requiring only that the operator select the tumor to measure pulmonary metastatic tumors in 1D, 2D, and 3D. Algorithm and manual measurements were significantly correlated. Since the algorithm segmentation involves selection of a single seed point, it resulted in reduced intra-observer variability and decreased the time required to make the measurements.
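A minimal sketch of the first stage only, assuming a plain multi-Otsu threshold around a user-selected seed point (the published SCOMT additionally applies shape constraints, and the SFAS level-set refinement is not shown):

```python
import numpy as np
from skimage.filters import threshold_multiotsu

# Sketch: Otsu multithresholding in a window around the seed, keeping the
# intensity class that contains the seed voxel as the initial tumor mask.
def initial_tumor_mask(ct_slice, seed_rc, window=32, classes=3):
    r, c = seed_rc
    patch = ct_slice[max(r - window, 0):r + window, max(c - window, 0):c + window]
    thresholds = threshold_multiotsu(patch, classes=classes)
    seed_class = np.digitize(ct_slice[r, c], thresholds)
    return np.digitize(ct_slice, thresholds) == seed_class

rng = np.random.default_rng(0)
ct = rng.normal(0, 40, (128, 128))
ct[40:60, 40:60] += 300            # toy "tumor" region
mask = initial_tumor_mask(ct, (50, 50))
print(mask.sum())                  # roughly the tumor area in pixels
```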
The GOES-R Series Geostationary Lightning Mapper (GLM)
NASA Technical Reports Server (NTRS)
Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Mach, Douglas M.
2011-01-01
The Geostationary Operational Environmental Satellite (GOES-R) is the next series to follow the existing GOES system currently operating over the Western Hemisphere. Superior spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. Advancements over current GOES capabilities include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), which has just completed its Critical Design Review and will move forward into the construction phase of instrument development. The GLM will operate continuously day and night with near-uniform spatial resolution of 8 km with a product refresh rate of less than 20 sec over the Americas and adjacent oceanic regions. This will aid in forecasting severe storms and tornado activity, and convective weather impacts on aviation safety and efficiency. In parallel with the instrument development (an engineering development unit and 4 flight models), a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms, cal/val performance monitoring tools, and new applications. Proxy total lightning data from the NASA Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional ground-based lightning networks are being used to develop the pre-launch algorithms, test data sets, and applications, as well as improve our knowledge of thunderstorm initiation and evolution. In this presentation we review the planned implementation of the instrument and the suite of operational algorithms.
Computer algorithms in the search for unrelated stem cell donors.
Steiner, David
2012-01-01
Hematopoietic stem cell transplantation (HSCT) is a medical procedure in the field of hematology and oncology, most often performed for patients with certain cancers of the blood or bone marrow. Many patients have no suitable HLA-matched donor within their family, so physicians must activate a "donor search process" by interacting with national and international donor registries, which search their databases for adult unrelated donors or cord blood units (CBU). Information and communication technologies play a key role in the donor search process in donor registries both nationally and internationally. One of the major challenges for donor registry computer systems is the development of a reliable search algorithm. This work discusses the top-down design of such algorithms and current practice. Based on our experience with systems used by several stem cell donor registries, we highlight typical pitfalls in the implementation of an algorithm and the underlying data structure.
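For readers unfamiliar with the core matching step, a toy sketch of allele-match counting at five HLA loci follows. The locus set, naive pairing, and data are illustrative assumptions; real registry algorithms additionally handle typing resolution, ambiguous typings, and haplotype-frequency-based prediction.

```python
# Toy sketch of counting allele matches at selected HLA loci
# (a common 10-allele match scheme is assumed here).
LOCI = ("A", "B", "C", "DRB1", "DQB1")

def match_score(patient, donor):
    """patient/donor: dict locus -> (allele1, allele2). Returns matches out of 10."""
    score = 0
    for locus in LOCI:
        p, d = sorted(patient[locus]), sorted(donor[locus])
        # Naive pairwise comparison; real systems resolve ambiguity first.
        score += sum(a == b for a, b in zip(p, d))
    return score

patient = {"A": ("01:01", "02:01"), "B": ("07:02", "08:01"), "C": ("07:01", "07:02"),
           "DRB1": ("15:01", "03:01"), "DQB1": ("06:02", "02:01")}
print(match_score(patient, patient))  # -> 10, a full match against itself
```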
Design of synthetic biological logic circuits based on evolutionary algorithm.
Chuang, Chia-Hua; Lin, Chun-Liang; Chang, Yen-Chang; Jennawasin, Tanagorn; Chen, Po-Kuei
2013-08-01
The construction of an artificial biological logic circuit using a systematic strategy is recognised as one of the most important topics for the development of synthetic biology. In this study, a real-structured genetic algorithm (RSGA), which combines general advantages of the traditional real genetic algorithm with those of the structured genetic algorithm, is proposed to deal with the biological logic circuit design problem. A general model with the cis-regulatory input function and appropriate promoter activity functions is proposed to synthesise a wide variety of fundamental logic gates such as NOT, Buffer, AND, OR, NAND, NOR and XOR. The results obtained can be extended to synthesise advanced combinational and sequential logic circuits by topologically distinct connections. The resulting optimal design of these logic gates and circuits is established via the RSGA. The in silico modelling approach has been verified, showing its clear advantages for this purpose.
A Cross-Layer User Centric Vertical Handover Decision Approach Based on MIH Local Triggers
NASA Astrophysics Data System (ADS)
Rehan, Maaz; Yousaf, Muhammad; Qayyum, Amir; Malik, Shahzad
Vertical handover decision algorithms that are based on user preferences and coupled with Media Independent Handover (MIH) local triggers have not been explored much in the literature. We have developed a comprehensive cross-layer solution, called the Vertical Handover Decision (VHOD) approach, which consists of three parts: a mechanism for collecting and storing user preferences, the Vertical Handover Decision (VHOD) algorithm, and the MIH Function (MIHF). The MIHF triggers the VHOD algorithm, which operates on user preferences to issue handover commands to the mobility management protocol. The VHOD algorithm is an MIH User and therefore needs to subscribe to events and configure thresholds for receiving triggers from the MIHF. In this regard, we have performed experiments in WLAN to suggest thresholds for the Link Going Down trigger. We have also critically evaluated the handover decision process, proposed a Just-in-time interface activation technique, compared our proposed approach with prominent user-centric approaches, and analyzed our approach from different aspects.
Analysis of modal behavior at frequency cross-over
NASA Astrophysics Data System (ADS)
Costa, Robert N., Jr.
1994-11-01
The existence of the mode crossing condition is detected and analyzed in the Active Control of Space Structures Model 4 (ACOSS4). The condition is studied for its contribution to the inability of previous algorithms to successfully optimize the structure and converge to a feasible solution. A new algorithm is developed to detect and correct for mode crossings. The existence of the mode crossing condition is verified in ACOSS4 and found not to have appreciably affected the solution. The structure is then successfully optimized using new analytic methods based on modal expansion. An unrelated error in the optimization algorithm previously used is verified and corrected, thereby equipping the optimization algorithm with a second analytic method for eigenvector differentiation based on Nelson's Method. The second structure is the Control of Flexible Structures (COFS). The COFS structure is successfully reproduced and an initial eigenanalysis completed.
NASA Astrophysics Data System (ADS)
Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.
2016-03-01
Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image and video analysis. The proposed tool, denoted the PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
ECG-gated interventional cardiac reconstruction for non-periodic motion.
Rohkohl, Christopher; Lauritsch, Günter; Biller, Lisa; Hornegger, Joachim
2010-01-01
The 3-D reconstruction of cardiac vasculature using C-arm CT is an active and challenging field of research. In interventional environments patients often have arrhythmic heart signals or cannot hold their breath during the complete data acquisition. This important group of patients cannot be reconstructed with current approaches, which depend strongly on a high degree of cardiac motion periodicity to work properly. In last year's MICCAI contribution a first algorithm was presented that is able to estimate non-periodic 4-D motion patterns. However, to some degree that algorithm still depends on periodicity, as it requires a prior image which is obtained using a simple ECG-gated reconstruction. In this work we aim to provide a solution to this problem by developing a motion-compensated ECG-gating algorithm. It is built upon a 4-D time-continuous affine motion model which is capable of compactly describing highly non-periodic motion patterns. A stochastic optimization scheme is derived which minimizes the error between the measured projection data and the forward projection of the motion compensated reconstruction. For evaluation, the algorithm is applied to 5 datasets of the left coronary arteries of patients who ignored the breath-hold command and/or had arrhythmic heart signals during the data acquisition. By applying the developed algorithm the average visibility of the vessel segments could be increased by 27%. The results show that the proposed algorithm provides excellent reconstruction quality in cases where classical approaches fail. The algorithm is highly parallelizable and a clinically feasible runtime of under 4 minutes is achieved using modern graphics card hardware.
ASTER cloud coverage reassessment using MODIS cloud mask products
NASA Astrophysics Data System (ADS)
Tonooka, Hideyuki; Omagari, Kunjuro; Yamamoto, Hirokazu; Tachikawa, Tetsushi; Fujita, Masaru; Paitaer, Zaoreguli
2010-10-01
In the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Project, two kinds of algorithms are used for cloud assessment in Level-1 processing. The first algorithm, based on the LANDSAT-5 TM Automatic Cloud Cover Assessment (ACCA) algorithm, is used for the subset of daytime scenes observed with only VNIR bands and for all nighttime scenes, and the second algorithm, based on the LANDSAT-7 ETM+ ACCA algorithm, is used for most daytime scenes observed with all spectral bands. However, the first algorithm does not work well because it lacks some spectral bands sensitive to cloud detection, and both algorithms have been less accurate over snow/ice covered areas since April 2008, when the SWIR subsystem developed problems. In addition, they perform less well for some combinations of surface type and sun elevation angle. We have therefore developed the ASTER cloud coverage reassessment system using MODIS cloud mask (MOD35) products, and have reassessed cloud coverage for all ASTER archived scenes (>1.7 million scenes). All of the new cloud coverage data are included in the Image Management System (IMS) databases of the ASTER Ground Data System (GDS) and NASA's Land Processes Distributed Active Archive Center (LP DAAC) and used for ASTER product search by users, and cloud mask images are distributed to users through the Internet. Daily upcoming scenes (about 400 scenes per day) are reassessed and inserted into the IMS databases 5 to 7 days after each scene's observation date. Some validation studies of the new cloud coverage data and some mission-related analyses using those data are also demonstrated in the present paper.
Geostationary Lightning Mapper for GOES-R and Beyond
NASA Technical Reports Server (NTRS)
Goodman, Steven J.; Blakeslee, R. J.; Koshak, W.
2008-01-01
The Geostationary Lightning Mapper (GLM) is a single channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk as part of a 3-axis stabilized, geostationary weather satellite system. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series with a planned launch readiness in December 2014 will carry a GLM that will provide continuous day and night observations of lightning from the west coast of Africa (GOES-E) to New Zealand (GOES-W) when the constellation is fully operational. The mission objectives for the GLM are to 1) provide continuous, full-disk lightning measurements for storm warning and nowcasting, 2) provide early warning of tornadic activity, and 3) accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13 year data record of global lightning activity. Instrument formulation studies were completed in March 2007, and the implementation phase to develop a prototype model and up to four flight models will be underway in the latter part of 2007. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama and the Washington DC Metropolitan area) are being used to develop the pre-launch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution. Real-time lightning mapping data are being provided in an experimental mode to selected National Weather Service (NWS) forecast offices in the Southern and Eastern Regions. This effort is designed to help improve our understanding of the application of these data in operational settings.
Ergonomics principles to design clothing work for electrical workers in Colombia.
Castillo, Juan; Cubillos, A
2012-01-01
Recent developments in Colombian legislation have identified the need for protective work clothing designed according to the specifications of the work performed and in compliance with international standards. This requires the development and design of new strategies and measures for work-clothing design. In this study we analyze the activities of workers in the electrical sector; the method examines risk data across various activities, including power generation plants, local facilities, industrial facilities, and maintenance of urban and rural networks. The analysis method takes an ergonomic approach: a risk analysis is performed, the role of the safety expert is evaluated, and a design algorithm developed for this purpose is applied. The result of this study is the identification of the constraints and variables that contribute to an analysis model leading to the development of protective work clothing.
NASA Technical Reports Server (NTRS)
Mikic, I.; Krucinski, S.; Thomas, J. D.
1998-01-01
This paper presents a method for segmentation and tracking of cardiac structures in ultrasound image sequences. The developed algorithm is based on the active contour framework. This approach requires initial placement of the contour close to the desired position in the image, usually an object outline. Best contour shape and position are then calculated, assuming that at this configuration a global energy function, associated with a contour, attains its minimum. Active contours can be used for tracking by selecting a solution from a previous frame as an initial position in a present frame. Such an approach, however, fails for large displacements of the object of interest. This paper presents a technique that incorporates the information on pixel velocities (optical flow) into the estimate of initial contour to enable tracking of fast-moving objects. The algorithm was tested on several ultrasound image sequences, each covering one complete cardiac cycle. The contour successfully tracked boundaries of mitral valve leaflets, aortic root and endocardial borders of the left ventricle. The algorithm-generated outlines were compared against manual tracings by expert physicians. The automated method resulted in contours that were within the boundaries of intraobserver variability.
The inverse electroencephalography pipeline
NASA Astrophysics Data System (ADS)
Weinstein, David Michael
The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.
Status report: Data management program algorithm evaluation activity at Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.
1977-01-01
An algorithm evaluation activity was initiated to study the problems associated with image processing by assessing the independent and interdependent effects of registration, compression, and classification techniques on LANDSAT data for several discipline applications. The objective of the activity was to make recommendations on selected applicable image processing algorithms in terms of accuracy, cost, and timeliness or to propose alternative ways of processing the data. As a means of accomplishing this objective, an Image Coding Panel was established. The conduct of the algorithm evaluation is described.
The design and performance characteristics of a cellular logic 3-D image classification processor
NASA Astrophysics Data System (ADS)
Ankeney, L. A.
1981-04-01
The introduction of high resolution scanning laser radar systems, which are capable of collecting range and reflectivity images, is predicted to significantly influence the development of processors capable of performing autonomous target classification tasks. Actively sensed range images are shown to be superior to passively collected infrared images in both image stability and information content. An illustrated tutorial introduces cellular logic (neighborhood) transformations and two- and three-dimensional erosion and dilation operations, which are used for noise filters and geometric shape measurement. A unique 'cookbook' approach to selecting a sequence of neighborhood transformations suitable for object measurement is developed and related to false alarm rate and algorithm effectiveness measures. The cookbook design approach is used to develop an algorithm to classify objects based upon their 3-D geometrical features. A Monte Carlo performance analysis is used to demonstrate the utility of the design approach by characterizing the ability of the algorithm to classify randomly positioned three-dimensional objects in the presence of additive noise, scale variations, and other forms of image distortion.
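A small sketch of the neighborhood transformations such a tutorial introduces, using SciPy's binary morphology as an assumed software stand-in for the report's cellular-logic processor: erosion followed by dilation ("opening") removes single-pixel noise while approximately preserving object shape.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True                   # a 5x5 object
img[0, 8] = True                       # single-pixel noise
struct = np.ones((3, 3), dtype=bool)   # 8-connected neighborhood

# Opening: erosion removes the noise pixel (and shrinks the object),
# dilation restores the object's original extent.
opened = binary_dilation(binary_erosion(img, struct), struct)
print(opened.astype(int))              # noise removed, object retained
```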
Learning Behavior Characterization with Multi-Feature, Hierarchical Activity Sequences
ERIC Educational Resources Information Center
Ye, Cheng; Segedy, James R.; Kinnebrew, John S.; Biswas, Gautam
2015-01-01
This paper discusses Multi-Feature Hierarchical Sequential Pattern Mining, MFH-SPAM, a novel algorithm that efficiently extracts patterns from students' learning activity sequences. This algorithm extends an existing sequential pattern mining algorithm by dynamically selecting the level of specificity for hierarchically-defined features…
The GOES-R GeoStationary Lightning Mapper (GLM)
NASA Technical Reports Server (NTRS)
Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Mach, Douglas
2011-01-01
The Geostationary Operational Environmental Satellite (GOES-R) is the next series to follow the existing GOES system currently operating over the Western Hemisphere. Superior spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. Advancements over current GOES capabilities include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), and improved capability for the Advanced Baseline Imager (ABI). The GLM will map total lightning activity (in-cloud and cloud-to-ground lightning flashes) continuously day and night with near-uniform spatial resolution of 8 km with a product refresh rate of less than 20 sec over the Americas and adjacent oceanic regions. This will aid in forecasting severe storms and tornado activity, and convective weather impacts on aviation safety and efficiency, among a number of potential applications. In parallel with the instrument development (a prototype and 4 flight models), a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms (environmental data records), cal/val performance monitoring tools, and new applications using GLM alone, in combination with the ABI, merged with ground-based sensors, and decision aids augmented by numerical weather prediction model forecasts. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds are being used to develop the pre-launch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution. An international field campaign planned for 2011-2012 will produce concurrent observations from a VHF lightning mapping array, Meteosat multi-band imagery, Tropical Rainfall Measuring Mission (TRMM) Lightning Imaging Sensor (LIS) overpasses, and related ground and in-situ lightning and meteorological measurements in the vicinity of Sao Paulo. These data will provide a new comprehensive proxy data set for algorithm and application development.
Data-driven approach for creating synthetic electronic medical records.
Buczak, Anna L; Babin, Steven; Moniz, Linda
2010-10-14
New algorithms for disease outbreak detection are being developed to take advantage of full electronic medical records (EMRs) that contain a wealth of patient information. However, due to privacy concerns, even anonymized EMRs cannot be shared among researchers, resulting in great difficulty in comparing the effectiveness of these algorithms. To bridge the gap between novel bio-surveillance algorithms operating on full EMRs and the lack of non-identifiable EMR data, a method for generating complete and synthetic EMRs was developed. This paper describes a novel methodology for generating complete synthetic EMRs both for an outbreak illness of interest (tularemia) and for background records. The method developed has three major steps: 1) synthetic patient identity and basic information generation; 2) identification of care patterns that the synthetic patients would receive based on the information present in real EMR data for similar health problems; 3) adaptation of these care patterns to the synthetic patient population. We generated EMRs, including visit records, clinical activity, laboratory orders/results and radiology orders/results for 203 synthetic tularemia outbreak patients. Validation of the records by a medical expert revealed problems in 19% of the records; these were subsequently corrected. We also generated background EMRs for over 3000 patients in the 4-11 yr age group. Validation of those records by a medical expert revealed problems in fewer than 3% of these background patient EMRs and the errors were subsequently rectified. A data-driven method was developed for generating fully synthetic EMRs. The method is general and can be applied to any data set that has similar data elements (such as laboratory and radiology orders and results, clinical activity, prescription orders). The pilot synthetic outbreak records were for tularemia but our approach may be adapted to other infectious diseases. The pilot synthetic background records were in the 4-11 year old age group. The adaptations that must be made to the algorithms to produce synthetic background EMRs for other age groups are indicated.
Evaluation of FNS control systems: software development and sensor characterization.
Riess, J; Abbas, J J
1997-01-01
Functional Neuromuscular Stimulation (FNS) systems activate paralyzed limbs by electrically stimulating motor neurons. These systems have been used to restore functions such as standing and stepping in people with thoracic level spinal cord injury. Research in our laboratory is directed at the design and evaluation of control algorithms for generating posture and movement. This paper describes software developed for implementing FNS control systems and the characterization of a sensor system used to implement and evaluate controllers in the laboratory. In order to assess FNS control algorithms, we have developed a versatile software package using LabVIEW (National Instruments Corp.). This package provides the ability to interface with sensor systems via serial port or A/D board, implement data processing and real-time control algorithms, and interface with neuromuscular stimulation devices. In our laboratory, we use the Flock of Birds (Ascension Technology Corp.) motion tracking sensor system to monitor limb segment position and orientation (6 degrees of freedom). Errors in the sensor system have been characterized and nonlinear polynomial models have been developed to account for these errors. With this compensation, the error in the distance measurement is reduced by 90% so that the maximum error is less than 1 cm.
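As an illustration of the error-compensation idea (not the paper's actual 6-DOF sensor models), the sketch below fits a polynomial mapping measured to true distance on synthetic calibration data and applies it as a correction.

```python
import numpy as np

# Synthetic calibration run: measured distances contain a quadratic bias
# plus noise (illustrative stand-in for the Flock of Birds errors).
true_d = np.linspace(0.2, 1.5, 14)                     # meters, calibration targets
rng = np.random.default_rng(1)
measured = true_d + 0.05 * true_d**2 + rng.normal(0, 0.002, true_d.size)

# Fit a cubic polynomial mapping measured -> true, then invert the bias.
coeffs = np.polyfit(measured, true_d, deg=3)

def compensate(d_measured):
    return np.polyval(coeffs, d_measured)

print(np.max(np.abs(compensate(measured) - true_d)))   # residual error after fit
```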
Visco, Carlo; Li, Yan; Xu-Monette, Zijun Y.; Miranda, Roberto N.; Green, Tina M.; Li, Yong; Tzankov, Alexander; Wen, Wei; Liu, Wei-min; Kahl, Brad S.; d’Amore, Emanuele S. G.; Montes-Moreno, Santiago; Dybkær, Karen; Chiu, April; Tam, Wayne; Orazi, Attilio; Zu, Youli; Bhagat, Govind; Winter, Jane N.; Wang, Huan-You; O’Neill, Stacey; Dunphy, Cherie H.; Hsi, Eric D.; Zhao, X. Frank; Go, Ronald S.; Choi, William W. L.; Zhou, Fan; Czader, Magdalena; Tong, Jiefeng; Zhao, Xiaoying; van Krieken, J. Han; Huang, Qing; Ai, Weiyun; Etzell, Joan; Ponzoni, Maurilio; Ferreri, Andres J. M.; Piris, Miguel A.; Møller, Michael B.; Bueso-Ramos, Carlos E.; Medeiros, L. Jeffrey; Wu, Lin; Young, Ken H.
2013-01-01
Gene expression profiling (GEP) has stratified diffuse large B-cell lymphoma (DLBCL) into molecular subgroups that correspond to different stages of lymphocyte development, namely germinal center B-cell-like and activated B-cell-like. This classification has prognostic significance, but GEP is expensive and not readily applicable in daily practice, which has led to immunohistochemical algorithms being proposed as a surrogate for GEP analysis. We assembled tissue microarrays from 475 de novo DLBCL patients who were treated with rituximab-CHOP chemotherapy. All cases were successfully profiled by GEP on formalin-fixed, paraffin-embedded tissue samples. Sections were stained with antibodies reactive with CD10, GCET1, FOXP1, MUM1, and BCL6 and cases were classified following a rationale of sequential steps of differentiation of B-cells. Cutoffs for each marker were obtained using receiver operating characteristic curves, obviating the need for any arbitrary method. An algorithm based on the expression of CD10, FOXP1, and BCL6 was developed that had a simpler structure than other recently proposed algorithms and 92.6% concordance with GEP. In multivariate analysis, both the International Prognostic Index and our proposed algorithm were significant independent predictors of progression-free and overall survival. In conclusion, this algorithm effectively predicts prognosis of DLBCL patients, matching GEP subgroups in the era of rituximab therapy. PMID:22437443
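A hypothetical sketch of a sequential three-marker rule of the kind described (CD10, then FOXP1, then BCL6) is given below. The cutoffs are placeholders, not the ROC-derived values from the paper, and the branch logic is an assumption for illustration only.

```python
# Placeholder cutoffs in percent positive tumor cells (assumed, NOT the
# published ROC-derived values).
CUTOFF = {"CD10": 30.0, "FOXP1": 60.0, "BCL6": 30.0}

def classify_dlbcl(pct):
    """pct: dict marker -> % positive tumor cells. Returns 'GCB' or 'ABC'.
    Hypothetical sequential rule: CD10 first, then FOXP1, then BCL6."""
    if pct["CD10"] >= CUTOFF["CD10"]:
        return "GCB"                 # germinal center B-cell-like
    if pct["FOXP1"] >= CUTOFF["FOXP1"]:
        return "ABC"                 # activated B-cell-like
    return "GCB" if pct["BCL6"] >= CUTOFF["BCL6"] else "ABC"

print(classify_dlbcl({"CD10": 5, "FOXP1": 80, "BCL6": 40}))  # -> ABC
```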
Keylogger Application to Monitoring Users Activity with Exact String Matching Algorithm
NASA Astrophysics Data System (ADS)
Rahim, Robbi; Nurdiyanto, Heri; Saleh A, Ansari; Abdullah, Dahlan; Hartama, Dedy; Napitupulu, Darmawan
2018-01-01
Technology is developing very fast, especially Internet technology, which is constantly undergoing significant change. A keylogger is among the most widely developed monitoring tools because such an application is very rarely recognized as a malicious program by antivirus software. A keylogger records all activity related to keystrokes, and here the recording process is accomplished using a string matching method. Applying string matching to keyboard recording helps the administrator know what the user accessed on the computer.
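Since the abstract does not specify which exact-matching variant is used, the sketch below shows a naive scan over a keystroke buffer as a stand-in; the buffer contents and monitored term are illustrative.

```python
# Naive O(n*m) exact string matching over a keystroke buffer: returns the
# start index of every occurrence of the monitored pattern.
def find_all(text, pattern):
    hits, n, m = [], len(text), len(pattern)
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:
            hits.append(i)
    return hits

keystroke_log = "user typed: password1234 then password again"
print(find_all(keystroke_log, "password"))  # -> [12, 30]
```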
Novel Virtual Screening Approach for the Discovery of Human Tyrosinase Inhibitors
Ai, Ni; Welsh, William J.; Santhanam, Uma; Hu, Hong; Lyga, John
2014-01-01
Tyrosinase is the key enzyme involved in the human pigmentation process, as well as the undesired browning of fruits and vegetables. Compounds inhibiting tyrosinase catalytic activity are an important class of cosmetic and dermatological agents which show high potential as depigmentation agents used for skin lightening. The multi-step protocol employed for the identification of novel tyrosinase inhibitors incorporated the Shape Signatures computational algorithm for rapid screening of chemical libraries. This algorithm converts the size and shape of a molecule, as well its surface charge distribution and other bio-relevant properties, into compact histograms (signatures) that lend themselves to rapid comparison between molecules. Shape Signatures excels at scaffold hopping across different chemical families, which enables identification of new actives whose molecular structure is distinct from other known actives. Using this approach, we identified a novel class of depigmentation agents that demonstrated promise for skin lightening product development. PMID:25426625
Active semi-supervised learning method with hybrid deep belief networks.
Zhou, Shusen; Chen, Qingcai; Wang, Xiaolong
2014-01-01
In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the lower hidden layers using restricted Boltzmann machines (RBM), which can quickly reduce the dimension and abstract the information of the reviews. Second, we construct the subsequent hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We ran several experiments on five sentiment classification datasets and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments are also conducted to verify the effectiveness of our proposed method with different numbers of labeled and unlabeled reviews.
Applying active learning to supervised word sense disambiguation in MEDLINE.
Chen, Yukun; Cao, Hongxin; Mei, Qiaozhu; Zheng, Kai; Xu, Hua
2013-01-01
This study was to assess whether active learning strategies can be integrated with supervised word sense disambiguation (WSD) methods, thus reducing the number of annotated samples, while keeping or improving the quality of disambiguation models. We developed support vector machine (SVM) classifiers to disambiguate 197 ambiguous terms and abbreviations in the MSH WSD collection. Three different uncertainty sampling-based active learning algorithms were implemented with the SVM classifiers and were compared with a passive learner (PL) based on random sampling. For each ambiguous term and each learning algorithm, a learning curve that plots the accuracy computed from the test set as a function of the number of annotated samples used in the model was generated. The area under the learning curve (ALC) was used as the primary metric for evaluation. Our experiments demonstrated that active learners (ALs) significantly outperformed the PL, showing better performance for 177 out of 197 (89.8%) WSD tasks. Further analysis showed that to achieve an average accuracy of 90%, the PL needed 38 annotated samples, while the ALs needed only 24, a 37% reduction in annotation effort. Moreover, we analyzed cases where active learning algorithms did not achieve superior performance and identified three causes: (1) poor models in the early learning stage; (2) easy WSD cases; and (3) difficult WSD cases, which provide useful insight for future improvements. This study demonstrated that integrating active learning strategies with supervised WSD methods could effectively reduce annotation cost and improve the disambiguation models.
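A minimal sketch of margin-based uncertainty sampling with an SVM, the general scheme evaluated here (the study's three specific samplers and the MSH WSD features are not reproduced; the data below are synthetic):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic binary classification problem standing in for a WSD task.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Balanced seed set of 10 "annotated" samples; the rest are unlabeled.
idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
labeled = list(idx0[:5]) + list(idx1[:5])
unlabeled = [i for i in range(200) if i not in set(labeled)]

for _ in range(20):                                     # annotation budget
    clf = SVC(kernel="linear").fit(X[labeled], y[labeled])
    margins = np.abs(clf.decision_function(X[unlabeled]))
    query = unlabeled.pop(int(np.argmin(margins)))      # most uncertain sample
    labeled.append(query)                               # oracle supplies y[query]

print(clf.score(X[unlabeled], y[unlabeled]))            # accuracy on the rest
```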
Physical Behavior in Older Persons during Daily Life: Insights from Instrumented Shoes.
Moufawad El Achkar, Christopher; Lenoble-Hoskovec, Constanze; Paraschiv-Ionescu, Anisoara; Major, Kristof; Büla, Christophe; Aminian, Kamiar
2016-08-03
Activity level and gait parameters during daily life are important indicators for clinicians because they can provide critical insights into modifications of mobility and function over time. Wearable activity monitoring has been gaining momentum in daily life health assessment. Consequently, this study seeks to validate an algorithm for the classification of daily life activities and to provide a detailed gait analysis in older adults. A system consisting of an inertial sensor combined with a pressure sensing insole has been developed. Using an algorithm that we previously validated during a semi-structured protocol, activities in 10 healthy elderly participants were recorded and compared to a wearable reference system over a 4 h recording period at home. Detailed gait parameters were calculated from inertial sensors. Dynamics of physical behavior were characterized using barcodes that express the measure of behavioral complexity. Activity classification based on the algorithm led to a 93% accuracy in classifying basic activities of daily life, i.e., sitting, standing, and walking. Gait analysis emphasizes the importance of metrics such as foot clearance in daily life assessment. Results also underline that measures of physical behavior and gait performance are complementary, especially since gait parameters were not correlated to complexity. Participants gave positive feedback regarding the use of the instrumented shoes. These results extend previous observations in showing the concurrent validity of the instrumented shoes compared to a body-worn reference system for daily-life physical behavior monitoring in older adults.
Cabanas-Sánchez, Verónica; Higueras-Fresnillo, Sara; De la Cámara, Miguel Ángel; Veiga, Oscar L; Martinez-Gomez, David
2018-05-16
The aims of the present study were (i) to develop automated algorithms to identify the sleep period time in 24 h data from the Intelligent Device for Energy Expenditure and Activity (IDEEA) in older adults, and (ii) to analyze the agreement between these algorithms to identify the sleep period time as compared to self-reported data and expert visual analysis of accelerometer raw data. This study comprised 50 participants, aged 65-85 years. Fourteen automated algorithms were developed. Participants reported their bedtime and waking time on the days on which they wore the device. A well-trained expert reviewed each IDEEA file in order to visually identify bedtime and waking time on each day. To explore the agreement between methods, Pearson correlations, mean differences, mean percentage errors, accuracy, sensitivity and specificity, and the Bland-Altman method were calculated. With 87 d of valid data, algorithms 6, 7, 11 and 12 achieved higher levels of agreement in determining sleep period time when compared to self-reported data (mean difference = -0.34 to 0.01 h d⁻¹; mean absolute error = 10.66%-11.44%; r = 0.515-0.686; accuracy = 95.0%-95.6%; sensitivity = 93.0%-95.8%; specificity = 95.7%-96.4%) and expert visual analysis (mean difference = -0.04 to 0.31 h d⁻¹; mean absolute error = 5.0%-6.97%; r = 0.620-0.766; accuracy = 97.2%-98.0%; sensitivity = 94.5%-97.6%; specificity = 98.4%-98.8%). Bland-Altman plots showed no systematic biases in these comparisons (all p > 0.05). Differences between methods did not vary significantly by gender, age, obesity, self-rated health, or the presence of chronic conditions. These four algorithms can be used to identify the sleep period time easily and with adequate accuracy using the IDEEA activity monitor from 24 h free-living data in older adults.
Arakelyan, Arsen; Nersisyan, Lilit; Petrek, Martin; Löffler-Wirth, Henry; Binder, Hans
2016-01-01
Lung diseases are described by a wide variety of developmental mechanisms and clinical manifestations. Accurate classification and diagnosis of lung diseases are the bases for development of effective treatments. While extensive studies are conducted toward characterization of various lung diseases at the molecular level, no systematic approach has been developed so far. Here we have applied a methodology for pathway-centered mining of high-throughput gene expression data to describe a wide range of lung diseases in the light of shared and specific pathway activity profiles. We have applied an algorithm combining a Pathway Signal Flow (PSF) algorithm for estimation of pathway activity deregulation states in lung diseases and malignancies, and a Self Organizing Maps algorithm for classification and clustering of the pathway activity profiles. The analysis results made it possible to clearly distinguish between cancer and non-cancer lung diseases. Lung cancers were characterized by pathways implicated in cell proliferation and metabolism, while non-malignant lung diseases were characterized by deregulations in pathways involved in immune/inflammatory response and fibrotic tissue remodeling. In contrast to lung malignancies, chronic lung diseases had relatively heterogeneous pathway deregulation profiles. We identified three groups of interstitial lung diseases and showed that the development of characteristic pathological processes, such as fibrosis, can be initiated by deregulations in different signaling pathways. In conclusion, this paper describes the pathobiology of lung diseases from a systems viewpoint using a pathway-centered high-dimensional data mining approach. Our results contribute largely to the current understanding of pathological events in lung cancers and non-malignant lung diseases. Moreover, this paper provides new insight into the molecular mechanisms of a number of interstitial lung diseases that have been studied to a lesser extent. PMID:27200087
Developing a system for blind acoustic source localization and separation
NASA Astrophysics Data System (ADS)
Kulkarni, Raghavendra
This dissertation presents innovative methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space, and applications of the time reversal (TR) algorithm to pinpoint the hyperactive neural activity inside the brain auditory structure that is correlated with tinnitus pathology. Specifically, an acoustic modeling based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time by using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments where there are non-negligible, unspecified sound reflections and reverberation as well as interferences from random background noise. Another innovation presented in this dissertation concerns applications of the TR algorithm to pinpoint the exact locations of hyperactive neurons in the brain auditory structure that are directly correlated with tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.
Feasibility of the MUSIC Algorithm for the Active Protection System
2001-03-01
Canh Ly, Sensors and Electron Devices Directorate, ARL-MR-501. Approved for public release; distribution unlimited. This report assesses the accuracy of the Doppler frequency of an incoming projectile estimated with the use of the MUSIC (multiple signal classification) algorithm.
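For concreteness, a self-contained sketch of MUSIC frequency estimation on a synthetic Doppler-like return follows; the model order, window length, and noise level are illustrative assumptions, not values from the ARL report.

```python
import numpy as np

# MUSIC: eigendecompose the sample autocorrelation matrix, take the noise
# subspace, and peak-pick the pseudospectrum 1 / ||En^H a(f)||^2.
def music_spectrum(x, order, n_freqs=512, m=20):
    N = len(x) - m + 1
    X = np.column_stack([x[i:i + m] for i in range(N)])   # m x N snapshots
    R = X @ X.conj().T / N                                # autocorrelation matrix
    w, V = np.linalg.eigh(R)                              # ascending eigenvalues
    En = V[:, : m - order]                                # noise subspace
    freqs = np.linspace(0, 0.5, n_freqs)
    a = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))  # steering vectors
    return freqs, 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2

f_true = 0.12                          # true normalized Doppler frequency
n = np.arange(256)
rng = np.random.default_rng(0)
x = (np.exp(2j * np.pi * f_true * n)
     + 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256)))
freqs, p = music_spectrum(x, order=1)
print(freqs[np.argmax(p)])             # peak near 0.12
```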
Wang, Handing; Jin, Yaochu; Doherty, John
2017-09-01
Function evaluations (FEs) of many real-world optimization problems are time or resource consuming, posing a serious challenge to the application of evolutionary algorithms (EAs) to solve these problems. To address this challenge, research on surrogate-assisted EAs has attracted increasing attention from both academia and industry over the past decades. However, most existing surrogate-assisted EAs (SAEAs) either still require thousands of expensive FEs to obtain acceptable solutions, or are only applied to very low-dimensional problems. In this paper, a novel surrogate-assisted particle swarm optimization (PSO) inspired by committee-based active learning (CAL) is proposed. In the proposed algorithm, a global model management strategy inspired by CAL is developed, which searches for the best and most uncertain solutions according to a surrogate ensemble using a PSO algorithm and evaluates these solutions using the expensive objective function. In addition, a local surrogate model is built around the best solution obtained so far. Then, a PSO algorithm searches on the local surrogate to find its optimum and evaluates it. The evolutionary search using the global model management strategy switches to the local search once no further improvement can be observed, and vice versa. This iterative search process continues until the computational budget is exhausted. Experimental results comparing the proposed algorithm with a few state-of-the-art SAEAs on both benchmark problems with up to 30 decision variables as well as an airfoil design problem demonstrate that the proposed algorithm is able to achieve better or competitive solutions with a limited budget of hundreds of exact FEs.
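The global-best PSO serving as the underlying search engine can be sketched in a few lines (shown on a cheap test function; the surrogate ensemble and model management strategy of the paper are not reproduced):

```python
import numpy as np

# Minimal global-best PSO with inertia and cognitive/social terms.
def pso(f, dim=10, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

sol, val = pso(lambda z: np.sum(z ** 2))   # sphere test function
print(val)                                 # near 0
```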
NASA Astrophysics Data System (ADS)
Colliander, A.; Jackson, T. J.; Chan, S.; Bindlish, R.; O'Neill, P. E.; Chazanoff, S. L.; McNairn, H.; Bullock, P.; Powers, J.; Wiseman, G.; Berg, A. A.; Magagi, R.; Njoku, E. G.
2014-12-01
NASA's (National Aeronautics and Space Administration) Soil Moisture Active Passive (SMAP) mission is scheduled for launch in early January 2015. For pre-launch soil moisture algorithm development and validation, the SMAP project and NASA coordinated a SMAP Validation Experiment 2012 (SMAPVEX12) together with Agriculture and Agri-Food Canada in the vicinity of Winnipeg, Canada in June 7-July 19, 2012. Coincident active and passive airborne L-band data were acquired using the Passive Active L-band System (PALS) on 17 days during the experiment. Simultaneously with the PALS measurements, soil moisture ground truth data were collected manually. The vegetation and surface roughness were sampled on non-flight days. The SMAP mission will produce surface (top 5 cm) soil moisture products a) using a combination of its L-band radiometer and SAR (Synthetic Aperture Radar) measurements, b) using the radiometer measurement only, and c) using the SAR measurements only. The SMAPVEX12 data are being utilized for the development and testing of the algorithms applied for generating these soil moisture products. This talk will focus on presenting results of retrieving surface soil moisture using the PALS radiometer. The issues that this retrieval faces are very similar to those faced by the global algorithm using the SMAP radiometer. However, the different spatial resolution of the two observations has to be accounted for in the analysis. The PALS 3 dB footprint in the experiment was on the order of 1 km, whereas the SMAP radiometer has a footprint of about 40 km. In this talk forward modeled brightness temperature over the manually sampled fields and the retrieved soil moisture over the entire experiment domain are presented and discussed. In order to provide a retrieval product similar to that of the SMAP passive algorithm, various ancillary information had to be obtained for the SMAPVEX12 domain. In many cases there are multiple options on how to choose and reprocess these data. The derivation of these data elements and their impact on the retrieval and the spatial scales of the different observations are also discussed. In particular, land cover and soil type heterogeneity have a dramatic impact on parameterization of the algorithm when going from finer to coarser spatial resolutions.
NASA Astrophysics Data System (ADS)
Merrill, S.; Horowitz, J.; Traino, A. C.; Chipkin, S. R.; Hollot, C. V.; Chait, Y.
2011-02-01
Calculation of the therapeutic activity of radioiodine 131I for individualized dosimetry in the treatment of Graves' disease requires an accurate estimate of the thyroid absorbed radiation dose based on a tracer activity administration of 131I. Common approaches (Marinelli-Quimby formula, MIRD algorithm) use, respectively, the effective half-life of radioiodine in the thyroid and the time-integrated activity. Many physicians perform one, two, or at most three tracer dose activity measurements at various times and calculate the required therapeutic activity by ad hoc methods. In this paper, we study the accuracy of estimates of four 'target variables': time-integrated activity coefficient, time of maximum activity, maximum activity, and effective half-life in the gland. Clinical data from 41 patients who underwent 131I therapy for Graves' disease at the University Hospital in Pisa, Italy, are used for analysis. The radioiodine kinetics are described using a nonlinear mixed-effects model. The distributions of the target variables in the patient population are characterized. Using minimum root mean squared error as the criterion, optimal 1-, 2-, and 3-point sampling schedules are determined for estimation of the target variables, and probabilistic bounds are given for the errors under the optimal times. An algorithm is developed for computing the optimal 1-, 2-, and 3-point sampling schedules for the target variables. This algorithm is implemented in a freely available software tool. Taking into consideration 131I effective half-life in the thyroid and measurement noise, the optimal 1-point time for time-integrated activity coefficient is a measurement 1 week following the tracer dose. Additional measurements give only a slight improvement in accuracy.
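As a simplified illustration of the target variables (not the paper's nonlinear mixed-effects population model), the sketch below fits a mono-exponential uptake/washout curve to tracer activity measurements and derives the effective half-life and time-integrated activity coefficient analytically; the model form and data are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed two-rate kinetic model: uptake with rate k_up, washout with k_eff.
def model(t, a, k_up, k_eff):
    return a * (np.exp(-k_eff * t) - np.exp(-k_up * t))

t = np.array([6., 24., 48., 96., 168.])           # hours post tracer dose
act = np.array([0.18, 0.30, 0.28, 0.22, 0.13])     # fraction of administered activity

(a, k_up, k_eff), _ = curve_fit(model, t, act, p0=(0.5, 0.2, 0.005))
half_life_eff = np.log(2) / k_eff                  # effective half-life, hours
tiac = a * (1 / k_eff - 1 / k_up)                  # integral of model over [0, inf)
print(half_life_eff / 24, tiac)                    # in days; in hours
```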
Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch
2017-06-06
An important aspect of chemoinformatics and material-informatics is the usage of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets from noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate for the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
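A minimal sketch of the regression use of RANSAC with scikit-learn's RANSACRegressor follows; the synthetic descriptors and injected outliers stand in for the metal-oxide library data, and the descriptor-selection and applicability-domain steps described above are not reproduced.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                 # 4 hypothetical descriptors
y = X @ np.array([1.5, -2.0, 0.3, 0.0]) + rng.normal(0, 0.1, 100)
y[:10] += 8.0                                 # inject 10 outlier samples

# RANSAC repeatedly fits on random subsets and keeps the consensus set;
# the default base estimator is ordinary linear regression.
ransac = RANSACRegressor(residual_threshold=1.0, random_state=0).fit(X, y)
print(ransac.inlier_mask_.sum())              # ~90: outliers flagged out
print(ransac.estimator_.coef_)                # coefficients fit on inliers only
```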
Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng
2015-01-01
Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and the segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method combining thresholding and an edge-based active contour method is proposed to optimize cell boundary detection. In order to segment clustered cells, the geographic peaks of cell light intensity are utilized to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of parameters in cell boundary detection and of the selection of the threshold value on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments, and its performance is evaluated. Results show that the proposed method achieves optimized cell boundary detection and highly accurate segmentation of clustered cells. PMID:26066315
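The two-stage idea (global thresholding, then intensity peaks used as markers to split clustered cells) can be sketched with off-the-shelf tools. scikit-image is a stand-in here, a bundled sample image replaces a phase-contrast micrograph, and the paper's edge-based active contour refinement is omitted; parameters are arbitrary.

```python
# Rough sketch: Otsu threshold + intensity-peak markers + watershed splitting.
import numpy as np
from scipy import ndimage as ndi
from skimage import data, filters, feature, segmentation, measure

image = data.coins()                          # placeholder for a cell image
mask = image > filters.threshold_otsu(image)  # coarse object/background split

# Smoothed intensity peaks play the role of the "geographic peaks" markers
smooth = ndi.gaussian_filter(image.astype(float), sigma=3)
peaks = feature.peak_local_max(smooth, min_distance=15, labels=mask.astype(int))
markers = np.zeros(image.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

labels = segmentation.watershed(-smooth, markers, mask=mask)
print("objects found:", len(measure.regionprops(labels)))
```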
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rautman, Christopher Arthur; Stein, Joshua S.
2003-01-01
Existing paper-based site characterization models of salt domes at the four active U.S. Strategic Petroleum Reserve sites have been converted to digital format and visualized using modern computer software. The four sites are the Bayou Choctaw dome in Iberville Parish, Louisiana; the Big Hill dome in Jefferson County, Texas; the Bryan Mound dome in Brazoria County, Texas; and the West Hackberry dome in Cameron Parish, Louisiana. A new modeling algorithm has been developed to overcome limitations of many standard geological modeling software packages in order to deal with structurally overhanging salt margins that are typical of many salt domes. This algorithm, and the implementing computer program, make use of the existing interpretive modeling conducted manually using professional geological judgement and presented in two dimensions in the original site characterization reports as structure contour maps on the top of salt. The algorithm makes use of concepts of finite-element meshes of general engineering usage. Although the specific implementation of the algorithm described in this report and the resulting output files are tailored to the modeling and visualization software used to construct the figures contained herein, the algorithm itself is generic, and other implementations and output formats are possible. The graphical visualizations of the salt domes at the four Strategic Petroleum Reserve sites are believed to be major improvements over the previously available two-dimensional representations of the domes via conventional geologic drawings (cross sections and contour maps). Additionally, the numerical mesh files produced by this modeling activity are available for import into and display by other software routines. The mesh data are not explicitly tabulated in this report; however, an electronic version in simple ASCII format is included on a PC-based compact disk.
Phase 1 Development Report for the SESSA Toolkit
2014-09-01
data acquisition, data management, and data analysis. SESSA was designed to meet forensic crime scene needs as defined by the DoD's Military Criminal... on the design, functional attributes, algorithm development, system architecture, and software programming include: Robert Knowlton, Brad Melton... Building Restoration Operations Optimization Model (BROOM). BROOM (Knowlton et al., 2012) was designed for consequence management activities (e.g.
VIIRS validation and algorithm development efforts in coastal and inland Waters
NASA Astrophysics Data System (ADS)
Stengel, E.; Ondrusek, M.
2016-02-01
Accurate satellite ocean color measurements in coastal and inland waters are more challenging than open-ocean measurements. Complex water and atmospheric conditions can limit the utilization of remote sensing data in coastal waters, where it is most needed. The Coastal Optical Characterization Experiment (COCE) is an ongoing project at the NOAA/NESDIS/STAR Satellite Oceanography and Climatology Division. The primary goals of COCE are satellite ocean color validation and application development. Currently, this effort concentrates on the initialization and validation of the Joint Polar Satellite System (JPSS) VIIRS sensor using a Satlantic HyperPro II radiometer as a validation tool. A report on VIIRS performance in coastal waters will be given, presenting comparisons between in situ ground truth measurements and VIIRS retrievals made in the Chesapeake Bay and in inland waters of the Gulf of Mexico and Puerto Rico. The COCE application development effort focuses on developing new ocean color satellite remote sensing tools for monitoring relevant coastal ocean parameters. A new VIIRS total suspended matter algorithm will be presented for the Chesapeake Bay. These activities improve the utility of ocean color satellite data in monitoring and analyzing coastal and oceanic processes; progress on them will be reported.
Liu, Qi; Yang, Yu; Chen, Chun; Bu, Jiajun; Zhang, Yin; Ye, Xiuzi
2008-01-01
Background With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them are suitable for the compression of RNA sequences with their secondary structures simultaneously. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA based on compression. Results RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are two fold: (1) present a robust and effective way for RNA structural data compression; (2) design a suitable model to represent RNA secondary structure as well as derive the informational complexity of the structural data based on compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio compared with other sequence-specific or common text-specific compression algorithms, such as Gencompress, winrar and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers) compared with their structural complexity shows that our defined informational complexity can be used to describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. Conclusion A universal algorithm for the compression of RNA secondary structure as well as the evaluation of its informational complexity is discussed in this paper. We have developed RNACompress, as a useful tool for academic users. Extensive tests have shown that RNACompress is a universally efficient algorithm for the compression of RNA sequences with their secondary structures. RNACompress also serves as a good measurement of the informational complexity of RNA secondary structure, which can be used to study the functional activities of RNA molecules. PMID:18373878
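The compression-based complexity measure described above is easy to illustrate. RNACompress itself uses a grammar-based model; in the sketch below a general-purpose compressor (zlib) merely stands in, scoring a sequence plus its dot-bracket structure by bits per character after compression. The example molecules are fabricated.

```python
# Compression-based "informational complexity" sketch (zlib as a stand-in).
import zlib

def complexity(sequence: str, structure: str) -> float:
    """Bits per character after jointly compressing sequence and structure."""
    raw = (sequence + "\n" + structure).encode()
    return 8.0 * len(zlib.compress(raw, level=9)) / len(raw)

regular = complexity("GC" * 30, "((" + "." * 56 + "))")
irregular = complexity(
    "GCAUGGCUAACGGAUCCGAUAGCUGAUCGAUCGGCAUCGAUGGCAUCGAAUCGCAUGGCA",
    "..((((...))))..((((......))))...(((...)))...((((....))))....")
print(f"regular: {regular:.2f} bits/char, irregular: {irregular:.2f} bits/char")
```

A highly repetitive sequence/structure pair compresses to fewer bits per character, which is the sense in which complexity can be related to functional activity in the abstract.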
Gunay, Osman; Toreyin, Behçet Ugur; Kose, Kivanc; Cetin, A Enis
2012-05-01
In this paper, an entropy-functional-based online adaptive decision fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several subalgorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular subalgorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing entropic projections onto convex sets describing subalgorithms. It is assumed that there is an oracle, who is usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system was developed to evaluate the performance of the decision fusion algorithm. In this case, image data arrive sequentially, and the oracle is the security guard of the forest lookout tower, verifying the decision of the combined algorithm. The simulation results are presented.
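A toy version of the fusion loop helps fix ideas: subalgorithm confidences in [-1, 1] are linearly combined, and the weights are nudged toward agreement with oracle feedback. The exact entropic-projection update of the paper is replaced here by a simple normalized gradient step, so this is an illustration of the framework, not the authors' update rule.

```python
# Toy online adaptive decision fusion with oracle feedback.
import numpy as np

rng = np.random.default_rng(1)
n_sub, mu = 4, 0.1
w = np.full(n_sub, 1.0 / n_sub)            # start from uniform weights

for step in range(1000):
    truth = rng.choice([-1.0, 1.0])        # oracle's ground-truth decision
    # subalgorithm 0 is reliable, the rest progressively noisier
    d = truth * np.array([0.9, 0.6, 0.3, 0.0]) + rng.normal(0, 0.3, n_sub)
    y = w @ d                              # fused decision value
    e = truth - y                          # oracle feedback error
    w += mu * e * d / (d @ d + 1e-9)       # normalized LMS-style update
print("learned weights:", w.round(2))      # reliable subalgorithms dominate
```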
Actigraphy features for predicting mobility disability in older adults
USDA-ARS?s Scientific Manuscript database
Actigraphy has attracted much attention for assessing physical activity in the past decade. Many algorithms have been developed to automate the analysis process, but none has targeted a general model to discover related features for detecting or predicting mobility function, or more specifically, mo...
Simple, Scalable, Script-based, Science Processor for Measurements - Data Mining Edition (S4PM-DME)
NASA Astrophysics Data System (ADS)
Pham, L. B.; Eng, E. K.; Lynnes, C. S.; Berrick, S. W.; Vollmer, B. E.
2005-12-01
The S4PM-DME is the Goddard Earth Sciences Distributed Active Archive Center's (GES DAAC) web-based data mining environment. The S4PM-DME replaces the Near-line Archive Data Mining (NADM) system with a better web environment and a richer set of production rules. S4PM-DME enables registered users to submit and execute custom data mining algorithms. The S4PM-DME system uses the GES DAAC-developed Simple Scalable Script-based Science Processor for Measurements (S4PM) to automate tasks and perform the actual data processing. A web interface allows the user to access the S4PM-DME system. The user first develops a personalized data mining algorithm on his/her home platform and then uploads it to the S4PM-DME system. Algorithms in the C and FORTRAN languages are currently supported. The user-developed algorithm is automatically audited for any potential security problems before it is installed within the S4PM-DME system and made available to the user. Once the algorithm has been installed, the user can promote it to the "operational" environment. From here the user can search and order the data available in the GES DAAC archive for his/her science algorithm. The user can also set up a processing subscription, which will automatically process new data as they become available in the GES DAAC archive. The generated mined data products are then made available for FTP pickup. The benefits of using S4PM-DME are 1) to decrease the time it typically takes a user to transfer GES DAAC data to his/her system, thus off-loading heavy network traffic, 2) to free up the load on the user's system, and 3) to utilize the rich and abundant ocean and atmosphere data from the MODIS and AIRS instruments available from the GES DAAC.
USDA-ARS?s Scientific Manuscript database
Active-optical reflectance sensors (AORS) use corn (Zea mays L.) plant tissue as a bioassay of crop N status to determine future N requirements. However, studies have shown AORS algorithms used for making N fertilizer recommendations are not consistently accurate. Thus, AORS algorithm improvements s...
Hyperspace geography: visualizing fitness landscapes beyond 4D.
Wiles, Janet; Tonkes, Bradley
2006-01-01
Human perception is finely tuned to extract structure about the 4D world of time and space as well as properties such as color and texture. Developing intuitions about spatial structure beyond 4D requires exploiting other perceptual and cognitive abilities. One of the most natural ways to explore complex spaces is for a user to actively navigate through them, using local explorations and global summaries to develop intuitions about structure, and then testing the developing ideas by further exploration. This article provides a brief overview of a technique for visualizing surfaces defined over moderate-dimensional binary spaces, by recursively unfolding them onto a 2D hypergraph. We briefly summarize the uses of a freely available Web-based visualization tool, Hyperspace Graph Paper (HSGP), for exploring fitness landscapes and search algorithms in evolutionary computation. HSGP provides a way for a user to actively explore a landscape, from simple tasks such as mapping the neighborhood structure of different points, to seeing global properties such as the size and distribution of basins of attraction or how different search algorithms interact with landscape structure. It has been most useful for exploring recursive and repetitive landscapes, and its strength is that it allows intuitions to be developed through active navigation by the user, and exploits the visual system's ability to detect pattern and texture. The technique is most effective when applied to continuous functions over Boolean variables using 4 to 16 dimensions.
Onboard Science and Applications Algorithm for Hyperspectral Data Reduction
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Davies, Ashley G.; Silverman, Dorothy; Mandl, Daniel
2012-01-01
An onboard processing mission concept is under development for a possible Direct Broadcast capability for the HyspIRI mission, a Hyperspectral remote sensing mission under consideration for launch in the next decade. The concept would intelligently spectrally and spatially subsample the data as well as generate science products onboard to enable return of key rapid response science and applications information despite limited downlink bandwidth. This rapid data delivery concept focuses on wildfires and volcanoes as primary applications, but also has applications to vegetation, coastal flooding, dust, and snow/ice applications. Operationally, the HyspIRI team would define a set of spatial regions of interest where specific algorithms would be executed. For example, known coastal areas would have certain products or bands downlinked, ocean areas might have other bands downlinked, and during fire seasons other areas would be processed for active fire detections. Ground operations would automatically generate the mission plans specifying the highest priority tasks executable within onboard computation, setup, and data downlink constraints. The spectral bands of the TIR (thermal infrared) instrument can accurately detect the thermal signature of fires and send down alerts, as well as the thermal and VSWIR (visible to short-wave infrared) data corresponding to the active fires. Active volcanism also produces a distinctive thermal signature that can be detected onboard to enable spatial subsampling. Onboard algorithms and ground-based algorithms suitable for onboard deployment are mature. On HyspIRI, the algorithm would perform a table-driven temperature inversion from several spectral TIR bands, and then trigger downlink of the entire spectrum for each of the hot pixels identified. Ocean and coastal applications include sea surface temperature (using a small spectral subset of TIR data, but requiring considerable ancillary data), and ocean color applications to track biological activity such as harmful algal blooms. Measuring surface water extent to track flooding is another rapid response product leveraging VSWIR spectral information.
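The triggering logic described above (threshold a thermal band, then downlink full spectra only for flagged pixels) is straightforward to picture in code. Everything in the sketch below is invented: the synthetic data cube, the band index, and the 320 K threshold; the actual HyspIRI concept uses a table-driven temperature inversion over several TIR bands.

```python
# Sketch of onboard hot-pixel triggering and spectral subsetting.
import numpy as np

rng = np.random.default_rng(2)
cube = rng.normal(290.0, 3.0, size=(64, 64, 8))  # (rows, cols, TIR bands), K
cube[10:12, 40:42, :] += 60.0                    # synthetic active-fire pixels

TRIGGER_BAND, THRESHOLD_K = 5, 320.0             # invented settings
hot = cube[:, :, TRIGGER_BAND] > THRESHOLD_K     # simplified trigger test
rows, cols = np.nonzero(hot)
downlink = cube[rows, cols, :]                   # full spectra of hot pixels only
print(f"{len(rows)} hot pixels; downlink {downlink.nbytes} of {cube.nbytes} bytes")
```

The payoff is visible in the byte counts: only the flagged pixels' spectra need to fit in the limited downlink budget.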
Learning to Control Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Subramanian, Devika
2004-01-01
Advanced life support systems have many interacting processes and limited resources. Controlling and optimizing advanced life support systems presents unique challenges. In particular, advanced life support systems are nonlinear coupled dynamical systems, and it is difficult for humans to take all interactions into account to design an effective control strategy. In this project, we developed several reinforcement learning controllers that actively explore the space of possible control strategies, guided by rewards from a user-specified long-term objective function. We evaluated these controllers using a discrete event simulation of an advanced life support system. This simulation, called BioSim, designed by NASA scientists David Kortenkamp and Scott Bell, has multiple interacting life support modules including crew, food production, air revitalization, water recovery, solid waste incineration, and power. They are implemented in a consumer/producer relationship in which certain modules produce resources that are consumed by other modules. Stores hold resources between modules. Control of this simulation is via adjusting flows of resources between modules and into/out of stores. We developed adaptive algorithms that control the flow of resources in BioSim. Our learning algorithms discovered several ingenious strategies for maximizing mission length by controlling the air and water recycling systems as well as crop planting schedules. By exploiting non-linearities in the overall system dynamics, the learned controllers easily outperformed controllers written by human experts. In sum, we accomplished three goals. We (1) developed foundations for learning models of coupled dynamical systems by active exploration of the state space, (2) developed and tested algorithms that learn to efficiently control air and water recycling processes as well as crop scheduling in BioSim, and (3) developed an understanding of the role of machine learning in designing control systems for advanced life support.
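As a reading aid, here is the explore/reward/update cycle the abstract describes, reduced to tabular Q-learning over a single invented resource store. BioSim itself is far richer; the one-store "leaky tank with a pump" environment below exists only to show the mechanics.

```python
# Toy Q-learning controller for a single resource store.
import numpy as np

rng = np.random.default_rng(3)
n_levels, actions = 11, [0, 1, 2]          # store fill levels, pump settings
Q = np.zeros((n_levels, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(level, pump):
    level = max(0, min(n_levels - 1, level + pump - 1))   # pump in, crew draws 1
    reward = 1.0 if 3 <= level <= 7 else -1.0             # keep store mid-range
    return level, reward

level = 5
for t in range(20000):
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[level].argmax())
    nxt, r = step(level, actions[a])
    Q[level, a] += alpha * (r + gamma * Q[nxt].max() - Q[level, a])
    level = nxt
print("greedy pump setting per level:", Q.argmax(axis=1))
```

The learned policy pumps hard at low levels and idles at high levels, the same kind of flow-adjustment strategy the project's controllers discovered at much larger scale.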
Application of the SP algorithm to the INTERMAGNET magnetograms of the disturbed geomagnetic field
NASA Astrophysics Data System (ADS)
Sidorov, R. V.; Soloviev, A. A.; Bogoutdinov, Sh. R.
2012-05-01
The algorithmic system developed in the Laboratory of Geoinformatics at the Geophysical Center, Russian Academy of Sciences, which is intended for recognizing spikes on magnetograms from the global INTERMAGNET network, makes it possible to carry out retrospective analysis of magnetograms from the World Data Centers. Application of this system allows automation of the experts' work of identifying artificial spikes in INTERMAGNET data. The present paper is focused on the SP algorithm (abbreviated from SPIKE), which recognizes artificial spikes in records of the geomagnetic field. Initially, this algorithm was trained on magnetograms of 2007 and 2008, which recorded a quiet geomagnetic field. The results of training and testing showed that the algorithm is quite efficient. Applying this method to the problem of recognizing spikes in data for periods of enhanced geomagnetic activity is a separate task. In this short communication, we present the results of applying the SP algorithm trained on the data of 2007 to INTERMAGNET magnetograms for 2003 and 2005 sampled every minute. This analysis shows that the performance of the SP algorithm does not degrade when it is applied to records of a disturbed geomagnetic field.
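The abstract does not spell out the SP algorithm's decision rules, so the snippet below only illustrates the task: flag samples on a 1-minute magnetogram whose deviation from a running median exceeds a robust (MAD-based) threshold. The synthetic daily variation, noise level, and spike amplitudes are invented.

```python
# Stand-in spike detector for 1-minute magnetograms (not the SP algorithm).
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(4)
t = np.arange(1440)                                  # one day, 1-min sampling
field = 48500 + 40 * np.sin(2 * np.pi * t / 1440) + rng.normal(0, 2, t.size)
field[[300, 301, 900]] += [150, -120, 200]           # artificial spikes

residual = field - medfilt(field, kernel_size=11)    # remove slow variation
mad = np.median(np.abs(residual - np.median(residual)))
spikes = np.nonzero(np.abs(residual) > 8 * 1.4826 * mad)[0]
print("spike indices:", spikes)
```

Because the threshold is set from the residual's robust scale rather than from the raw field, the same detector tolerates the larger smooth excursions of a disturbed-field day, which is the property the paper tests for.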
Palmisano, Pietro; Ziacchi, Matteo; Biffi, Mauro; Ricci, Renato P; Landolina, Maurizio; Zoni-Berisso, Massimo; Occhetta, Eraldo; Maglia, Giampiero; Botto, Gianluca; Padeletti, Luigi; Boriani, Giuseppe
2018-04-01
The purpose of this two-part consensus document is to provide specific suggestions (based on an extensive literature review) on appropriate pacemaker settings in relation to patients' clinical features. In part 2, criteria for pacemaker choice and programming in atrioventricular blocks and neurally mediated syncope are proposed. Atrioventricular blocks can be paroxysmal or persistent, isolated or associated with sinus node disease. Neurally mediated syncope can be related to carotid sinus syndrome or cardioinhibitory vasovagal syncope. In sinus rhythm with persistent atrioventricular block, we consider it appropriate to activate mode-switch algorithms and algorithms for auto-adaptive management of the ventricular pacing output. If the atrioventricular block is paroxysmal, in addition to the algorithms mentioned above, algorithms to maximize intrinsic atrioventricular conduction should be activated. When sinus node disease is associated with atrioventricular block, activation of the rate-responsive function is appropriate in patients with chronotropic incompetence. In permanent atrial fibrillation with atrioventricular block, algorithms for auto-adaptive management of the ventricular pacing output should be activated; if the atrioventricular block is persistent, activation of the rate-responsive function is appropriate. In carotid sinus syndrome, adequate rate hysteresis should be programmed. In vasovagal syncope, specialized sensing and pacing algorithms designed for reflex syncope prevention should be activated.
NASA Astrophysics Data System (ADS)
Lee, Donghoon; Choi, Sunghoon; Kim, Hee-Joung
2018-03-01
When processing medical images, image denoising is an important pre-processing step. Various image denoising algorithms have been developed in the past few decades. Recently, image denoising using deep learning methods has shown excellent performance compared to conventional image denoising algorithms. In this study, we introduce an image denoising technique based on a convolutional denoising autoencoder (CDAE) and evaluate clinical applications by comparison with existing image denoising algorithms. We train the proposed CDAE model using 3000 chest radiograms as training data. To evaluate the performance of the developed CDAE model, we compare it with conventional denoising algorithms including the median filter, total variation (TV) minimization, and non-local means (NLM) algorithms. Furthermore, to verify the clinical effectiveness of the developed denoising model with CDAE, we investigate the performance of the developed denoising algorithm on chest radiograms acquired from real patients. The results demonstrate that the proposed denoising algorithm developed using CDAE achieves a superior noise-reduction effect in chest radiograms compared to the TV minimization and NLM algorithms, which are state-of-the-art algorithms for image noise reduction. For example, the peak signal-to-noise ratio and structural similarity index measure of CDAE were at least 10% higher than those of the conventional denoising algorithms. In conclusion, the image denoising algorithm developed using CDAE effectively eliminated noise without loss of information on anatomical structures in chest radiograms. It is expected that the proposed denoising algorithm developed using CDAE will be effective for medical images with microscopic anatomical structures, such as terminal bronchioles.
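The training idea (reconstruct a clean image from its noisy version) fits in a few lines of PyTorch. The architecture below is invented for illustration and random tensors stand in for chest radiograms; it mirrors the concept, not the authors' network.

```python
# Minimal convolutional denoising autoencoder sketch in PyTorch.
import torch
import torch.nn as nn

class CDAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model, loss_fn = CDAE(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(8, 1, 64, 64)            # placeholder training batch
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)     # reconstruct clean from noisy
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```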
Zhang, Fan; Liu, Ming; Harper, Stephen; Lee, Michael; Huang, He
2014-07-22
To enable intuitive operation of powered artificial legs, an interface between user and prosthesis that can recognize the user's movement intent is desired. A novel neural-machine interface (NMI) based on neuromuscular-mechanical fusion developed in our previous study has demonstrated a great potential to accurately identify the intended movement of transfemoral amputees. However, this interface has not yet been integrated with a powered prosthetic leg for true neural control. This study aimed to report (1) a flexible platform to implement and optimize neural control of powered lower limb prosthesis and (2) an experimental setup and protocol to evaluate neural prosthesis control on patients with lower limb amputations. First a platform based on a PC and a visual programming environment were developed to implement the prosthesis control algorithms, including NMI training algorithm, NMI online testing algorithm, and intrinsic control algorithm. To demonstrate the function of this platform, in this study the NMI based on neuromuscular-mechanical fusion was hierarchically integrated with intrinsic control of a prototypical transfemoral prosthesis. One patient with a unilateral transfemoral amputation was recruited to evaluate our implemented neural controller when performing activities, such as standing, level-ground walking, ramp ascent, and ramp descent continuously in the laboratory. A novel experimental setup and protocol were developed in order to test the new prosthesis control safely and efficiently. The presented proof-of-concept platform and experimental setup and protocol could aid the future development and application of neurally-controlled powered artificial legs.
Supersonic reacting internal flow fields
NASA Technical Reports Server (NTRS)
Drummond, J. Philip
1989-01-01
The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flow fields, and the application of these techniques to an increasingly difficult set of combustion problems, are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion are then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.
A New Algorithm with Plane Waves and Wavelets for Random Velocity Fields with Many Spatial Scales
NASA Astrophysics Data System (ADS)
Elliott, Frank W.; Majda, Andrew J.
1995-03-01
A new Monte Carlo algorithm for constructing and sampling stationary isotropic Gaussian random fields with power-law energy spectrum, infrared divergence, and fractal self-similar scaling is developed here. The theoretical basis for this algorithm involves the fact that such a random field is well approximated by a superposition of random one-dimensional plane waves involving a fixed finite number of directions. In general each one-dimensional plane wave is the sum of a random shear layer and a random acoustical wave. These one-dimensional random plane waves are then simulated by a wavelet Monte Carlo method for a single space variable developed recently by the authors. The computational results reported in this paper demonstrate remarkably low variance and economical representation of such Gaussian random fields through this new algorithm. In particular, the velocity structure function for an incompressible isotropic Gaussian random field in two space dimensions with the Kolmogoroff spectrum can be simulated accurately over 12 decades with only 100 realizations of the algorithm, with the scaling exponent accurate to 1.1% and the constant prefactor accurate to 6%; in fact, the exponent of the velocity structure function can be computed over 12 decades to within 3.3% with only 10 realizations. Furthermore, only 46,592 active computational elements are utilized in each realization to achieve these results for 12 decades of scaling behavior.
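The plane-wave superposition idea is simple to demonstrate: build a 2D random field as a sum of 1D random waves along a fixed, finite set of directions with power-law amplitudes. In the sketch below, direct Fourier-mode sampling along each direction replaces the paper's wavelet Monte Carlo machinery, and the mode counts and normalization are arbitrary.

```python
# Sketch: 2D Gaussian-like random field from a finite set of 1D plane waves.
import numpy as np

rng = np.random.default_rng(5)
n, n_dir, n_modes, alpha = 128, 12, 64, 5.0 / 3.0   # Kolmogoroff-like exponent
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x)
field = np.zeros((n, n))

for theta in np.pi * np.arange(n_dir) / n_dir:       # fixed finite directions
    s = X * np.cos(theta) + Y * np.sin(theta)        # 1D coordinate along ray
    for m in range(1, n_modes + 1):
        amp = rng.normal() * m ** (-(alpha + 1) / 2)  # power-law amplitudes
        phase = rng.uniform(0, 2 * np.pi)
        field += amp * np.cos(2 * np.pi * m * s + phase)

field /= np.sqrt(n_dir)
print(f"field std = {field.std():.2f}")
```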
Active impulsive noise control using maximum correntropy with adaptive kernel size
NASA Astrophysics Data System (ADS)
Lu, Lu; Zhao, Haiquan
2017-03-01
Active noise control (ANC) based on the principle of superposition is an attractive method to attenuate noise signals. However, impulsive noise in ANC systems degrades the performance of the controller. In this paper, a filtered-x recursive maximum correntropy (FxRMC) algorithm is proposed based on the maximum correntropy criterion (MCC) to reduce the effect of outliers. The proposed FxRMC algorithm does not require any a priori information about the noise characteristics and outperforms the filtered-x least mean square (FxLMS) algorithm for impulsive noise. Meanwhile, in order to adjust the kernel size of the FxRMC algorithm online, a recursive approach is proposed that takes into account the past estimates of error signals over a sliding window. Simulation and experimental results in the context of active impulsive noise control demonstrate that the proposed algorithms achieve much better performance than existing algorithms in various noise environments.
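The robustness mechanism is easy to see in a simplified filtered-x update: a Gaussian kernel on the error shrinks the adaptation step whenever an impulsive outlier arrives. The sketch below shows that MCC-weighted step only; the paper's exact recursive (RMC) form and online kernel-size adaptation are not reproduced, and the primary/secondary paths and kernel width are invented.

```python
# MCC-weighted filtered-x adaptive filter sketch (simplified, not FxRMC itself).
import numpy as np

rng = np.random.default_rng(6)
N, L = 20000, 16
w = np.zeros(L)                         # adaptive control filter
s = np.array([1.0, 0.5, 0.25])          # assumed known secondary path
x = rng.normal(size=N)                  # reference noise signal
d = np.convolve(x, [0.8, -0.4, 0.2])[:N]                 # noise at error mic
d += (rng.random(N) < 0.01) * rng.normal(0, 20, N)       # impulsive outliers

mu, sigma = 0.01, 2.0
xf = np.convolve(x, s)[:N]              # reference filtered by secondary path
e_hist = np.zeros(N)
for n in range(L, N):
    xfv = xf[n - L:n][::-1]
    e = d[n] - np.dot(w, xfv)           # residual after anti-noise
    w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * xfv   # MCC-weighted step
    e_hist[n] = e
print("residual std, first vs last 1000 samples:",
      e_hist[L:L + 1000].std().round(2), e_hist[-1000:].std().round(2))
```

With a plain LMS step (drop the exponential factor) the large bursts would repeatedly kick the weights away from the solution; the kernel keeps those samples from dominating.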
Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.
2012-01-01
We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.
Retrieving the properties of ice-phase precipitation with multi-frequency radar measurements
NASA Astrophysics Data System (ADS)
Mace, G. G.; Gergely, M.; Mascio, J.
2017-12-01
The objective of most retrieval algorithms applied to remote sensing measurements is the microphysical properties that a model might predict, such as condensed water content, particle number, or effective size. However, because ice crystals grow and aggregate into complex nonspherical shapes, the microphysical properties of interest are very much dependent on the physical characteristics of the precipitation, such as how mass and crystal area are distributed as a function of particle size. Such physical properties also have a strong influence on how microwave electromagnetic energy scatters from ice crystals, causing significant ambiguity in retrieval algorithms. In fact, passive and active microwave remote sensing measurements are typically nearly as sensitive to the ice crystal physical properties as they are to the microphysical characteristics that are typically the aim of the retrieval algorithm. There has, however, been active development of multi-frequency algorithms recently that attempt to ameliorate and even exploit this sensitivity. In this paper, we review these approaches and present practical applications of retrieving ice crystal properties, such as mass- and area-dimensional relationships, from single- and dual-frequency radar measurements of precipitating ice, using data collected aboard ship in the Southern Ocean and from remote sensors in the Rocky Mountains of the western U.S.
Parra-Ruiz, Jorge; Ramos, V; Dueñas, C; Coronado-Álvarez, N M; Cabo-Magadán, R; Portillo-Tuñón, V; Vinuesa, D; Muñoz-Medina, L; Hernández-Quero, J
2015-10-01
Tuberculous meningitis (TBM) is one of the most serious and difficult to diagnose manifestations of TB. An ADA value >9.5 IU/L has great sensitivity and specificity. However, all available studies have been conducted in areas of high endemicity, so we sought to determine the accuracy of ADA in a low-endemicity area. This retrospective study included 190 patients (105 men) who had ADA tested in CSF for some reason. Patients were classified as probable/certain TBM or non-TBM based on clinical and Thwaites' criteria. The optimal ADA cutoff was established by ROC curves, and a predictive algorithm based on ADA and other CSF biochemical parameters was generated. Eleven patients were classified as probable/certain TBM. In a low-endemicity area, the best ADA cutoff was 11.5 IU/L, with 91% sensitivity and 77.7% specificity. We also developed a predictive algorithm based on the combination of ADA (>11.5 IU/L), glucose (<65 mg/dL), and leukocytes (≥13.5 cells/mm³) with increased accuracy (Se: 91%, Sp: 88%). The optimal ADA cutoff value in areas of low TB endemicity is higher than previously reported. Our algorithm is more accurate than ADA activity alone, with better sensitivity and specificity than previously reported algorithms.
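The combined rule is a literal conjunction of the three reported cutoffs, transcribed below as a reading aid. The function and parameter names are invented, and this is of course not a validated clinical tool.

```python
# The abstract's combined CSF decision rule, written out explicitly.
def tbm_risk(ada_iu_l: float, glucose_mg_dl: float, leukocytes_per_mm3: float) -> bool:
    """True if CSF findings meet the combined TBM criteria reported above."""
    return (ada_iu_l > 11.5
            and glucose_mg_dl < 65.0
            and leukocytes_per_mm3 >= 13.5)

print(tbm_risk(14.2, 48.0, 210.0))   # example values -> True
```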
Voltage scheduling for low power/energy
NASA Astrophysics Data System (ADS)
Manzak, Ali
2001-07-01
Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage, since power depends quadratically on voltage. This dissertation considers the problem of lowering the supply voltage at (i) the system level and at (ii) the behavioral level. At the system level, the voltage of the variable voltage processor is dynamically changed with the work load. Processors with limited sized buffers as well as those with very large buffers are considered. Given the task arrival times, deadline times, execution times, periods, and switching activities, task scheduling algorithms that minimize energy or peak power are developed for processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close (0.1% error) to that of the optimal energy assignment and the optimal peak power (1% error) assignment. Next, on-line and off-line minimum energy task scheduling algorithms are developed for processors with limited sized buffers. These algorithms have polynomial time complexity and present optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size given information about the size of the task (maximum, minimum), execution time (best case, worst case), and deadlines is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining the throughput. Such a scheme has the advantage of allowing modules on the critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned to lower voltage levels (thus reducing the power consumption). A polynomial time resource and latency constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimum. The algorithm is iterative and utilizes the slack based on the Lagrange multiplier method.
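The Lagrange-multiplier relation mentioned above can be illustrated under simplified assumptions. Take energy E_i = a_i c_i V_i² (a_i = switching activity, c_i = cycles) and speed f_i = K V_i; stationarity of the Lagrangian for total energy under a shared deadline gives 2 a_i c_i V_i = λ c_i / (K V_i²), hence V_i proportional to a_i^(-1/3). The sketch below uses that shape and then scales the voltages numerically so the deadline is met exactly; all task numbers and the f-V model are invented for the example.

```python
# Illustrative voltage assignment from a Lagrange-multiplier relation.
import numpy as np
from scipy.optimize import brentq

a = np.array([1.0, 0.5, 2.0])        # switching activities
c = np.array([2e6, 1e6, 4e6])        # cycle counts per task
D = 0.02                             # shared deadline (s)
K = 1e9                              # assumed frequency per volt (Hz/V)

shape = a ** (-1.0 / 3.0)            # optimal voltage ratios from stationarity

def slack(v0):
    """Total execution time minus deadline for a given voltage scale v0."""
    return float(np.sum(c / (K * v0 * shape))) - D

v0 = brentq(slack, 1e-3, 100.0)      # scale voltages to meet the deadline exactly
V = v0 * shape
print("voltages (V):", V.round(3),
      " energy (arb. units):", float(np.sum(a * c * V**2)))
```

Note that tasks with higher switching activity end up at lower voltage, which is the qualitative content of the derived relation.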
Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces
2011-02-28
Final Report for AFOSR #FA9550-08-1-0422, Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces, August 1, 2008 to November 30... focused on developing high-level general purpose algorithms, such as Tabu Search and Genetic Algorithms. However, understanding of when and why these... algorithms perform well still lags. Our project extended the theory of certain combinatorial optimization problems to develop analytical
Detection of person borne IEDs using multiple cooperative sensors
NASA Astrophysics Data System (ADS)
MacIntosh, Scott; Deming, Ross; Hansen, Thorkild; Kishan, Neel; Tang, Ling; Shea, Jing; Lang, Stephen
2011-06-01
The use of multiple cooperative sensors for the detection of person-borne IEDs is investigated. The purpose of the effort is to evaluate the performance benefits of adding multiple sensor data streams to an aided threat detection algorithm, with a quantitative analysis of which sensor data combinations improve overall detection performance. Testing includes both mannequins and human subjects with simulated suicide bomb devices of various configurations, materials, sizes, and metal content. Aided threat recognition algorithms are being developed to compare the detection performance of individual sensors against combined, fused sensor inputs. Sensors investigated include active and passive millimeter wave imaging systems, passive infrared, 3-D profiling sensors, and acoustic imaging. The paper describes the experimental set-up and outlines the methodology behind a decision fusion algorithm based on the concept of a "body model".
NASA Astrophysics Data System (ADS)
Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.
2012-07-01
Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left ventricle (LV) myocardial activity into adjacent organs results in partial volume (PV) losses, leading to underestimation of myocardial activity. A PV correction method was developed to restore accuracy of the activity distribution for FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile comprising myocardial, background, and blood activities, which were separated into three compartments by the endocardial radius and the myocardium wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma counter. The PV correction algorithm improved recovery from 50% to within 5% of the truth for the simulated and measured phantom data and improved image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly impact PV-corrected myocardial recovery or image uniformity. The image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied using different tracers, in transgenic models that affect myocardial uptake, or in different species, provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.
Advances in Landslide Nowcasting: Evaluation of a Global and Regional Modeling Approach
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia Bach; Peters-Lidard, Christa; Adler, Robert; Hong, Yang; Kumar, Sujay; Lerner-Lam, Arthur
2011-01-01
The increasing availability of remotely sensed data offers a new opportunity to address landslide hazard assessment at larger spatial scales. A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that may experience landslide activity. This system combines a calculation of static landslide susceptibility with satellite-derived rainfall estimates and uses a threshold approach to generate a set of nowcasts that classify potentially hazardous areas. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale near real-time landslide hazard assessment efforts, it requires several modifications before it can be fully realized as an operational tool. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and hazard at the regional scale. This case study calculates a regional susceptibility map using remotely sensed and in situ information and a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America. The susceptibility map is evaluated with a regional rainfall intensity-duration triggering threshold, and results are compared with the global algorithm framework for the same event. Evaluation of this regional system suggests that this empirically based approach provides one plausible way to approach some of the data and resolution issues identified in the global assessment. The presented methodology is straightforward to implement, improves upon the global approach, and allows for results to be transferable between regions. The results also highlight several remaining challenges, including the empirical nature of the algorithm framework and adequate information for algorithm validation. Conclusions suggest that integrating additional triggering factors such as soil moisture may help to improve algorithm performance accuracy. The regional algorithm scenario represents an important step forward in advancing regional and global-scale landslide hazard assessment.
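The nowcast logic itself is a simple conjunction: a cell is flagged when it is both statically susceptible and currently above a rainfall intensity-duration (I-D) threshold of the standard power-law form I = a * D^b. The skeleton below makes that explicit; the toy grids, the susceptibility cutoff, and the I-D coefficients are placeholders, not the study's calibrated values.

```python
# Skeleton of a susceptibility-plus-rainfall-threshold landslide nowcast.
import numpy as np

rng = np.random.default_rng(7)
susceptibility = rng.random((50, 50))          # 0..1 static susceptibility map
intensity = rng.gamma(2.0, 2.0, (50, 50))      # rainfall intensity (mm/h)
duration_h = 24.0                              # accumulation window

a_coef, b_exp = 12.0, -0.6                     # hypothetical I-D coefficients
threshold = a_coef * duration_h ** b_exp       # mm/h above which rain can trigger

nowcast = (susceptibility > 0.8) & (intensity > threshold)
print(f"I-D threshold = {threshold:.1f} mm/h; "
      f"{nowcast.sum()} cells flagged as potentially hazardous")
```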
USDA-ARS?s Scientific Manuscript database
Infiltration into frozen and unfrozen soils is critical in hydrology, controlling active layer soil water dynamics and influencing runoff. Few Land Surface Models (LSMs) and Hydrological Models (HMs) have been developed, adapted or tested for frozen conditions and permafrost soils. Considering the v...
Evaluation of Sienna Cancer Diagnostics hTERT Antibody on 500 Consecutive Urinary Tract Specimens.
Allison, Derek B; Sharma, Rajni; Cowan, Morgan L; VandenBussche, Christopher J
2018-06-06
Telomerase activity can be detected in up to 90% of urothelial carcinomas (UC). Telomerase activity can also be detected in urinary tract cytology (UTC) specimens and indicates an increased risk of UC. We evaluated the performance of a commercially available antibody that putatively binds the telomerase reverse transcriptase (hTERT) subunit on 500 UTC specimens. Unstained Cytospin™ preparations were created from residual urine specimens and were stained using the anti-hTERT antibody (SCD-A7). Two algorithms were developed for concatenating the hTERT result and the cytologic diagnosis: a "no indeterminates algorithm," in which a negative cytology and positive hTERT result are considered positive, and a "high-specificity algorithm," in which a negative cytology and positive hTERT result are considered indeterminate (and thus negative for comparison to the gold standard). The "no indeterminates algorithm" and "high-specificity algorithm" yielded a sensitivity of 60.6 and 52.1%, a specificity of 70.4 and 90.7%, a positive predictive value of 39.1 and 63.8%, and a negative predictive value of 85.0 and 85.8%, respectively. A positive hTERT result may identify a subset of patients with an increased risk of high-grade UC (HGUC) who may otherwise not be closely followed, while a negative hTERT immunocytochemistry result is associated with a reduction in risk for HGUC. © 2018 The Author(s) Published by S. Karger AG, Basel.
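The effect of the two concatenation rules on test characteristics is easy to reproduce on simulated data: under the "no indeterminates" rule a specimen is positive if either test is positive, while under the "high-specificity" rule the cytology-negative/hTERT-positive specimens are treated as negative, so positives reduce to the cytology-positive ones. All prevalences and test rates below are invented.

```python
# Sensitivity/specificity/PPV/NPV under the two concatenation rules.
import numpy as np

rng = np.random.default_rng(8)
n = 500
truth = rng.random(n) < 0.2                      # gold standard: UC present
cyto = (truth & (rng.random(n) < 0.5)) | (~truth & (rng.random(n) < 0.05))
htert = (truth & (rng.random(n) < 0.7)) | (~truth & (rng.random(n) < 0.2))

def metrics(pred):
    tp = np.sum(pred & truth); fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth); tn = np.sum(~pred & ~truth)
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp), tn / (tn + fn)

no_indet = cyto | htert    # cyto-negative / hTERT-positive counted positive
high_spec = cyto           # cyto-negative / hTERT-positive counted negative
for name, pred in [("no indeterminates", no_indet), ("high specificity", high_spec)]:
    se, sp, ppv, npv = metrics(pred)
    print(f"{name}: Se={se:.2f} Sp={sp:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```

As in the abstract, the second rule trades sensitivity for specificity and PPV.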
Jeonghee Kim; Parnell, Claire; Wichmann, Thomas; DeWeerth, Stephen P
2016-08-01
Assessments of tremor characteristics by movement disorder physicians are usually done at single time points in clinic settings, so the description of the tremor does not take into account its dependence on specific behavioral situations. Moreover, treatment-induced changes in tremor or behavior cannot be quantitatively tracked for extended periods of time. We developed a wearable tremor measurement system with tremor and activity recognition algorithms for long-term upper limb behavior tracking, to characterize tremor and treatment effects in patients' daily lives. In this pilot study, we collected arm movement sensor data from three healthy participants using a wrist device that included a 3-axis accelerometer and a 3-axis gyroscope, and classified tremor and activities within scenario tasks that resembled real-life situations. Our results show that the system was able to classify tremor and activities with 89.71% and 74.48% accuracy, respectively, during the scenario tasks. From these results, we expect to expand our tremor and activity measurement to longer time periods.
The NASA Soil Moisture Active Passive (SMAP) Mission - Science and Data Product Development Status
NASA Technical Reports Server (NTRS)
Njoku, E.; Entekhabi, D.; O'Neill, P.
2012-01-01
The Soil Moisture Active Passive (SMAP) mission, planned for launch in late 2014, has the objective of frequent, global mapping of near-surface soil moisture and its freeze-thaw state. The SMAP measurement system utilizes an L-band radar and radiometer sharing a rotating 6-meter mesh reflector antenna. The instruments will operate on a spacecraft in a 685 km polar orbit with 6am/6pm nodal crossings, viewing the surface at a constant 40-degree incidence angle with a 1000-km swath width, providing 3-day global coverage. Data from the instruments will yield global maps of soil moisture and freeze/thaw state at 10 km and 3 km resolutions, respectively, every two to three days. The 10-km soil moisture product will be generated using a combined radar and radiometer retrieval algorithm. SMAP will also provide a radiometer-only soil moisture product at 40-km spatial resolution and a radar-only soil moisture product at 3-km resolution. The relative accuracies of these products will vary regionally and will depend on surface characteristics such as vegetation water content, vegetation type, surface roughness, and landscape heterogeneity. The SMAP soil moisture and freeze/thaw measurements will enable significantly improved estimates of the fluxes of water, energy, and carbon between the land and atmosphere. Soil moisture and freeze/thaw controls of these fluxes are key factors in the performance of models used for weather and climate predictions and for quantifying the global carbon balance. Soil moisture measurements are also of importance in modeling and predicting extreme events such as floods and droughts. The algorithms and data products for SMAP are being developed in the SMAP Science Data System (SDS) Testbed. In the Testbed, algorithms are developed and evaluated using simulated SMAP observations as well as observational data from current airborne and spaceborne L-band sensors, including data from the SMOS and Aquarius missions. We report here on the development status of the SMAP data products. The Testbed simulations are designed to capture various sources of errors in the products, including environment effects, instrument effects (nonideal aspects of the measurement system), and retrieval algorithm errors. The SMAP project has developed a Calibration and Validation (Cal/Val) Plan that is designed to support algorithm development (pre-launch) and data product validation (post-launch). A key component of the Cal/Val Plan is the identification, characterization, and instrumentation of sites that can be used to calibrate and validate the sensor data (Level 1) and derived geophysical products (Level 2 and higher).
Prediction of heart abnormality using MLP network
NASA Astrophysics Data System (ADS)
Hashim, Fakroul Ridzuan; Januar, Yulni; Mat, Muhammad Hadzren; Rizman, Zairi Ismael; Awang, Mat Kamil
2018-02-01
Heart abnormality can strike regardless of gender, age, and race. With no warning signs or symptoms, it can result in the sudden death of the patient. In general, heart abnormality is defined as irregular electrical activity of the heart. Through the implementation of a Multilayer Perceptron (MLP) network, this paper develops a program that allows the detection of heart abnormality activity. Utilizing several training algorithms with the Purelin activation function, a set of heartbeat signals received through the electrocardiogram (ECG) is employed to train the MLP network.
A Random Forest-based ensemble method for activity recognition.
Feng, Zengtao; Mo, Lingfei; Li, Meng
2015-01-01
This paper presents a multi-sensor ensemble approach to human physical activity (PA) recognition using random forests. We designed an ensemble learning algorithm that integrates several independent Random Forest classifiers based on different sensor feature sets to build a more stable, more accurate, and faster classifier for human activity recognition. To evaluate the algorithm, PA data collected from PAMAP (Physical Activity Monitoring for Aging People), a standard, publicly available database, were utilized for training and testing. The experimental results show that the algorithm is able to correctly recognize 19 PA types with an accuracy of 93.44%, while training is faster than with other methods. The ensemble classifier system based on the RF (Random Forest) algorithm achieves high recognition accuracy and fast calculation.
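The described ensemble reduces to one Random Forest per sensor feature set combined by averaging class probabilities (soft voting). The sketch below shows that structure with scikit-learn; synthetic features stand in for the PAMAP data, and the three "sensors" are arbitrary column slices.

```python
# One Random Forest per sensor feature set, combined by soft voting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, n_informative=12,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Pretend columns 0-9, 10-19, 20-29 come from three different body-worn sensors
sensor_slices = [slice(0, 10), slice(10, 20), slice(20, 30)]
forests = [RandomForestClassifier(n_estimators=100, random_state=i)
           .fit(Xtr[:, s], ytr) for i, s in enumerate(sensor_slices)]

proba = np.mean([f.predict_proba(Xte[:, s])
                 for f, s in zip(forests, sensor_slices)], axis=0)
print(f"ensemble accuracy: {np.mean(proba.argmax(axis=1) == yte):.3f}")
```

Training per-sensor forests in parallel is also what makes this kind of ensemble fast to fit compared with one large model over all features.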
Yang, Yongliang; Modares, Hamidreza; Wunsch, Donald C; Yin, Yixin
2018-06-01
This paper develops optimal control protocols for the distributed output synchronization problem of leader-follower multiagent systems with an active leader. Agents are assumed to be heterogeneous with different dynamics and dimensions. The desired trajectory is assumed to be preplanned and is generated by the leader. Other follower agents autonomously synchronize to the leader by interacting with each other using a communication network. The leader is assumed to be active in the sense that it has a nonzero control input so that it can act independently and update its control to keep the followers away from possible danger. A distributed observer is first designed to estimate the leader's state and generate the reference signal for each follower. Then, the output synchronization of leader-follower systems with an active leader is formulated as a distributed optimal tracking problem, and inhomogeneous algebraic Riccati equations (AREs) are derived to solve it. The resulting distributed optimal control protocols not only minimize the steady-state error but also optimize the transient response of the agents. An off-policy reinforcement learning algorithm is developed to solve the inhomogeneous AREs online in real time and without requiring any knowledge of the agents' dynamics. Finally, two simulation examples are conducted to illustrate the effectiveness of the proposed algorithm.
Evaluation of the TOPSAR performance by using passive and active calibrators
NASA Technical Reports Server (NTRS)
Alberti, G.; Moccia, A.; Ponte, S.; Vetrella, S.
1992-01-01
The preliminary analysis of the C-band cross-track interferometric (XTI) data acquired during the MAC Europe 1991 campaign over the Matera test site in Southern Italy is presented. Twenty-three passive calibrators (corner reflectors, CR) and 3 active calibrators (Active Radar Calibrators, ARC) were deployed over an area characterized by homogeneous background. Concurrently with the flight, a ground truth data collection campaign was carried out. The research activity focused on the development of motion compensation algorithms, in order to improve the height measurement accuracy of the TOPSAR system.
Tansig activation function (of MLP network) for cardiac abnormality detection
NASA Astrophysics Data System (ADS)
Adnan, Ja'afar; Daud, Nik Ghazali Nik; Ishak, Mohd Taufiq; Rizman, Zairi Ismael; Rahman, Muhammad Izzuddin Abd
2018-02-01
Heart abnormality often occurs regardless of gender, age, and race. It sometimes shows no symptoms and can cause the sudden death of the patient. In general, heart abnormality is irregular electrical activity of the heart. This paper attempts to develop a program that can detect heart abnormality activity through the implementation of a Multilayer Perceptron (MLP) network. A set of heartbeat signals from the electrocardiogram (ECG) is used in this project to train the MLP network with several training algorithms and the Tansig activation function.
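This and the preceding MLP abstract share the same basic recipe: a small multilayer perceptron trained on fixed-length heartbeat feature vectors, differing only in the activation function ("Purelin" is MATLAB's linear transfer function, "Tansig" its hyperbolic tangent). The sketch below uses scikit-learn's tanh option as the Tansig analogue; the random vectors and the synthetic labeling rule stand in for real ECG beats.

```python
# Minimal MLP with a tanh ("Tansig"-like) hidden activation on placeholder beats.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n, n_feat = 600, 32
X = rng.normal(size=(n, n_feat))               # placeholder beat features
y = (X[:, :4].sum(axis=1) > 0).astype(int)     # 1 = "abnormal" (synthetic rule)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(16,), activation="tanh",
                    max_iter=2000, random_state=0).fit(Xtr, ytr)
print(f"held-out accuracy: {mlp.score(Xte, yte):.2f}")
```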
A novel algorithm for detecting active propulsion in wheelchair users following spinal cord injury.
Popp, Werner L; Brogioli, Michael; Leuenberger, Kaspar; Albisser, Urs; Frotzler, Angela; Curt, Armin; Gassert, Roger; Starkey, Michelle L
2016-03-01
Physical activity in wheelchair-bound individuals can be assessed by monitoring their mobility, as this is one of the most intense upper extremity activities they perform. Current accelerometer-based approaches for describing wheelchair mobility do not distinguish between self- and attendant-propulsion and hence may overestimate total physical activity. The aim of this study was to develop and validate an inertial measurement unit (IMU)-based algorithm to monitor wheel kinematics and the type of wheelchair propulsion (self- or attendant-) in a "real-world" situation. Different sensor set-ups were investigated, ranging from a high-precision set-up of four sensor modules with a relatively short measurement duration of 24 h, to a less precise set-up with a single module attached to the wheel that exceeds one week of measurement because the gyroscope of the sensor is turned off. The "high-precision" algorithm distinguished self- and attendant-propulsion with an accuracy greater than 93%, whereas the long-term measurement set-up showed an accuracy of 82%. The estimation accuracy of kinematic parameters was greater than 97% for both set-ups. The possibility of having different sensor set-ups allows the use of the inertial measurement units as high-precision tools for researchers as well as unobtrusive and simple tools for manual wheelchair users. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk
2018-04-20
The threat posed by detection via infrared (IR) signals is greater than that from other signals such as radar or sonar, because an object detected by an IR sensor cannot easily recognize its detection status. Recently, research on actively reducing the IR signal by adjusting the surface temperature of the object has been conducted. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from the object and the background around the object. The proposed method uses the repulsive particle swarm optimization (RPSO) statistical optimization algorithm to estimate the IR stealth surface temperature that synchronizes the IR signals from the object and the surrounding background, by setting the inverse distance weighted contrast radiant intensity (CRI) equal to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the inverse distance weighted active IR stealth technique proposed in this study is an effective method for reducing the contrast radiant intensity between the object and the background, by up to 32% compared with the previous method, which used a CRI determined as the simple signal difference between the object and the background.
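The core idea, driving an inverse-distance-weighted contrast radiant intensity to zero, can be sketched as follows; the grey-body radiance model, weights, and brute-force search here are stand-ins for the paper's band-resolved MWIR/LWIR signals and RPSO optimizer:

```python
# Hedged sketch of the inverse-distance-weighted CRI idea: background
# pixels nearer the object weigh more, and the controller seeks a surface
# temperature driving the weighted contrast to zero. Radiance is
# approximated by a grey-body Stefan-Boltzmann term (an assumption).
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def radiance(T, emissivity=0.9):
    return emissivity * SIGMA * T**4

def weighted_cri(T_obj, bg_temps, bg_dists, p=2.0):
    w = 1.0 / np.asarray(bg_dists)**p           # inverse-distance weights
    w /= w.sum()
    bg = np.sum(w * radiance(np.asarray(bg_temps)))
    return radiance(T_obj) - bg                 # zero => IR-matched

# Brute-force 1-D search standing in for the RPSO optimizer:
bg_temps, bg_dists = [288.0, 295.0, 301.0], [5.0, 12.0, 30.0]
cands = np.linspace(270.0, 320.0, 5001)
cri = np.abs([weighted_cri(T, bg_temps, bg_dists) for T in cands])
print(f"stealth surface temperature ~ {cands[np.argmin(cri)]:.2f} K")
```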
Garnotel, M; Bastian, T; Romero-Ugalde, H M; Maire, A; Dugas, J; Zahariev, A; Doron, M; Jallon, P; Charpentier, G; Franc, S; Blanc, S; Bonnet, S; Simon, C
2018-03-01
Accelerometry is increasingly used to quantify physical activity (PA) and the related energy expenditure (EE). Linear regression models designed to derive PAEE from accelerometry counts have shown their limits, mostly due to the lack of consideration of the nature of the activities performed. Here we tested whether a model coupling an automatic activity/posture recognition (AAR) algorithm with an activity-specific count-based model, developed in 61 subjects under laboratory conditions, improved PAEE and total EE (TEE) predictions from a hip-worn triaxial accelerometer (ActiGraph GT3X+) in free-living conditions. Data from two independent subject groups of varying body mass index and age were considered: 20 subjects engaged in a 3-h urban circuit, with activity-by-activity reference PAEE from combined heart-rate and accelerometry monitoring (Actiheart); and 56 subjects involved in a 14-day trial, with PAEE and TEE measured using the doubly labeled water method. PAEE was estimated from accelerometry using the activity-specific model coupled to the AAR algorithm (AAR model), a simple linear model (SLM), and equations provided by the companion software of the activity devices used (Freedson and Actiheart models). AAR-model predictions were in closer agreement with the selected references than those from the other count-based models, both for PAEE during the urban circuit (RMSE = 6.19 vs 7.90 for SLM and 9.62 kJ/min for Freedson) and for EE over the 14-day trial, reaching Actiheart performance in the latter (PAEE: RMSE = 0.93 vs. 1.53 for SLM, 1.43 for Freedson, 0.91 MJ/day for Actiheart; TEE: RMSE = 1.05 vs. 1.57 for SLM, 1.70 for Freedson, 0.95 MJ/day for Actiheart). Overall, the AAR model resulted in a 43% increase in the daily PAEE variance explained by the accelerometry predictions. NEW & NOTEWORTHY Although triaxial accelerometry is widely used in free-living conditions to assess the impact of physical activity energy expenditure (PAEE) on health, its precision and accuracy are often debated. Here we developed and validated an activity-specific model which, coupled with an automatic activity-recognition algorithm, improved the daily PAEE variance explained by predictions from accelerometry counts by 43% compared with models relying on a simple relationship between accelerometry counts and EE.
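The activity-specific count-based idea reduces to choosing a regression per predicted activity class. A minimal sketch with placeholder coefficients (not the published model):

```python
# Minimal sketch of the activity-specific count-based idea: first an
# activity/posture class is predicted (the AAR step), then a
# class-specific linear model maps counts to PAEE. The slopes and
# intercepts below are placeholders, not the study's fitted values.
COEFFS = {                      # kJ/min = a * counts_per_min + b (assumed)
    "sedentary":  (0.0000, 1.5),
    "walking":    (0.0006, 3.0),
    "household":  (0.0004, 2.5),
}

def predict_paee(counts_per_min, activity_class):
    a, b = COEFFS[activity_class]
    return a * counts_per_min + b

# e.g. the recognition step labels a minute "walking" with 4500 counts:
print(predict_paee(4500, "walking"), "kJ/min")
```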
Intelligent error correction method applied on an active pixel sensor based star tracker
NASA Astrophysics Data System (ADS)
Schmidt, Uwe
2005-10-01
Star trackers are opto-electronic sensors used on board satellites for autonomous inertial attitude determination. In recent years star trackers have become more and more important among attitude and orbit control system (AOCS) sensors. High-performance star trackers have to date been based on charge coupled device (CCD) optical camera heads. The active pixel sensor (APS) technology, introduced in the early 1990s, now allows the beneficial replacement of CCD detectors by APS detectors with respect to performance, reliability, power, mass and cost. The company's heritage in star tracker design started in the early 1980s with the launch of the world's first fully autonomous star tracker system, ASTRO1, to the Russian MIR space station. Jena-Optronik recently developed an active pixel sensor based autonomous star tracker, "ASTRO APS", as successor of the CCD-based star tracker product series ASTRO1, ASTRO5, ASTRO10 and ASTRO15. Key features of the APS detector technology are true xy-address random access, multiple-window read-out and on-chip signal processing including analogue-to-digital conversion. These features can be used for robust star tracking at high slew rates and under adverse conditions such as stray light and solar-flare-induced single event upsets. A special algorithm has been developed to manage the typical APS detector error contributors such as fixed pattern noise (FPN), dark signal non-uniformity (DSNU) and white spots. The algorithm works fully autonomously and adapts automatically to, e.g., increasing DSNU and newly appearing white spots, without ground maintenance or re-calibration. In contrast to conventional correction methods, the described algorithm does not require memory for calibration data such as full-image-sized calibration data sets. The application of the presented algorithm managing the typical APS detector error contributors is a key element in the design of star trackers for long-term satellite applications such as geostationary telecom platforms.
Developing a Treatment Planning Software Based on TG-43U1 Formalism for Cs-137 LDR Brachytherapy.
Sina, Sedigheh; Faghihi, Reza; Soleimani Meigooni, Ali; Siavashpour, Zahra; Mosleh-Shirazi, Mohammad Amin
2013-08-01
The old Treatment Planning Systems (TPSs) used for intracavitary brachytherapy with the Cs-137 Selectron source utilize traditional dose calculation methods, considering each source as a point source. Using such methods introduces significant errors in dose estimation. Since 1995, TG-43 has been used as the main dose calculation formalism in TPSs. The purpose of this study is to design and establish treatment planning software for the Cs-137 Selectron brachytherapy source, based on the TG-43U1 formalism and applying the effects of the applicator and dummy spacers. The two software packages used for treatment planning of Cs-137 sources in Iran (STPS and PLATO) are based on the old formalisms, so the aim of this work is to establish and develop a TPS for the Selectron source based on TG-43. In this planning system, the dosimetry parameters of each pellet at different locations inside the applicators were obtained with the MCNP4c code. The dose distribution around every combination of active and inactive pellets was then obtained by summing the single-pellet doses. The accuracy of this algorithm was checked by comparing its results for special combinations of active and inactive pellets with MC simulations. Finally, the uncertainty of the old dose calculation formalism was investigated by comparing the results of the STPS and PLATO software with those obtained by the new algorithm. For a typical arrangement of 10 active pellets in the applicator, the percentage difference between doses obtained by the new algorithm at 1 cm distance from the tip of the applicator and those obtained by the old formalisms is about 30%, while the difference between the results of MCNP and the new algorithm is less than 5%. According to the results, the old dosimetry formalisms overestimate the dose, especially towards the applicator's tip, whereas the TG-43U1-based software performs the calculations more accurately.
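The superposition step described above can be sketched as summing a single-pellet dose kernel over the active pellets; the simplified point-source TG-43 form and all parameter values below are placeholders, whereas the actual work tabulates per-pellet parameters with MCNP4c:

```python
# Hedged sketch of dose superposition: the dose at a point is the sum of
# single-pellet kernels over the active pellets. A simplified TG-43
# point-source form is used with placeholder values for the air-kerma
# strength Sk, dose-rate constant Lambda, and radial dose function g(r).
import numpy as np

Sk, LAMBDA = 1.0, 1.1  # placeholder source strength and dose-rate constant

def g_radial(r_cm):
    # placeholder radial dose function; real g(r) comes from MC tables
    return np.exp(-0.03 * (r_cm - 1.0))

def pellet_dose(point, pellet_pos):
    r = np.linalg.norm(np.asarray(point) - np.asarray(pellet_pos))
    return Sk * LAMBDA * (1.0 / r**2) * g_radial(r)   # point-source kernel

def total_dose(point, active_pellets):
    return sum(pellet_dose(point, p) for p in active_pellets)

# 10 active pellets spaced 2.5 mm apart along the applicator axis:
pellets = [(0.0, 0.0, 0.25 * i) for i in range(10)]
print(total_dose((1.0, 0.0, 0.0), pellets))
```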
Improving Hip-Worn Accelerometer Estimates of Sitting Using Machine Learning Methods.
Kerr, Jacqueline; Carlson, Jordan; Godbole, Suneeta; Cadmus-Bertram, Lisa; Bellettiere, John; Hartman, Sheri
2018-02-13
To improve estimates of sitting time from hip-worn accelerometers used in large cohort studies by employing machine learning methods developed on free-living activPAL data. Thirty breast cancer survivors concurrently wore a hip-worn accelerometer and a thigh-worn activPAL for 7 days. A random forest classifier, trained on the activPAL data, was employed to detect sitting, standing and sit-stand transitions in 5 s windows of the hip-worn accelerometer data. The classifier estimates were compared to the standard accelerometer cut point, and significant differences across different bout lengths were investigated using mixed-effect models. Overall, the algorithm predicted the postures with moderate accuracy (stepping 77%, standing 63%, sitting 67%, sit-to-stand 52% and stand-to-sit 51%). Daily-level analyses indicated that errors in transition estimates occurred only during sitting bouts of 2 minutes or less. The standard cut point differed significantly from the activPAL across all bout lengths, overestimating short bouts and underestimating long bouts. This is among the first algorithms for sitting and standing for hip-worn accelerometer data to be trained entirely on free-living activPAL data. The new algorithm detected prolonged sitting, which has been shown to be most detrimental to health. Further validation and training in larger cohorts is warranted.
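A sketch of the modeling setup, a random forest over per-window features with posture labels, is below; the features and labels are synthetic stand-ins for the hip accelerometer windows and activPAL annotations:

```python
# Illustrative sketch (not the study's trained model): a random forest
# classifying 5 s accelerometer windows into postures, trained against
# activPAL-derived labels. Features and labels here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))      # e.g. per-window mean/variance per axis
y = rng.integers(0, 3, size=1000)   # 0=sit, 1=stand, 2=step (toy labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:800], y[:800])           # train on the first 800 windows
print("holdout accuracy:", clf.score(X[800:], y[800:]))
```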
Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the Sequential Unconstrained Minimization Technique, SUMT) outperformed the others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating it can improve the efficiency of the optimizers.
Transport delay compensation for computer-generated imagery systems
NASA Technical Reports Server (NTRS)
Mcfarland, Richard E.
1988-01-01
In the problem of pure transport delay in a low-pass system, a trade-off exists with respect to performance within and beyond a frequency bandwidth. When activity beyond the band is attenuated because of other considerations, this trade-off may be used to improve the performance within the band. Specifically, transport delay in computer-generated imagery systems is reduced to a manageable problem by recognizing frequency limits in vehicle activity and manual-control capacity. Based on these limits, a compensation algorithm has been developed for use in aircraft simulation at NASA Ames Research Center. For direct measurement of transport delays, a beam-splitter experiment is presented that accounts for the complete flight simulation environment. Values determined by this experiment are appropriate for use in the compensation algorithm. The algorithm extends the bandwidth of high-frequency flight simulation to well beyond that of normal pilot inputs. Within this bandwidth, the visual scene presentation manifests negligible gain distortion and phase lag. After a year of utilization, two minor exceptions to universal simulation applicability have been identified and subsequently resolved.
A numerical algorithm with preference statements to evaluate the performance of scientists.
Ricker, Martin
Academic evaluation committees have become increasingly receptive to using the number of published indexed articles, as well as citations, to evaluate the performance of scientists. It is, however, impossible to develop a stand-alone, objective numerical algorithm for the evaluation of academic activities, because any evaluation necessarily includes subjective preference statements. In a market, prices represent preference statements, but scientists work largely in a non-market context. I propose a numerical algorithm that serves to determine the distribution of reward money in Mexico's evaluation system, using relative prices of scientific goods and services as input. The relative prices would be determined by an evaluation committee. In this way, large evaluation systems (like Mexico's Sistema Nacional de Investigadores) could work semi-automatically, but not arbitrarily or superficially, to determine quantitatively the academic performance of scientists every few years. Data for 73 scientists from the Biology Institute of Mexico's National University are analyzed, and it is shown that the reward assignment and academic priorities depend heavily on those preferences. A maximum number of products or activities to be evaluated is recommended, to encourage quality over quantity.
Modeling and analysis of selected space station communications and tracking subsystems
NASA Technical Reports Server (NTRS)
Richmond, Elmer Raydean
1993-01-01
The Communications and Tracking System on board Space Station Freedom (SSF) provides space-to-ground, space-to-space, audio, and video communications, as well as tracking data reception and processing services. Each major category of service is provided by a communications subsystem which is controlled and monitored by software. Among these subsystems, the Assembly/Contingency Subsystem (ACS) and the Space-to-Ground Subsystem (SGS) provide communications with the ground via the Tracking and Data Relay Satellite (TDRS) System. The ACS is effectively SSF's command link, while the SGS is primarily intended as the data link for SSF payloads. The research activities of this project focused on the ACS and SGS antenna management algorithms identified in the Flight System Software Requirements (FSSR) documentation, including: (1) software modeling and evaluation of antenna management (positioning) algorithms; and (2) analysis and investigation of selected variables and parameters of these antenna management algorithms, i.e., descriptions and definitions of ranges, scopes, and dimensions. In a related activity, to assist those responsible for monitoring the development of this flight system software, a brief summary of software metrics concepts, terms, measures, and uses was prepared.
Performance Trend of Different Algorithms for Structural Design Optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the Sequential Unconstrained Minimization Technique, SUMT) outperformed the others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating it can improve the efficiency of optimizers.
Advances in systems biology: computational algorithms and applications.
Huang, Yufei; Zhao, Zhongming; Xu, Hua; Shyr, Yu; Zhang, Bing
2012-01-01
The 2012 International Conference on Intelligent Biology and Medicine (ICIBM 2012) was held on April 22-24, 2012 in Nashville, Tennessee, USA. The conference featured six technical sessions, one tutorial session, one workshop, and three keynote presentations that covered state-of-the-art research activities in genomics, systems biology, and intelligent computing. In addition to a major emphasis on next-generation sequencing (NGS)-driven informatics, ICIBM 2012 aligned significant interests in systems biology and its applications in medicine. In this editorial we highlight selected papers from the meeting that address the development of novel algorithms and applications in systems biology.
Automatic and user-centric approaches to video summary evaluation
NASA Astrophysics Data System (ADS)
Taskiran, Cuneyt M.; Bentley, Frank
2007-01-01
Automatic video summarization has become an active research topic in content-based video processing. However, not much emphasis has been placed on developing rigorous summary evaluation methods or on developing summarization systems based on a clear understanding of user needs, obtained through user-centered design. In this paper we address these two topics and propose an automatic video summary evaluation algorithm adapted from the text summarization domain.
Recent Developments for Satellite-Based Fire Monitoring in Canada
NASA Astrophysics Data System (ADS)
Abuelgasim, A.; Fraser, R.
2002-05-01
Wildfires in Canadian forests are a major source of natural disturbance. These fires have a tremendous impact on the local environment, humans and wildlife, ecosystem function, weather, and climate. Approximately 9000 fires burn 3 million hectares per year in Canada (based on a 10-year average). While only 2 to 3 percent of these wildfires grow larger than 200 hectares in size, they account for almost 97 percent of the annual area burned. This provides an excellent opportunity to monitor active fires using a combination of low- and high-resolution sensors for the purpose of determining fire locations and burned areas. Given the size of Canada, the use of remote sensing data is a cost-effective way to achieve a synoptic overview of large forest fire activity in near-real time. In 1998 the Canada Centre for Remote Sensing (CCRS) and the Canadian Forest Service (CFS) developed a system for Fire Monitoring, Mapping and Modelling (Fire M3; http://fms.nofc.cfs.nrcan.gc.ca/FireM3/). Fire M3 automatically identifies, monitors, and maps large forest fires on a daily basis using NOAA AVHRR data. These data are processed daily using the GEOCOMP-N satellite image processing system. This presentation describes recent developments to Fire M3, including the addition of a set of algorithms tailored for NOAA-16 (N-16) data. Two fire detection algorithms were developed for N-16 day and night-time daily data collection. The algorithms exploit both the multi-spectral and thermal information from the AVHRR daily images. The set of N-16 day and night algorithms was used to generate daily active fire maps across North America for the 2001 fire season. This combined approach to fire detection leads to an improved detection rate, although day-time detection based on the new 1.6 μm channel was much less effective. Selected validation sites in western Canada and the United States showed reasonable correspondence with the locations of fires mapped by CFS and those mapped by the USDA Forest Service using conventional means.
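A generic contextual fire test of the AVHRR heritage the abstract alludes to can be sketched as follows; the thresholds, window size, and band logic are illustrative, not the Fire M3 or N-16 algorithm specifics:

```python
# Hedged sketch of an AVHRR-style contextual fire test: a pixel is
# flagged when its ~3.9 um brightness temperature exceeds an absolute
# threshold and also stands out from its local background statistics.
import numpy as np

def detect_fires(t39, t11, win=7, k=3.0, t39_min=315.0, dt_min=10.0):
    """t39, t11: brightness-temperature arrays [K]; returns boolean mask."""
    fires = np.zeros_like(t39, dtype=bool)
    h = win // 2
    for i in range(h, t39.shape[0] - h):
        for j in range(h, t39.shape[1] - h):
            if t39[i, j] < t39_min:                       # absolute test
                continue
            bg = t39[i - h:i + h + 1, j - h:j + h + 1]    # local background
            contextual = t39[i, j] > bg.mean() + k * bg.std()
            fires[i, j] = contextual and (t39[i, j] - t11[i, j]) > dt_min
    return fires

# Toy scene: one hot pixel in a ~300 K background.
rng = np.random.default_rng(0)
t39 = 300.0 + 2.0 * rng.normal(size=(50, 50)); t39[25, 25] = 340.0
t11 = np.full((50, 50), 295.0)
print(np.argwhere(detect_fires(t39, t11)))  # -> [[25 25]]
```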
Iterative algorithms for large sparse linear systems on parallel computers
NASA Technical Reports Server (NTRS)
Adams, L. M.
1982-01-01
Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.
Development of a method for personal, spatiotemporal exposure assessment.
Adams, Colby; Riggs, Philip; Volckens, John
2009-07-01
This work describes the development and evaluation of a high-resolution, space- and time-referenced sampling method for personal exposure assessment to airborne particulate matter (PM). This method integrates continuous measures of personal PM levels with the corresponding location-activity (i.e. work/school, home, transit) of the subject. Monitoring equipment includes a small, portable global positioning system (GPS) receiver, a miniature aerosol nephelometer, and an ambient temperature monitor to estimate the location, time, and magnitude of personal exposure to particulate-matter air pollution. Precision and accuracy of each component, as well as the integrated method performance, were tested in a combination of laboratory and field tests. Spatial data were apportioned into pre-determined location-activity categories (i.e. work/school, home, transit) with a simple, temporospatially based algorithm. The apportioning algorithm was extremely effective, with an overall accuracy of 99.6%. This method allows examination of an individual's estimated exposure through space and time, which may provide new insights into exposure-activity relationships not possible with traditional exposure assessment techniques (i.e., time-integrated, filter-based measurements). Furthermore, the method is applicable to any contaminant or stressor that can be measured on an individual with a direct-reading sensor.
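An apportioning rule of this temporospatial kind can be sketched as a distance test against known anchor locations; the anchor coordinates, radius, and decision rule below are assumptions, not the published algorithm:

```python
# Minimal sketch of a temporospatially based apportioning rule: a GPS fix
# within a radius of a known anchor is labeled home or work/school;
# anything else is labeled transit. Coordinates and radius are assumed.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat/2)**2 + cos(radians(lat1))*cos(radians(lat2))*sin(dlon/2)**2
    return 2 * 6371000 * asin(sqrt(a))

ANCHORS = {"home": (40.0150, -105.2705), "work/school": (40.0076, -105.2659)}

def classify_fix(lat, lon, radius_m=100.0):
    for label, (alat, alon) in ANCHORS.items():
        if haversine_m(lat, lon, alat, alon) <= radius_m:
            return label
    return "transit"

print(classify_fix(40.0151, -105.2706))  # -> "home"
```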
An algorithm to detect fire activity using Meteosat: fine tuning and quality assessment
NASA Astrophysics Data System (ADS)
Amraoui, M.; DaCamara, C. C.; Ermida, S. L.
2012-04-01
Hot spot detection by means of sensors on board geostationary satellites allows studying wildfire activity at hourly and even sub-hourly intervals, an advantage that cannot be met by polar orbiters. Since 1997, the Satellite Application Facility for Land Surface Analysis has been running an operational procedure that detects active fires based on information from Meteosat-8/SEVIRI. This is the so-called Fire Detection and Monitoring (FD&M) product; the procedure takes advantage of the temporal resolution of SEVIRI (one image every 15 min) and relies on information from SEVIRI channels (namely 0.6, 0.8, 3.9, 10.8 and 12.0 μm) together with information on illumination angles. The method is based on heritage from contextual algorithms designed for polar, sun-synchronous instruments, namely NOAA/AVHRR and MODIS on TERRA/AQUA. A potential fire pixel is compared with the neighboring ones and the decision is made based on relative thresholds derived from the pixels in the neighborhood. Generally speaking, the observed fire incidence compares well against hot spots extracted from the global daily active fire product developed by the MODIS Fire Team. However, values of the probability of detection (POD) tend to be quite low, a result that may be partially expected given the finer resolution of MODIS. The aim of the present study is to make a systematic assessment of the impacts on POD and false alarm ratio (FAR) of the several parameters that are set in the algorithm. Such parameters range from the threshold values of brightness temperature in the IR3.9 and IR10.8 channels used to select potential fire pixels, to the extent of the background grid and the thresholds used to statistically characterize the radiometric departures of a potential fire pixel from its background. The impact of different criteria to identify pixels contaminated by clouds, smoke and sun glint is also evaluated. Finally, the advantages that may be brought to the algorithm by adding contextual tests in the time domain are discussed. The study lays the groundwork for the development of improved quality flags that will be integrated into the FD&M product in the near future.
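The two scores being tuned are simple contingency-table ratios; a sketch:

```python
# Sketch of the verification scores discussed above: probability of
# detection (POD) and false alarm ratio (FAR), computed from a
# contingency table of algorithm detections against reference hot spots.
def pod_far(hits, misses, false_alarms):
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far

# toy counts: 40 matched detections, 60 missed fires, 10 false alarms
print(pod_far(40, 60, 10))  # -> (0.4, 0.2)
```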
Active learning for clinical text classification: is it better than random sampling?
Figueroa, Rosa L; Zeng-Treitler, Qing; Ngo, Long H; Goryachev, Sergey; Wiechmann, Eduardo P
2012-01-01
This study explores active learning algorithms as a way to reduce the requirements for large training sets in medical text classification tasks. Three existing active learning algorithms (distance-based (DIST), diversity-based (DIV), and a combination of both (CMB)) were used to classify text from five datasets. The performance of these algorithms was compared to that of passive learning on the five datasets. We then conducted a novel investigation of the interaction between dataset characteristics and the performance results. Classification accuracy and area under receiver operating characteristics (ROC) curves for each algorithm at different sample sizes were generated. The performance of active learning algorithms was compared with that of passive learning using a weighted mean of paired differences. To determine why the performance varies on different datasets, we measured the diversity and uncertainty of each dataset using relative entropy and correlated the results with the performance differences. The DIST and CMB algorithms performed better than passive learning. With a statistical significance level set at 0.05, DIST outperformed passive learning in all five datasets, while CMB was found to be better than passive learning in four datasets. We found strong correlations between the dataset diversity and the DIV performance, as well as the dataset uncertainty and the performance of the DIST algorithm. For medical text classification, appropriate active learning algorithms can yield performance comparable to that of passive learning with considerably smaller training sets. In particular, our results suggest that DIV performs better on data with higher diversity and DIST on data with lower uncertainty.
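A distance-based (DIST-style) selection loop can be sketched as repeatedly querying the pool documents closest to the decision boundary; this illustrates the selection rule only, on synthetic features, not the paper's DIST/DIV/CMB implementations:

```python
# Hedged sketch of distance-based active learning: at each round, label
# the pool items whose classifier scores lie closest to the boundary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 20))                    # stand-in text features
y_pool = (X_pool[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

# Seed set drawn from both extremes of one feature so both classes appear.
labeled = list(np.argsort(X_pool[:, 0])[:5]) + list(np.argsort(X_pool[:, 0])[-5:])
for _ in range(5):                                     # five query rounds
    clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
    margin = np.abs(clf.decision_function(X_pool))     # distance to boundary
    margin[labeled] = np.inf                           # never re-query labeled items
    labeled.extend(np.argsort(margin)[:10])            # label 10 most uncertain
print("labels used:", len(labeled), "pool accuracy:", clf.score(X_pool, y_pool))
```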
Advanced Oil Spill Detection Algorithms For Satellite Based Maritime Environment Monitoring
NASA Astrophysics Data System (ADS)
Radius, Andrea; Azevedo, Rui; Sapage, Tania; Carmo, Paulo
2013-12-01
During the last years, the increasing occurrence of pollution and the alarming deterioration of the environmental health of the sea have led to the need for global monitoring capabilities, namely for marine environment management in terms of oil spill detection and indication of the suspected polluter. The sensitivity of Synthetic Aperture Radar (SAR) to different phenomena on the sea, especially oil spills and vessels, makes it a key instrument for global pollution monitoring. The SAR performance in maritime pollution monitoring is being operationally explored by a set of service providers on behalf of the European Maritime Safety Agency (EMSA), which launched in 2007 the CleanSeaNet (CSN) project - a pan-European satellite-based oil monitoring service. EDISOFT, a service provider for CSN from the beginning, is continuously investing in R&D activities that will ultimately lead to better algorithms and better performance in oil spill detection from SAR imagery. This strategy is being pursued through EDISOFT's participation in the FP7 EC Sea-U project and in the Automatic Oil Spill Detection (AOSD) ESA project. The Sea-U project aims to improve the current state of oil spill detection algorithms through the maximization of informative content obtained with data fusion, the exploitation of different types of data/sensors and the development of advanced image processing, segmentation and classification techniques. The AOSD project is closely related to the operational segment, because it is focused on the automation of the oil spill detection processing chain, integrating auxiliary data, such as wind information, together with image and geometry analysis techniques. The synergy between these different objectives (R&D versus operational) has allowed EDISOFT to develop oil spill detection software that combines the operational automatic aspect, obtained through dedicated integration of the processing chain into the existing open-source NEST software, with new detection, filtering and classification algorithms. In particular, dedicated wavelet-based filtering algorithms were developed to improve oil spill detection and classification. In this work we present the functionalities of the developed software and the main results supporting the validity of the developed algorithms.
LSA SAF Meteosat FRP products - Part 1: Algorithms, product contents, and analysis
NASA Astrophysics Data System (ADS)
Wooster, M. J.; Roberts, G.; Freeborn, P. H.; Xu, W.; Govaerts, Y.; Beeby, R.; He, J.; Lattanzio, A.; Fisher, D.; Mullen, R.
2015-11-01
Characterizing changes in landscape fire activity at better than hourly temporal resolution is achievable using thermal observations of actively burning fires made from geostationary Earth Observation (EO) satellites. Over the last decade or more, a series of research and/or operational "active fire" products have been developed from geostationary EO data, often with the aim of supporting biomass burning fuel consumption and trace gas and aerosol emission calculations. Such Fire Radiative Power (FRP) products are generated operationally from Meteosat by the Land Surface Analysis Satellite Applications Facility (LSA SAF) and are available freely every 15 min in both near-real-time and archived form. These products map the location of actively burning fires and characterize their rates of thermal radiative energy release (FRP), which is believed to be proportional to rates of biomass consumption and smoke emission. The FRP-PIXEL product contains the full spatio-temporal resolution FRP data set derivable from the SEVIRI (Spinning Enhanced Visible and Infrared Imager) imager onboard Meteosat at a 3 km spatial sampling distance (decreasing away from the west African sub-satellite point), whilst the FRP-GRID product is an hourly summary at 5° grid resolution that includes simple bias adjustments for meteorological cloud cover and regional underestimation of FRP caused primarily by underdetection of low-FRP fires. Here we describe the enhanced geostationary Fire Thermal Anomaly (FTA) detection algorithm used to deliver these products and detail the methods used to generate the atmospherically corrected FRP and per-pixel uncertainty metrics. Using SEVIRI scene simulations and real SEVIRI data, including from a period of Meteosat-8 "special operations", we describe certain sensor and data pre-processing characteristics that influence SEVIRI's active fire detection and FRP measurement capability, and use these to specify parameters in the FTA algorithm and to make recommendations for the forthcoming Meteosat Third Generation operations in relation to active fire measures. We show that the current SEVIRI FTA algorithm is able to discriminate actively burning fires covering down to 10^-4 of a pixel and that it appears more sensitive to fire than other algorithms used to generate many widely exploited active fire products. Finally, we briefly illustrate the information contained within the current Meteosat FRP-PIXEL and FRP-GRID products, providing example analyses for both individual fires and multi-year regional-scale fire activity; the companion paper (Roberts et al., 2015) provides a full product performance evaluation and a demonstration of product use within components of the Copernicus Atmosphere Monitoring Service (CAMS).
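For orientation, FRP retrieval from geostationary MIR radiances is commonly done with the MIR radiance method of the Wooster et al. lineage; the sketch below assumes a nominal pixel area and an empirical band constant, which are not the LSA SAF's operational values:

```python
# Hedged sketch of the MIR radiance method commonly used for FRP
# retrieval: FRP ~ (A_pix * sigma / a) * (L_mir_fire - L_mir_bg), where
# a is a sensor/band-specific empirical constant. The constant and pixel
# area below are assumptions for a ~3.9 um band, for illustration only.
A_PIX = 9.0e6          # pixel area [m^2] (3 km x 3 km nominal sampling)
SIGMA = 5.670374419e-8 # Stefan-Boltzmann constant [W m^-2 K^-4]
A_CONST = 3.0e-9       # assumed empirical MIR constant [W m^-2 sr^-1 um^-1 K^-4]

def frp_mw(l_mir_fire, l_mir_bg):
    """FRP in MW from fire-pixel and background MIR spectral radiances
    [W m^-2 sr^-1 um^-1]."""
    return A_PIX * SIGMA / A_CONST * (l_mir_fire - l_mir_bg) / 1.0e6

print(frp_mw(1.2, 0.6))  # toy radiances -> FRP in MW
```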
2014-09-01
The aim of this research is to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging system. Work to date has (i) developed time-of-flight extraction algorithms to perform ultrasound computed tomography (USCT), (ii) begun developing image reconstruction algorithms for USCT, and (iii) developed…
Multilateral haptics-based immersive teleoperation for improvised explosive device disposal
NASA Astrophysics Data System (ADS)
Erickson, David; Lacheray, Hervé; Daly, John
2013-05-01
Of great interest to police and military organizations is the development of effective improvised explosive device (IED) disposal (IEDD) technology to aid in activities such as minefield clearing and bomb disposal while minimizing risk to personnel. This paper presents new results in the research and development of a next-generation mobile immersive teleoperated explosive ordnance disposal system. This system incorporates elements of 3D vision, multilateral teleoperation for high-transparency haptic feedback, immersive augmented reality operator control interfaces, and a realistic hardware-in-the-loop (HIL) 3D simulation environment incorporating vehicle and manipulator dynamics for both operator training and algorithm development. In the past year, new algorithms have been developed to facilitate incorporating commercial off-the-shelf (COTS) robotic hardware into the teleoperation system. In particular, a real-time numerical inverse position kinematics algorithm that can be applied to a wide range of manipulators has been implemented, an inertial measurement unit (IMU) attitude stabilization system for manipulators has been developed and experimentally validated, and a voice-operated manipulator control system has been developed and integrated into the operator control station. The integration of these components into a vehicle simulation environment with half-car vehicle dynamics has also been successfully carried out. A physical half-car plant is currently being constructed for HIL integration with the simulation environment.
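A numerical inverse-position-kinematics loop of the pseudoinverse family, which generalizes across manipulators in the way the abstract describes, can be sketched on a toy planar two-link arm (link lengths assumed; not the project's EOD manipulator):

```python
# Illustrative sketch of iterative Jacobian-pseudoinverse inverse
# kinematics on a planar 2-link arm; the real system targets general
# manipulators and runs in real time.
import numpy as np

L1, L2 = 1.0, 0.8  # link lengths (assumed)

def fk(q):
    """End-effector position of the planar 2-link arm."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def ik(target, q=np.zeros(2), iters=200, step=0.5):
    for _ in range(iters):
        err = target - fk(q)
        if np.linalg.norm(err) < 1e-6:
            break
        q = q + step * np.linalg.pinv(jacobian(q)) @ err  # pseudoinverse update
    return q

print(fk(ik(np.array([1.2, 0.6]))))  # should land close to the target
```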
Chevrette, Marc G; Aicheler, Fabian; Kohlbacher, Oliver; Currie, Cameron R; Medema, Marnix H
2017-10-15
Nonribosomally synthesized peptides (NRPs) are natural products with widespread applications in medicine and biotechnology. Many algorithms have been developed to predict the substrate specificities of nonribosomal peptide synthetase adenylation (A) domains from DNA sequences, which enables prioritization, dereplication, and integration with other data types in discovery efforts. However, insufficient training data and a lack of clarity regarding prediction quality have impeded optimal use. Here, we introduce prediCAT, a new phylogenetics-inspired algorithm, which quantitatively estimates the degree of predictability of each A-domain. We then systematically benchmarked all algorithms on a newly gathered, independent test set of 434 A-domain sequences, showing that active-site-motif-based algorithms outperform whole-domain-based methods. Subsequently, we developed SANDPUMA, a powerful ensemble algorithm based on newly trained versions of all high-performing algorithms, which significantly outperforms the individual methods. Finally, we deployed SANDPUMA in a systematic investigation of 7635 Actinobacteria genomes, suggesting that NRP chemical diversity is much higher than previously estimated. SANDPUMA has been integrated into the widely used antiSMASH biosynthetic gene cluster analysis pipeline and is also available as an open-source, standalone tool at https://bitbucket.org/chevrm/sandpuma and as a docker image at https://hub.docker.com/r/chevrm/sandpuma/ under the GNU Public License 3 (GPL3). Supplementary data are available at Bioinformatics online.
Real-time interactive tractography analysis for multimodal brain visualization tool: MultiXplore
NASA Astrophysics Data System (ADS)
Bakhshmand, Saeed M.; de Ribaupierre, Sandrine; Eagleson, Roy
2017-03-01
Most debilitating neurological disorders can have anatomical origins. Yet unlike other body organs, the anatomy alone cannot easily provide an understanding of brain functionality. In fact, addressing the challenge of linking structural and functional connectivity remains at the frontiers of neuroscience. Aggregating multimodal neuroimaging datasets may be critical for developing theories that span brain functionality, global neuroanatomy and internal microstructures. Functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) are the main techniques employed to investigate the brain under normal and pathological conditions. fMRI records blood oxygenation levels in the grey matter (GM), whereas DTI is able to reveal the underlying structure of the white matter (WM). Global brain activity is assumed to be an integration of GM functional hubs and the WM neural pathways that serve to connect them. In this study we developed and evaluated a two-phase algorithm, employed in a 3D interactive connectivity visualization framework, that helps to accelerate clustering of virtual neural pathways. In this paper, we detail an algorithm that makes use of an index-based membership array formed from a whole-brain tractography file and a corresponding parcellated brain atlas. Next, we demonstrate the efficiency of the algorithm by measuring the times required to extract a variety of fiber clusters, chosen to span the range of output data file sizes the algorithm is likely to generate. The proposed algorithm facilitates real-time visual inspection of neuroimaging data to further discovery in the structure-function relationship of brain networks.
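The index-based membership array can be sketched as a one-pass mapping from each streamline's endpoint atlas labels to a list of streamline IDs, after which any region-pair bundle is a dictionary lookup; the toy atlas and identity mm-to-voxel mapping below are assumptions:

```python
# Hedged sketch of an index-based membership array: each streamline is
# indexed once by the atlas labels of its endpoints, so any region-pair
# bundle can be pulled out without rescanning the whole tractogram.
import numpy as np
from collections import defaultdict

def build_membership(streamlines, atlas, to_voxel):
    """streamlines: list of (N_i, 3) mm-space arrays; atlas: 3-D label
    volume; to_voxel: function mapping mm coords to voxel indices."""
    index = defaultdict(list)
    for sid, sl in enumerate(streamlines):
        a = atlas[to_voxel(sl[0])]        # label at one endpoint
        b = atlas[to_voxel(sl[-1])]       # label at the other endpoint
        index[tuple(sorted((int(a), int(b))))].append(sid)
    return index

# Toy data: identity mm->voxel mapping on a labeled 10^3 volume.
rng = np.random.default_rng(0)
atlas = rng.integers(1, 4, size=(10, 10, 10))
lines = [rng.uniform(0, 9, size=(20, 3)) for _ in range(100)]
to_vox = lambda p: tuple(np.clip(np.round(p).astype(int), 0, 9))
index = build_membership(lines, atlas, to_vox)
print("streamlines connecting regions 1-2:", len(index.get((1, 2), [])))
```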
Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.
2014-01-01
Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953
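The kind of neighboring-voxel feature the study found decisive can be sketched as stacking raw intensities with local means; this is illustrative only, as the paper evaluates a wider family of feature functions and classifiers:

```python
# Minimal sketch of a neighborhood feature function: per-voxel raw
# intensities of each modality plus their local 3x3x3 means, stacked
# into a feature matrix for a simple classifier.
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_features(t1, t2, flair, size=3):
    """Return (n_voxels, 6) features: raw intensities and local means."""
    chans = [t1, t2, flair]
    feats = chans + [uniform_filter(c, size=size) for c in chans]
    return np.stack([f.ravel() for f in feats], axis=1)

rng = np.random.default_rng(0)
t1, t2, fl = (rng.normal(size=(32, 32, 16)) for _ in range(3))
X = neighborhood_features(t1, t2, fl)
print(X.shape)  # (16384, 6); pair with manual masks to train e.g. logistic regression
```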
A Bidirectional Brain-Machine Interface Algorithm That Approximates Arbitrary Force-Fields
Semprini, Marianna; Mussa-Ivaldi, Ferdinando A.; Panzeri, Stefano
2014-01-01
We examine bidirectional brain-machine interfaces that control external devices in a closed loop by decoding motor cortical activity to command the device and by encoding the state of the device by delivering electrical stimuli to sensory areas. Although it is possible to design this artificial sensory-motor interaction while maintaining two independent channels of communication, here we propose a rule that closes the loop between flows of sensory and motor information in a way that approximates a desired dynamical policy expressed as a field of forces acting upon the controlled external device. We previously developed a first implementation of this approach based on linear decoding of neural activity recorded from the motor cortex into a set of forces (a force field) applied to a point mass, and on encoding of position of the point mass into patterns of electrical stimuli delivered to somatosensory areas. However, this previous algorithm had the limitation that it only worked in situations when the position-to-force map to be implemented is invertible. Here we overcome this limitation by developing a new non-linear form of the bidirectional interface that can approximate a virtually unlimited family of continuous fields. The new algorithm bases both the encoding of position information and the decoding of motor cortical activity on an explicit map between spike trains and the state space of the device computed with Multi-Dimensional-Scaling. We present a detailed computational analysis of the performance of the interface and a validation of its robustness by using synthetic neural responses in a simulated sensory-motor loop. PMID:24626393
Reconstructing liver shape and position from MR image slices using an active shape model
NASA Astrophysics Data System (ADS)
Fenchel, Matthias; Thesen, Stefan; Schilling, Andreas
2008-03-01
We present an algorithm for fully automatic reconstruction of 3D position, orientation and shape of the human liver from a sparsely covering set of n 2D MR slice images. Reconstructing the shape of an organ from slice images can be used for scan planning, for surgical planning or other purposes where 3D anatomical knowledge has to be inferred from sparse slices. The algorithm is based on adapting an active shape model of the liver surface to a given set of slice images. The active shape model is created from a training set of liver segmentations from a group of volunteers. The training set is set up with semi-manual segmentations of T1-weighted volumetric MR images. Searching for the optimal shape model that best fits to the image data is done by maximizing a similarity measure based on local appearance at the surface. Two different algorithms for the active shape model search are proposed and compared: both algorithms seek to maximize the a-posteriori probability of the grey level appearance around the surface while constraining the surface to the space of valid shapes. The first algorithm works by using grey value profile statistics in normal direction. The second algorithm uses average and variance images to calculate the local surface appearance on the fly. Both algorithms are validated by fitting the active shape model to abdominal 2D slice images and comparing the shapes, which have been reconstructed, to the manual segmentations and to the results of active shape model searches from 3D image data. The results turn out to be promising and competitive to active shape model segmentations from 3D data.
Gray matter segmentation of the spinal cord with active contours in MR images.
Datta, Esha; Papinutto, Nico; Schlaeger, Regina; Zhu, Alyssa; Carballido-Gamio, Julio; Henry, Roland G
2017-02-15
Fully or partially automated spinal cord gray matter segmentation techniques will allow for pivotal spinal cord gray matter measurements in the study of various neurological disorders. The objective of this work was multi-fold: (1) to develop a gray matter segmentation technique that uses registration methods with an existing delineation of the cord edge along with Morphological Geodesic Active Contour (MGAC) models; (2) to assess the accuracy and reproducibility of the newly developed technique on 2D PSIR T1-weighted images; (3) to test how the algorithm performs on different resolutions and other contrasts; (4) to demonstrate how the algorithm can be extended to 3D scans; and (5) to show the clinical potential for multiple sclerosis (MS) patients. The MGAC algorithm was developed using a publicly available implementation of a morphological geodesic active contour model and the spinal cord segmentation tool of the software Jim (Xinapse Systems) for an initial estimate of the cord boundary. The MGAC algorithm was demonstrated on 2D PSIR images of the C2/C3 level at two different resolutions, 2D T2*-weighted images of the C2/C3 level, and a 3D PSIR image. These images were acquired from 45 healthy controls and 58 multiple sclerosis patients selected for the absence of evident lesions at the C2/C3 level. Accuracy was assessed through visual assessment, Hausdorff distances, and Dice similarity coefficients. Reproducibility was assessed through interclass correlation coefficients. Validity was assessed through comparison of segmented gray matter areas in images with different resolutions for both manual and MGAC segmentations. Between MGAC and manual segmentations in healthy controls, the mean Dice similarity coefficient was 0.88 (0.82-0.93) and the mean Hausdorff distance was 0.61 (0.46-0.76) mm. The interclass correlation coefficient from test and retest scans of healthy controls was 0.88. The percent change between the manual segmentations from high- and low-resolution images was 25%, while the percent change between the MGAC segmentations from high- and low-resolution images was 13%. Between MGAC and manual segmentations in MS patients, the average Dice similarity coefficient was 0.86 (0.8-0.92) and the average Hausdorff distance was 0.83 (0.29-1.37) mm. We demonstrate that an automatic segmentation technique, based on a morphological geodesic active contours algorithm, can provide accurate and precise spinal cord gray matter segmentations on 2D PSIR images. We have also shown how this automated technique can potentially be extended to other imaging protocols.
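The two agreement metrics reported above can be computed from binary masks as follows; the toy masks are arbitrary:

```python
# Sketch of the agreement metrics used above: the Dice similarity
# coefficient and a symmetric Hausdorff distance between binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    pa, pb = np.argwhere(a), np.argwhere(b)   # foreground coordinates
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

m1 = np.zeros((50, 50), dtype=bool); m1[10:30, 10:30] = True
m2 = np.zeros((50, 50), dtype=bool); m2[12:30, 10:32] = True
print(f"Dice={dice(m1, m2):.3f}, Hausdorff={hausdorff(m1, m2):.1f} px")
```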
NASA Astrophysics Data System (ADS)
Shahini Shamsabadi, Salar
A web-based PAVEment MONitoring system, PAVEMON, is a GIS-oriented platform for accommodating, representing, and leveraging data from a multi-modal mobile sensor system. This sensor system consists of acoustic, optical, electromagnetic, and GPS sensors and is capable of producing as much as 1 terabyte of data per day. Multi-channel raw sensor data (microphone, accelerometer, tire pressure sensor, video) and processed results (road profile, crack density, international roughness index, micro-texture depth, etc.) are outputs of this sensor system. By correlating the sensor measurements and positioning data collected in tight time synchronization, PAVEMON attaches a spatial component to all the datasets. These spatially indexed outputs are placed into an Oracle database which integrates seamlessly with PAVEMON's web-based system. The web-based system of PAVEMON consists of two major modules: 1) a GIS module for visualization and spatial analysis of pavement condition information layers, and 2) a decision-support module for managing maintenance and repair (M&R) activities and predicting future budget needs. PAVEMON weaves together sensor data with third-party climate and traffic information from the National Oceanic and Atmospheric Administration (NOAA) and Long Term Pavement Performance (LTPP) databases for an organized, data-driven approach to pavement management activities. PAVEMON deals with heterogeneous and redundant observations by fusing them into jointly derived, higher-confidence results. A prominent example of the fusion algorithms developed within PAVEMON is a data fusion algorithm used for estimating overall pavement condition in terms of ASTM's Pavement Condition Index (PCI). PAVEMON predicts PCI by undertaking a statistical fusion approach and selecting a subset of all the sensor measurements. Other fusion algorithms include noise-removal algorithms that remove false negatives in the sensor data, in addition to fusion algorithms developed for identifying features on the road. PAVEMON offers an ideal research and monitoring platform for rapid, intelligent, and comprehensive evaluation of tomorrow's transportation infrastructure based on up-to-date data from heterogeneous sensor systems.
Heald, Elizabeth; Hart, Ronald; Kilgore, Kevin; Peckham, P Hunter
2017-06-01
Previous studies have demonstrated the presence of intact axons across a spinal cord lesion, even in those clinically diagnosed with complete spinal cord injury (SCI). These axons may allow volitional motor signals to be transmitted through the injury, even in the absence of visible muscle contraction. To demonstrate the presence of volitional electromyographic (EMG) activity below the lesion in motor complete SCI and to characterize this activity to determine its value for potential use as a neuroprosthetic command source. Twenty-four subjects with complete (AIS A or B), chronic, cervical SCI were tested for the presence of volitional below-injury EMG activity. Surface electrodes recorded from 8 to 12 locations of each lower limb, while participants were asked to attempt specific movements of the lower extremity in response to visual and audio cues. EMG trials were ranked through visual inspection, and were scored using an amplitude threshold algorithm to identify channels of interest with volitional motor unit activity. Significant below-injury muscle activity was identified through visual inspection in 16 of 24 participants, and visual inspection rankings were well correlated to the algorithm scoring. The surface EMG protocol utilized here is relatively simple and noninvasive, ideal for a clinical screening tool. The majority of subjects tested were able to produce a volitional EMG signal below their injury level, and the algorithm developed allows automatic identification of signals of interest. The presence of this volitional activity in the lower extremity could provide an innovative new command signal source for implanted neuroprostheses or other assistive technology.
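An amplitude-threshold rule of the type described, flagging a channel when its smoothed rectified amplitude during cued attempts exceeds a multiple of the resting baseline, can be sketched as follows; all parameters are assumptions, not the study's values:

```python
# Hedged sketch of an amplitude-threshold detector for volitional EMG:
# a channel is marked active when its smoothed rectified envelope during
# cued movement attempts exceeds baseline mean + k * baseline std on
# more than a small fraction of cued samples. Parameters are assumed.
import numpy as np

def volitional_channels(emg, cue_mask, rest_mask, k=3.0, win=200):
    """emg: (n_channels, n_samples); cue/rest masks: boolean sample masks."""
    kernel = np.ones(win) / win
    active = []
    for ch, sig in enumerate(np.abs(emg)):             # rectify each channel
        env = np.convolve(sig, kernel, mode="same")    # smooth the envelope
        base = env[rest_mask].mean() + k * env[rest_mask].std()
        if (env[cue_mask] > base).mean() > 0.10:       # >10% of cued samples
            active.append(ch)
    return active

rng = np.random.default_rng(0)
emg = rng.normal(scale=5.0, size=(8, 10000))
emg[2, 4000:6000] += 40.0 * np.abs(rng.normal(size=2000))  # one responsive channel
cue = np.zeros(10000, bool); cue[4000:6000] = True
rest = np.zeros(10000, bool); rest[:2000] = True
print(volitional_channels(emg, cue, rest))  # -> [2]
```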
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
Materials Selection. Resources in Technology.
ERIC Educational Resources Information Center
Technology Teacher, 1991
1991-01-01
This learning activity develops algorithms to ensure that the process of selecting materials is well defined and sound. These procedures require the use of many databases to provide the designer with information such as physical, mechanical, and chemical properties of the materials under consideration. A design brief, student quiz, and five…
Optimal Control Allocation with Load Sensor Feedback for Active Load Suppression
NASA Technical Reports Server (NTRS)
Miller, Christopher
2017-01-01
These slide sets describe the OCLA formulation and associated algorithms as a set of new technologies in the first practical application of load-limiting flight control utilizing load feedback as a primary control measurement. Slide set one describes Experiment Development and slide set two describes Flight-Test Performance.
Estimating surface soil moisture from SMAP observations using a neural network technique
USDA-ARS?s Scientific Manuscript database
A Neural Network (NN) algorithm was developed to estimate global surface soil moisture for April 2015 to June 2016 with a 2-3 day repeat frequency using passive microwave observations from the Soil Moisture Active Passive (SMAP) satellite, surface soil temperatures from the NASA Goddard Earth Observ...
A Calculus Activity with Foundations in Geometric Learning
ERIC Educational Resources Information Center
Wagner, Jennifer; Sharp, Janet
2017-01-01
Calculus, perhaps more than other areas of mathematics, has a reputation for being steeped with procedures. In fact, through the years, many students have been observed getting caught in the trap of trying to memorize algorithms and rules without developing the associated concept knowledge. Specifically, students often struggle with the…
User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm.
Bourobou, Serge Thomas Mickala; Yoo, Younghwan
2015-05-21
This paper discusses the possibility of recognizing and predicting user activities in the IoT (Internet of Things) based smart environment. Activity recognition is usually done in two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, their performance was limited because they focused on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify such varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the smart environment is trained to recognize and predict user activities within the user's personal space by utilizing an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home.
An automated skin segmentation of Breasts in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.
Lee, Chia-Yen; Chang, Tzu-Fang; Chang, Nai-Yun; Chang, Yeun-Chung
2018-04-18
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is used to diagnose breast disease. Obtaining anatomical information from DCE-MRI requires the skin to be manually removed so that blood vessels and tumors can be clearly observed by physicians and radiologists; this requires considerable manpower and time. We develop an automated skin segmentation algorithm in which the surface skin is removed rapidly and correctly. The rough skin area is segmented by the active contour model and then analyzed in segments according to the continuity of the skin thickness to improve accuracy. Blood vessels and mammary glands are retained, which remedies the active contour model's tendency to remove some blood vessels. After three-dimensional imaging, the DCE-MRIs without the skin can be used to view internal anatomical information for clinical applications. The Dice coefficients of the 3D reconstructed images using the proposed algorithm and the active contour model alone for removing skin are 93.2% and 61.4%, respectively. Automated skin segmentation is about 165 times faster than manual segmentation. Texture information at the tumor positions with and without the skin was compared using paired t-tests, which all yielded p < 0.05, suggesting the proposed algorithm may enhance the observability of tumors at the 0.05 significance level.
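As a rough illustration of the active-contour starting point (not the authors' full pipeline, which adds the segment-wise skin-thickness analysis), a morphological Chan-Vese segmentation followed by extraction of a thin outer band can be sketched with scikit-image and SciPy. The band width and iteration count here are arbitrary assumptions.

    import numpy as np
    from scipy.ndimage import binary_erosion
    from skimage.segmentation import morphological_chan_vese

    def rough_skin_band(slice_2d, n_iter=150, band_px=4):
        # Segment the body on one normalized DCE-MRI slice with an
        # active-contour (Chan-Vese) model, then keep only the outer
        # band as a crude stand-in for the skin layer.
        body = morphological_chan_vese(slice_2d, n_iter).astype(bool)
        inner = binary_erosion(body, iterations=band_px)
        return body & ~inner          # boolean skin mask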
NASA Astrophysics Data System (ADS)
Luo, Yugong; Chen, Tao; Li, Keqiang
2015-12-01
The paper presents a novel active distance control strategy for intelligent hybrid electric vehicles (IHEV) with the purpose of guaranteeing optimal performance of the driving functions, safety, fuel economy, and ride comfort. Considering the complexity of driving situations, the objectives of safety and ride comfort are decoupled from that of fuel economy, and a hierarchical control architecture is adopted to improve real-time performance and adaptability. The hierarchical control structure consists of four layers: active distance control objective determination, comprehensive driving and braking torque calculation, comprehensive torque distribution, and torque coordination. The safety distance control and emergency stop algorithms are designed to achieve the safety and ride comfort goals. An optimal rule-based energy management algorithm for the hybrid electric system is developed to improve fuel economy. A torque coordination control strategy is proposed to regulate engine torque, motor torque, and hydraulic braking torque to improve ride comfort. This strategy is verified by simulation and experiment using a forward simulation platform and a prototype vehicle. The results show that the novel control strategy achieves integrated and coordinated control of its multiple subsystems, which guarantees top performance of the driving functions and optimum safety, fuel economy, and ride comfort.
Deep learning improves prediction of CRISPR-Cpf1 guide RNA activity.
Kim, Hui Kwon; Min, Seonwoo; Song, Myungjae; Jung, Soobin; Choi, Jae Woo; Kim, Younggwang; Lee, Sangeun; Yoon, Sungroh; Kim, Hyongbum Henry
2018-03-01
We present two algorithms to predict the activity of AsCpf1 guide RNAs. Indel frequencies for 15,000 target sequences were used in a deep-learning framework based on a convolutional neural network to train Seq-deepCpf1. We then incorporated chromatin accessibility information to create the better-performing DeepCpf1 algorithm for cell lines for which such information is available and show that both algorithms outperform previous machine learning algorithms on our own and published data sets.
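For a sense of the model family involved, here is a minimal Conv1d regressor over one-hot encoded target sequences in PyTorch. The layer sizes, the 34-nt length, and all names are illustrative assumptions, not the published Seq-deepCpf1 architecture.

    import torch
    import torch.nn as nn

    class SeqCNN(nn.Module):
        # Minimal 1-D CNN mapping one-hot sequences to an activity score.
        def __init__(self, seq_len=34):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(4, 32, kernel_size=5), nn.ReLU(),
                nn.AvgPool1d(2), nn.Flatten(),
                nn.Linear(32 * ((seq_len - 4) // 2), 64), nn.ReLU(),
                nn.Linear(64, 1),   # predicted indel frequency
            )

        def forward(self, x):       # x: (batch, 4, seq_len)
            return self.net(x)

    scores = SeqCNN()(torch.randn(8, 4, 34))   # dummy batch of 8 sequences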
Zhang, Yiyan; Xin, Yi; Li, Qin; Ma, Jianshe; Li, Shuai; Lv, Xiaodan; Lv, Weiqi
2017-11-02
Various kinds of data mining algorithms are continually being proposed as related disciplines develop. The applicable scopes and performances of these algorithms differ. Hence, finding a suitable algorithm for a dataset is becoming important for biomedical researchers seeking to solve practical problems promptly. In this paper, seven sophisticated, widely used algorithms, namely C4.5, support vector machine, AdaBoost, k-nearest neighbor, naïve Bayes, random forest, and logistic regression, were selected for study. The seven algorithms were applied to the 12 most-accessed UCI public datasets for the task of classification, and their performances were compared through induction and analysis. The sample size, number of attributes, number of missing values, sample size of each class, correlation coefficients between variables, class entropy of the task variable, and the ratio of the sample size of the largest class to that of the smallest class were calculated to characterize the 12 research datasets. The two ensemble algorithms reach high classification accuracy on most datasets. Moreover, random forest performs better than AdaBoost on unbalanced multi-class datasets. Simple algorithms, such as naïve Bayes and the logistic regression model, are suitable for small datasets with high correlation between the task and other, non-task attribute variables. The k-nearest neighbor and C4.5 decision tree algorithms perform well on binary- and multi-class task datasets. The support vector machine is more adept on small, balanced binary-class datasets. No algorithm maintains the best performance across all datasets. The applicability of the seven data mining algorithms to datasets with different characteristics is summarized to provide a reference for biomedical researchers and beginners in different fields.
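This kind of benchmark is straightforward to reproduce with scikit-learn. The sketch below uses a built-in dataset as a stand-in for the UCI collections and an entropy-based decision tree as the closest available analogue of C4.5; both substitutions are assumptions.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.linear_model import LogisticRegression

    X, y = load_breast_cancer(return_X_y=True)   # stand-in for a UCI set
    models = {
        "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
        "SVM": SVC(),
        "AdaBoost": AdaBoostClassifier(),
        "k-NN": KNeighborsClassifier(),
        "naive Bayes": GaussianNB(),
        "random forest": RandomForestClassifier(),
        "logistic regression": LogisticRegression(max_iter=5000),
    }
    for name, clf in models.items():
        acc = cross_val_score(clf, X, y, cv=10).mean()   # 10-fold accuracy
        print(f"{name:20s} {acc:.3f}")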
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Collaboration space division in collaborative product development based on a genetic algorithm
NASA Astrophysics Data System (ADS)
Qian, Xueming; Ma, Yanqiao; Feng, Huan
2018-02-01
Advances in the global environment, rapidly changing markets, and information technology have created a new stage for design. In such an environment, one strategy for success is Collaborative Product Development (CPD). Organizing people effectively is the goal of Collaborative Product Development, and it addresses this problem with a degree of foreseeability. Development group activities are influenced not only by the methods and decisions available, but also by the correlation among personnel. Grouping personnel according to their correlation intensity is defined as collaboration space division (CSD). Upon establishment of a correlation matrix (CM) of personnel and an analysis of the collaboration space, the genetic algorithm (GA) and the minimum description length (MDL) principle may be used as tools for optimizing the collaboration space. The MDL principle is used to set up the objective function, and the GA is used as the methodology. The algorithm encodes spatial information as a binary chromosome. After repeated crossover, mutation, selection, and reproduction, a robust chromosome is found, which can be decoded into an optimal collaboration space. This new method can determine the members of each sub-space and individual groupings within the staff. Furthermore, the intersection of sub-spaces and the public persons belonging to all sub-spaces can be determined simultaneously.
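The GA machinery itself is generic; a compact binary GA with tournament selection, one-point crossover, and bit-flip mutation is sketched below in Python. The population size, the rates, and the idea of passing in the negative MDL score as the fitness are assumptions for illustration, not the paper's settings.

    import numpy as np
    rng = np.random.default_rng(0)

    def run_ga(fitness, n_bits, pop=40, gens=200, pc=0.8, pm=0.02):
        # fitness maps a 0/1 chromosome to a score to maximize
        # (for CSD this could be the negative MDL of the grouping).
        P = rng.integers(0, 2, (pop, n_bits))
        for _ in range(gens):
            f = np.array([fitness(c) for c in P])
            i, j = rng.integers(0, pop, (2, pop))        # tournament pairs
            P = P[np.where(f[i] > f[j], i, j)]           # winners survive
            for k in range(0, pop - 1, 2):               # one-point crossover
                if rng.random() < pc:
                    x = rng.integers(1, n_bits)
                    P[k, x:], P[k + 1, x:] = P[k + 1, x:].copy(), P[k, x:].copy()
            P ^= (rng.random(P.shape) < pm).astype(P.dtype)  # bit-flip mutation
        f = np.array([fitness(c) for c in P])
        return P[f.argmax()]                             # best chromosome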
NASA Technical Reports Server (NTRS)
Barghouty, A. F.; Falconer, D. A.; Adams, J. H., Jr.
2010-01-01
This presentation describes a new forecasting tool developed for, and currently being tested by, NASA's Space Radiation Analysis Group (SRAG) at JSC, which is responsible for the monitoring and forecasting of radiation exposure levels of astronauts. The new software tool is designed for the empirical forecasting of M- and X-class flares, coronal mass ejections, and solar energetic particle events. Its algorithm is based on an empirical relationship between the rates of the various types of events and a proxy of the active region's free magnetic energy, determined from a data set of approx. 40,000 active-region magnetograms from approx. 1,300 active regions observed by SOHO/MDI that have known histories of flare, coronal mass ejection, and solar energetic particle event production. The new tool automatically extracts each strong-field magnetic area from an MDI full-disk magnetogram, identifies each as an NOAA active region, and measures a proxy of the active region's free magnetic energy from the extracted magnetogram. For each active region, the empirical relationship is then used to convert the free-magnetic-energy proxy into an expected event rate. The expected event rate in turn can be readily converted into the probability that the active region will produce such an event in a given forward time window. Descriptions of the datasets, algorithm, and software, in addition to sample applications and a validation test, are presented. Further development and transition of the new tool in anticipation of SDO/HMI is briefly discussed.
Biffi, E.; Ghezzi, D.; Pedrocchi, A.; Ferrigno, G.
2010-01-01
Neurons cultured in vitro on MicroElectrode Array (MEA) devices connect to each other, forming a network. To study electrophysiological activity and long-term plasticity effects, long-period recordings and spike-sorting methods are needed. Therefore, on-line, real-time analysis, optimization of memory use, and improvement of the data transmission rate become necessary. We developed an algorithm for amplitude-threshold spike detection, whose performance was verified with (a) statistical analysis on both simulated and real signals and (b) Big O notation. Moreover, we developed a PCA-based hierarchical classifier, evaluated on simulated and real signals. Finally, we propose a spike detection hardware design on FPGA, whose feasibility was verified in terms of the number of CLBs, memory occupation, and temporal requirements; once realized, it will be able to execute on-line detection and real-time waveform analysis, reducing data storage problems. PMID:20300592
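A minimal version of the PCA-plus-hierarchical sorting stage can be written with scikit-learn and SciPy, assuming spikes have already been detected and aligned. The component count, linkage method, and cluster count below are illustrative assumptions, not the paper's configuration.

    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, fcluster

    def sort_spikes(waveforms, n_components=3, n_units=3):
        # waveforms: (n_spikes, n_samples) array of aligned snippets.
        # Project onto principal components, then group with Ward
        # agglomerative clustering; returns a unit label per spike.
        feats = PCA(n_components=n_components).fit_transform(waveforms)
        Z = linkage(feats, method="ward")
        return fcluster(Z, t=n_units, criterion="maxclust")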
An Improved Harmonic Current Detection Method Based on Parallel Active Power Filter
NASA Astrophysics Data System (ADS)
Zeng, Zhiwu; Xie, Yunxiang; Wang, Yingpin; Guan, Yuanpeng; Li, Lanfang; Zhang, Xiaoyu
2017-05-01
Harmonic detection technology plays an important role in applications of the active power filter. The accuracy and real-time performance of harmonic detection are preconditions for ensuring the compensation performance of the Active Power Filter (APF). This paper proposes an improved instantaneous-reactive-power harmonic current detection algorithm. The algorithm combines an improved ip-iq method with a moving-average filter. The proposed ip-iq algorithm removes the αβ and dq coordinate transformations, decreasing the computational cost, simplifying the extraction of the fundamental components of the load currents, and improving detection speed. The traditional low-pass filter is replaced by the moving-average filter, detecting the harmonic currents more precisely and quickly. Compared with the traditional algorithm, the THD (Total Harmonic Distortion) of the grid currents is reduced from 4.41% to 3.89% in simulations and from 8.50% to 4.37% in experiments after the improvement. The results show the proposed algorithm is more accurate and efficient.
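The moving-average stage is easy to illustrate: a window of exactly one fundamental period averages every integer harmonic of the fundamental to zero, leaving the DC term that encodes the fundamental component. The Python sketch below shows only that filtering step, not the full ip-iq detector; the sampling rate, the 50 Hz fundamental, and the names are assumptions.

    import numpy as np

    def period_average(p, fs, f1=50.0):
        # Moving average over one fundamental period (N = fs/f1 samples)
        # extracts the DC component of the instantaneous power signal;
        # every integer harmonic of f1 averages to zero over the window.
        n = int(round(fs / f1))
        return np.convolve(p, np.ones(n) / n, mode="same")

    # harmonic (compensation) reference = signal minus its DC part:
    # p_h = p - period_average(p, fs)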
Advanced Suspension and Control Algorithm for U.S. Army Ground Vehicles
2013-04-01
Army Materiel Systems Analysis Activity (AMSAA), for his assistance and guidance in building a multibody vehicle dynamics model of a typical light... Mobility Multipurpose Wheeled Vehicle [HMMWV] model) that was developed in collaboration with the U.S. Army Materiel Systems Analysis Activity (5) is... control weight for GPC With Explicit Disturbance was R = 1.0e-7 over the entire speed range. To simplify analysis, the control weights for the other two
Wave front sensing for next generation earth observation telescope
NASA Astrophysics Data System (ADS)
Delvit, J.-M.; Thiebaut, C.; Latry, C.; Blanchet, G.
2017-09-01
High-resolution observation systems depend heavily on optics quality and are usually designed to be nearly diffraction limited. Such performance allows the Nyquist frequency to be set closer to the cutoff frequency or, equivalently, the pupil diameter to be minimized for a given ground sampling distance target. Up to now, defocus is the only aberration that is allowed to evolve slowly and that may be corrected in flight, using an open-loop correction based on ground estimation and upload of a refocusing command. For instance, the defocus of the Pleiades satellites is assessed from star acquisitions, and refocusing is done with a thermal actuation of the M2 mirror. Next-generation systems under study at CNES should include active optics in order to accommodate evolving aberrations not limited to defocus, due for instance to variable in-orbit thermal conditions. Active optics relies on aberration estimation through an onboard Wave Front Sensor (WFS). One option is a Shack-Hartmann sensor, which can be used on extended scenes (unknown landscapes). A wave-front computation algorithm should then be implemented on board the satellite to provide the wave-front error measurement for the control loop. In the worst-case scenario, this measurement must be computed before each image acquisition. A robust and fast shift estimation algorithm between Shack-Hartmann images is then needed to fulfill this requirement. A fast gradient-based algorithm using optical flow with a Lucas-Kanade method has been studied and implemented on an electronic device developed by CNES. Measurement accuracy depends on the Wave Front Error (WFE), the landscape frequency content, the number of searched aberrations, the a priori knowledge of high-order aberrations, and the characteristics of the sensor. CNES has performed a full-scale sensitivity analysis over the whole parameter set with our internally developed algorithm.
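For a single uniform shift between two sub-images, the Lucas-Kanade normal equations reduce to a 2x2 solve. The NumPy sketch below is one iteration only, so it assumes the shift is small (well under a pixel) or that it is used inside a warp-and-iterate loop; it is an illustration of the method class, not CNES's on-board implementation.

    import numpy as np

    def lk_shift(ref, mov):
        # One-step Lucas-Kanade estimate of the global (dx, dy) shift
        # between two Shack-Hartmann sub-images.  Solves
        #   [sum(Ix*Ix) sum(Ix*Iy); sum(Ix*Iy) sum(Iy*Iy)] [dx dy]^T
        #     = -[sum(Ix*It); sum(Iy*It)]
        Iy, Ix = np.gradient(ref.astype(float))      # spatial gradients
        It = mov.astype(float) - ref.astype(float)   # temporal difference
        A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                      [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
        b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
        return np.linalg.solve(A, b)                 # (dx, dy) in pixels

Note that A becomes ill-conditioned on featureless scenes, which is one reason the paper's accuracy depends on landscape frequency content.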
Graphical programming interface: A development environment for MRI methods.
Zwart, Nicholas R; Pipe, James G
2015-11-01
To introduce a multiplatform, Python language-based, development environment called graphical programming interface for prototyping MRI techniques. The interface allows developers to interact with their scientific algorithm prototypes visually in an event-driven environment making tasks such as parameterization, algorithm testing, data manipulation, and visualization an integrated part of the work-flow. Algorithm developers extend the built-in functionality through simple code interfaces designed to facilitate rapid implementation. This article shows several examples of algorithms developed in graphical programming interface including the non-Cartesian MR reconstruction algorithms for PROPELLER and spiral as well as spin simulation and trajectory visualization of a FLORET example. The graphical programming interface framework is shown to be a versatile prototyping environment for developing numeric algorithms used in the latest MR techniques. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC, and particle swarm optimisation (PSO), to extract the parameters of a metal-oxide-semiconductor field-effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using the Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method inspired by bird flocking behaviour. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
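A global-best PSO for this kind of fitting is only a few lines of NumPy. The sketch below minimizes a user-supplied error function, for example the RMS difference between measured and modelled I-V points; the swarm size, inertia, and acceleration constants are conventional defaults, not the paper's settings.

    import numpy as np
    rng = np.random.default_rng(1)

    def pso(err, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        # Minimize err(params) within per-parameter bounds [lo, hi].
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        x = lo + rng.random((n, lo.size)) * (hi - lo)   # positions
        v = np.zeros_like(x)                            # velocities
        pbest, pval = x.copy(), np.array([err(p) for p in x])
        g = pbest[pval.argmin()].copy()                 # global best
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([err(p) for p in x])
            better = f < pval
            pbest[better], pval[better] = x[better], f[better]
            g = pbest[pval.argmin()].copy()
        return g, pval.min()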
Fast Back-Propagation Learning Using Steep Activation Functions and Automatic Weight
Tai-Hoon Cho; Richard W. Conners; Philip A. Araman
1992-01-01
In this paper, several back-propagation (BP) learning speed-up algorithms that employ the "gain" parameter, i.e., the steepness of the activation function, are examined. Simulations will show that increasing the gain seemingly increases the speed of convergence and that these algorithms can converge faster than the standard BP learning algorithm on some problems. However,...
Design of Genetic Algorithms for Topology Control of Unmanned Vehicles
2010-01-01
We present genetic algorithms (GAs) as a decentralised topology control mechanism distributed among active running software agents to achieve a uniform spread of terrestrial unmanned vehicles (UVs)... inspired topology control algorithm. The topology control of UVs using a decentralised solution over an unknown geographical terrain is a challenging problem.
New development of the image matching algorithm
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqiang; Feng, Zhao
2018-04-01
To study image matching algorithms, the four elements of such an algorithm are described, i.e., the similarity measure, feature space, search space, and search strategy. Four common indices for evaluating image matching algorithms are described, i.e., matching accuracy, matching efficiency, robustness, and universality. The paper then describes the principles of image matching algorithms based on gray values, on features, on frequency-domain analysis, on neural networks, and on semantic recognition, and analyzes their characteristics and latest research achievements. Finally, the development trend of image matching algorithms is discussed. This study is significant for algorithm improvement, new algorithm design, and algorithm selection in practice.
NASA Astrophysics Data System (ADS)
Zhang, Tianran; Wooster, Martin
2016-04-01
Until recently, crop residues were the second largest industrial waste product produced in China, and field-based burning of crop residues is considered to remain extremely widespread, with impacts on air quality and potential negative effects on health and public transportation. However, due to the small size and perhaps short-lived nature of the individual burns, the extent of the activity and its spatial variability remain somewhat unclear. Satellite EO data have been used to gauge the timing and magnitude of Chinese crop burning, but current approaches very likely miss significant amounts of the activity because the individual burned areas are too small to detect with frequently acquired moderate-spatial-resolution data such as MODIS. The Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi-NPP (National Polar-orbiting Partnership) satellite, launched in October 2011, has a set of multi-spectral channels providing full global coverage at 375 m nadir spatial resolution. It is expected that the 375 m "I-band" imagery provided by VIIRS will allow active fires to be detected that are ~10× smaller than those detectable by MODIS. In this study, a new small-fire detection algorithm is built on the VIIRS I-band global fire detection algorithm and the hot spot detection algorithm developed for the BIRD satellite mission. VIIRS I-band imagery is used to identify agricultural fire activity across Eastern China. A 30 m spatial resolution global land cover map is used for false-alarm masking. Ground-based validation is performed using images taken from a UAV. The fire detection results are compared with the active fire product from the long-standing MODIS sensor onboard the TERRA and AQUA satellites, which shows that small fires missed by the traditional MODIS fire product may account for over one third of the total fire energy in Eastern China.
A Multiuser Detector Based on Artificial Bee Colony Algorithm for DS-UWB Systems
Liu, Xiaohui
2013-01-01
Artificial Bee Colony (ABC) algorithm is an optimization algorithm based on the intelligent behavior of honey bee swarms. The ABC algorithm was developed to solve numerical optimization problems and revealed promising results in processing time and solution quality. In ABC, a colony of artificial bees searches for rich artificial food sources; the numerical optimization problem is converted into the problem of finding the best parameters that minimize an objective function. The artificial bees randomly discover a population of initial solutions and then iteratively improve them by moving towards better solutions by means of a neighbor search mechanism while abandoning poor solutions. In this paper, an efficient multiuser detector based on a suboptimal code mapping multiuser detector and the artificial bee colony algorithm (SCM-ABC-MUD) is proposed and implemented in direct-sequence ultra-wideband (DS-UWB) systems under the additive white Gaussian noise (AWGN) channel. The simulation results demonstrate that the BER and near-far resistance performance of this proposed algorithm are quite close to those of the optimum multiuser detector (OMD) while its computational complexity is much lower than that of the OMD. Furthermore, the BER performance of SCM-ABC-MUD is not sensitive to the number of active users and can support a large system capacity. PMID:23983638
Verification of Minimum Detectable Activity for Radiological Threat Source Search
NASA Astrophysics Data System (ADS)
Gardiner, Hannah; Myjak, Mitchell; Baciak, James; Detwiler, Rebecca; Seifert, Carolyn
2015-10-01
The Department of Homeland Security's Domestic Nuclear Detection Office is working to develop advanced technologies that will improve the ability to detect, localize, and identify radiological and nuclear sources from airborne platforms. The Airborne Radiological Enhanced-sensor System (ARES) program is developing advanced data fusion algorithms for analyzing data from a helicopter-mounted radiation detector. This detector platform provides a rapid, wide-area assessment of radiological conditions at ground level. The NSCRAD (Nuisance-rejection Spectral Comparison Ratios for Anomaly Detection) algorithm was developed to distinguish low-count sources of interest from benign naturally occurring radiation and irrelevant nuisance sources. It uses a number of broad, overlapping regions of interest to statistically compare each newly measured spectrum with the current estimate for the background to identify anomalies. We recently developed a method to estimate the minimum detectable activity (MDA) of NSCRAD in real time. We present this method here and report on the MDA verification using both laboratory measurements and simulated injects on measured backgrounds at or near the detection limits. This work is supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-12-X-00376. This support does not constitute an express or implied endorsement on the part of the Gov't.
Trong Bui, Duong; Nguyen, Nhan Duc; Jeong, Gu-Min
2018-06-25
Human activity recognition and pedestrian dead reckoning are interesting fields because of their important applications in daily-life healthcare. Currently, these fields face many challenges, one of which is the lack of a robust, high-performance algorithm. This paper proposes a new method to implement a robust step detection and adaptive distance estimation algorithm based on the classification of five daily wrist activities during walking at various speeds using a smart band. The key idea is that the non-parametric adaptive distance estimator is run after two activity classifiers and a robust step detector. In this study, two classifiers perform two phases of recognizing the five wrist activities during walking. Then, a robust step detection algorithm, which integrates an adaptive threshold with a peak-and-valley correction algorithm, is applied to the classified activities to detect walking steps. In addition, misclassified activities are fed back to the previous layer. Finally, three adaptive distance estimators, which are based on a non-parametric model of the average walking speed, calculate the length of each stride. The experimental results show that the average classification accuracy is about 99%, and the accuracy of the step detection is 98.7%. The error of the estimated distance is 2.2-4.2% depending on the type of wrist activity.
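A bare-bones version of the adaptive-threshold step detector can be put together with SciPy. The smoothing window, threshold rule, and minimum step spacing below are illustrative assumptions; the activity classifiers and distance estimators from the paper are out of scope here.

    import numpy as np
    from scipy.signal import find_peaks

    def count_steps(acc_mag, fs):
        # acc_mag: accelerometer magnitude signal; fs: sample rate (Hz).
        k = max(1, int(0.1 * fs))                        # ~100 ms smoothing
        sm = np.convolve(acc_mag, np.ones(k) / k, mode="same")
        thr = sm.mean() + 0.3 * sm.std()                 # adaptive threshold
        peaks, _ = find_peaks(sm, height=thr, distance=int(0.3 * fs))
        valleys, _ = find_peaks(-sm, distance=int(0.3 * fs))
        # crude peak/valley correction: a step needs a preceding valley
        return sum(1 for p in peaks if np.any(valleys < p))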
Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott
2011-07-28
Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.
NASA Technical Reports Server (NTRS)
Mannino, Antonio; Russ, Mary E.; Hooker, Stanford B.
2007-01-01
In coastal ocean waters, distributions of dissolved organic carbon (DOC) and chromophoric dissolved organic matter (CDOM) vary seasonally and interannually due to multiple source inputs and removal processes. We conducted several oceanographic cruises within the continental margin of the U.S. Middle Atlantic Bight (MAB) to collect field measurements in order to develop algorithms to retrieve CDOM and DOC from NASA's MODIS-Aqua and SeaWiFS satellite sensors. To develop empirical algorithms for CDOM and DOC, we correlated the CDOM absorption coefficient (a_cdom) with in situ radiometry (remote sensing reflectance, Rrs, band ratios) and then related DOC to Rrs band ratios through the CDOM-to-DOC relationships. Our validation analyses demonstrate successful retrieval of DOC and CDOM from coastal ocean waters using the MODIS-Aqua and SeaWiFS satellite sensors, with mean absolute percent differences from field measurements of <9% for DOC, 20% for a_cdom(355), 16% for a_cdom(443), and 12% for the CDOM spectral slope. To our knowledge, the algorithms presented here represent the first validated algorithms for satellite retrieval of a_cdom, DOC, and CDOM spectral slope in the coastal ocean. The satellite-derived DOC and a_cdom products demonstrate the seasonal net ecosystem production of DOC and photooxidation of CDOM from spring to fall. With accurate satellite retrievals of CDOM and DOC, we will be able to apply satellite observations to investigate interannual and decadal-scale variability in surface CDOM and DOC within continental margins and monitor impacts of climate change and anthropogenic activities on coastal ecosystems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2015-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Applications of active adaptive noise control to jet engines
NASA Technical Reports Server (NTRS)
Shoureshi, Rahmat; Brackney, Larry
1993-01-01
During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines was considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.
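Adaptive noise cancellation is commonly illustrated with an LMS-style canceller, a simpler cousin of the minimum-variance law described here. The sketch below omits the secondary acoustic path that a full active-noise-control design (e.g., filtered-x LMS) must model; all names and constants are assumptions.

    import numpy as np

    def lms_cancel(d, ref, n_taps=64, mu=1e-3):
        # d: signal at the error sensor; ref: correlated noise reference.
        # Adapt FIR weights so w @ ref-history tracks the noise in d;
        # returns the residual after cancellation.
        w = np.zeros(n_taps)
        buf = np.zeros(n_taps)
        e = np.zeros(len(d))
        for k in range(len(d)):
            buf = np.roll(buf, 1)
            buf[0] = ref[k]
            y = w @ buf                # estimated noise component
            e[k] = d[k] - y            # residual field
            w += mu * e[k] * buf       # LMS weight update
        return e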
NASA Astrophysics Data System (ADS)
Prins, Elaine M.; Feltz, Joleen M.; Menzel, W. Paul; Ward, Darold E.
1998-12-01
The launch of the eighth Geostationary Operational Environmental Satellite (GOES-8) in 1994 introduced an improved capability for diurnal fire and smoke monitoring throughout the western hemisphere. In South America the GOES-8 automated biomass burning algorithm (ABBA) and the automated smoke/aerosol detection algorithm (ASADA) are being used to monitor biomass burning. This paper outlines GOES-8 ABBA and ASADA development activities and summarizes results for the Smoke, Clouds, and Radiation in Brazil (SCAR-B) experiment and the 1995 fire season. GOES-8 ABBA results document the diurnal, spatial, and seasonal variability in fire activity throughout South America. A validation exercise compares GOES-8 ABBA results with ground truth measurements for two SCAR-B prescribed burns. GOES-8 ASADA aerosol coverage and derived albedo results provide an overview of the extent of daily and seasonal smoke coverage and relative intensities. Day-to-day variability in smoke extent closely tracks fluctuations in fire activity.
Healthy and wellbeing activities' promotion using a Big Data approach.
Gachet Páez, Diego; de Buenaga Rodríguez, Manuel; Puertas Sánz, Enrique; Villalba, María Teresa; Muñoz Gil, Rafael
2018-06-01
The aging population and the economic crisis, especially in developed countries, have reduced the funds dedicated to health care; it is therefore desirable to optimize the costs of public and private healthcare systems, reducing the influx of chronic and dependent people to care centers. Promoting healthy lifestyles and activities can help people avoid chronic diseases such as hypertension. In this article, we describe a system for promoting an active and healthy lifestyle and for providing people with recommendations, guidelines, and valuable information about their habits. The proposed system is being developed around the Big Data paradigm, using bio-signal sensors and machine-learning algorithms for recommendations.
Developing NOAA's Climate Data Records From AVHRR and Other Data
NASA Astrophysics Data System (ADS)
Privette, J. L.; Bates, J. J.; Kearns, E. J.
2010-12-01
As part of the provisional NOAA Climate Service, NOAA is providing leadership in the development of authoritative, measurement-based information on climate change and variability. NOAA's National Climatic Data Center (NCDC) recently initiated a satellite Climate Data Record Program (CDRP) to provide sustained and objective climate information derived from meteorological satellite data that NOAA has collected over the past 30+ years, particularly from its Polar Orbiting Environmental Satellites (POES) program. These are the longest sustained global measurement records in the world and represent billions of dollars of investment. NOAA is now applying advanced analysis methods, which have improved remarkably over the last decade, to the POES AVHRR and other instrument data. Data from other satellite programs, including NASA and international research programs and the Defense Meteorological Satellite Program (DMSP), are also being used. This process will unravel the underlying climate trend and variability information and return new value from the records. In parallel, NCDC will extend these records by applying the same methods to present-day and future satellite measurements, including the Joint Polar Satellite System (JPSS) and Jason-3. In this presentation, we will describe the AVHRR-related algorithm development activities that CDRP recently selected and funded through open competitions. We will particularly discuss some of the technical challenges related to adapting and using AVHRR algorithms with the VIIRS data that should become available with the launch of the NPOESS Preparatory Project (NPP) satellite in early 2012. We will also describe IT system development activities that will provide data processing and reprocessing, storage and management. We will also outline the maturing Program framework, including the strategies for coding and development standards, community reviews, independent program oversight, and research-to-operations algorithm migration and execution. (Figure: timeline of NOAA's polar orbiters that carried AVHRR; NOAA's approach to flying the same or similar instruments sequentially is well suited to CDR development.)
Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan
2013-01-01
A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and the transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired by and tested on the protein core sequence selection problem. Sequence recapitulation tests on native active sites for two enzyme-catalyzed hydrolytic reactions were used to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes capable of catalyzing targeted chemical reactions. PMID:23649589
Design and implementation of an intelligent belt system using accelerometer.
Liu, Botong; Wang, Duo; Li, Sha; Nie, Xuhui; Xu, Shan; Jiao, Bingli; Duan, Xiaohui; Huang, Anpeng
2015-01-01
Activity monitoring systems are increasingly used. They help athletes and casual users manage physical activity during daily exercise. In this paper, we use a triaxial accelerometer to design and implement an intelligent belt system which can detect the user's steps and flapping motions. In our system, a wearable intelligent belt is worn on the user's waist to collect activity acceleration signals. We present a step detection algorithm that detects human steps in real time with high accuracy and low complexity. In our system, an Android app is developed to manage the intelligent belt. We also propose a protocol which guarantees effective and efficient data transmission between the smartphone and the wearable belt. In addition, when the user flaps the belt in an emergency, the smartphone receives an alarm signal sent by the belt and then notifies the emergency contact person, which can be very helpful for users in danger. Our experimental results show that our system can detect physical activities with high accuracy (the overall accuracy of our algorithm is above 95%) and has an effective alarm subsystem, which is significant for practical use.
Scanlon, John M; Sherony, Rini; Gabler, Hampton C
2016-09-01
Intersection crashes resulted in over 5,000 fatalities in the United States in 2014. Intersection Advanced Driver Assistance Systems (I-ADAS) are active safety systems that seek to help drivers safely traverse intersections. I-ADAS uses onboard sensors to detect oncoming vehicles and, in the event of an imminent crash, can either alert the driver or take autonomous evasive action. The objective of this study was to develop and evaluate a predictive model for detecting whether a stop sign violation was imminent. Passenger vehicle intersection approaches were extracted from a data set of typical driver behavior (100-Car Naturalistic Driving Study) and violations (event data recorders downloaded from real-world crashes) and were assigned weighting factors based on real-world frequency. A k-fold cross-validation procedure was then used to develop and evaluate 3 hypothetical stop sign warning algorithms (early, intermediate, and delayed) for detecting an impending violation during the intersection approach. Violation detection models were developed using logistic regression models that evaluate the likelihood of a violation at various locations along the intersection approach. Two potential indicators of driver intent to stop, the required deceleration parameter (RDP) and brake application, were used to develop the predictive models. The earliest violation detection opportunity was then evaluated for each detection algorithm in order to (1) evaluate the violation detection accuracy and (2) compare braking demand versus maximum braking capabilities. A total of 38 violating and 658 nonviolating approaches were used in the analysis. All 3 algorithms were able to detect a violation at some point during the intersection approach. The early detection algorithm, as designed, detected violations earlier than the other algorithms during the intersection approach but gave false alarms for 22.3% of approaches. In contrast, the delayed detection algorithm sacrificed some detection time but reduced false alarms to only 3.3% of all nonviolating approaches. Given good surface conditions (maximum braking capability = 0.8 g) and maximum effort, most drivers (55.3-71.1%) would be able to stop the vehicle regardless of the detection algorithm. However, given poor surface conditions (maximum braking capability = 0.4 g), few drivers (10.5-26.3%) would be able to stop the vehicle. Automatic emergency braking (AEB) would allow for braking before driver reaction. If equipped with an AEB system, the results suggest that, even for the poor surface condition scenario, over half (55.3-65.8%) of the vehicles could have been stopped. This study demonstrates the potential of I-ADAS to incorporate a stop sign violation detection algorithm. Repeating the analysis on a larger, more extensive data set will allow for the development of a more comprehensive algorithm to further validate the findings.
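The RDP is the constant deceleration needed to stop at the stop bar from the current speed v and remaining distance s, d_req = v^2 / (2s). Feeding it, together with brake status, into a per-location logistic regression looks roughly like the sketch below; the numbers are synthetic stand-ins, not study data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def rdp_g(speed_mps, dist_m):
        # Required deceleration (in g) to stop exactly at the stop bar.
        return speed_mps**2 / (2 * 9.81 * np.maximum(dist_m, 0.1))

    # Rows: [RDP in g, brake applied?] at one point on the approach;
    # labels: 1 = the approach ended in a violation (synthetic data).
    X = np.array([[0.10, 1], [0.15, 1], [0.25, 1],
                  [0.55, 0], [0.60, 0], [0.70, 0]])
    y = np.array([0, 0, 0, 1, 1, 1])
    model = LogisticRegression().fit(X, y)
    p_violation = model.predict_proba([[0.50, 0]])[0, 1]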
Active mask segmentation of fluorescence microscope images.
Srinivasa, Gowri; Fickus, Matthew C; Guo, Yusong; Linstedt, Adam D; Kovacević, Jelena
2009-08-01
We propose a new active mask algorithm for the segmentation of fluorescence microscope images of punctate patterns. It combines (a) the flexibility offered by active-contour methods, (b) the speed offered by multiresolution methods, (c) the smoothing offered by multiscale methods, and (d) the statistical modeling offered by region-growing methods into a fast and accurate segmentation tool. The framework moves from the idea of the "contour" to that of "inside and outside," or masks, allowing for easy multidimensional segmentation. It adapts to the topology of the image through the use of multiple masks. The algorithm is almost invariant under initialization, allowing for random initialization, and uses a few easily tunable parameters. Experiments show that the active mask algorithm matches the ground truth well and outperforms seeded watershed, the algorithm widely used in fluorescence microscopy, both qualitatively and quantitatively.
NASA Astrophysics Data System (ADS)
Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.
2017-02-01
We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010-2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ~60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.
Evaluation and Application of Satellite-Based Latent Heating Profile Estimation Methods
NASA Technical Reports Server (NTRS)
Olson, William S.; Grecu, Mircea; Yang, Song; Tao, Wei-Kuo
2004-01-01
In recent years, methods for estimating atmospheric latent heating vertical structure from both passive and active microwave remote sensing have matured to the point where quantitative evaluation of these methods is the next logical step. Two approaches for heating algorithm evaluation are proposed: First, application of heating algorithms to synthetic data, based upon cloud-resolving model simulations, can be used to test the internal consistency of heating estimates in the absence of systematic errors in physical assumptions. Second, comparisons of satellite-retrieved vertical heating structures to independent ground-based estimates, such as rawinsonde-derived analyses of heating, provide an additional test. The two approaches are complementary, since systematic errors in heating indicated by the second approach may be confirmed by the first. A passive microwave and combined passive/active microwave heating retrieval algorithm are evaluated using the described approaches. In general, the passive microwave algorithm heating profile estimates are subject to biases due to the limited vertical heating structure information contained in the passive microwave observations. These biases may be partly overcome by including more environment-specific a priori information into the algorithm's database of candidate solution profiles. The combined passive/active microwave algorithm utilizes the much higher-resolution vertical structure information provided by spaceborne radar data to produce less biased estimates; however, the global spatio-temporal sampling by spaceborne radar is limited. In the present study, the passive/active microwave algorithm is used to construct a more physically-consistent and environment-specific set of candidate solution profiles for the passive microwave algorithm and to help evaluate errors in the passive algorithm's heating estimates. Although satellite estimates of latent heating are based upon instantaneous, footprint-scale data, suppression of random errors requires averaging to at least half-degree resolution. Analysis of mesoscale and larger space-time scale phenomena based upon passive and passive/active microwave heating estimates from TRMM, SSMI, and AMSR data will be presented at the conference.
NASA Astrophysics Data System (ADS)
Ahangaran, Daryoush Kaveh; Yasrebi, Amir Bijan; Wetherelt, Andy; Foster, Patrick
2012-10-01
Application of fully automated systems for truck dispatching plays a major role in decreasing transportation costs, which often represent the majority of the costs of open pit mining. Consequently, the application of a truck dispatching system has become fundamentally important in most of the world's open pit mines. Recent experience indicates that a truck dispatching system can considerably improve the rate of production by decreasing trucks' travelling times and the associated waiting times of their shovels. Computer-based truck dispatching systems using algorithms and advanced, accurate software are examples of these innovations. Developing a computer-based dispatching algorithm appropriate to a specific mine's conditions is considered one of the most important activities in connection with computer-based dispatching in open pit mines. In this paper, the evolving trends in programming, dispatching control algorithms, and automation are discussed. Furthermore, since the transportation fleets of most mines use trucks with different capacities, innovative methods, operational optimisation techniques, and the best possible methods for developing the required real-time dispatching algorithm are selected by researching mathematical programming methods. Finally, a real-time dispatching model compatible with the requirements of trucks with different capacities is developed using two techniques: flow networks and integer programming.
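Of the two techniques named, the flow-network side has a classic special case, assigning idle trucks to shovels at minimum expected cost, that SciPy solves directly. The cost numbers below are invented for illustration, and a production dispatcher would also handle capacities, queues, and time windows.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # cost[i, j]: expected travel + queue time (min) if truck i is
    # dispatched to shovel j (illustrative numbers only).
    cost = np.array([[4.0, 9.0, 7.5],
                     [6.0, 3.5, 8.0],
                     [5.0, 7.0, 2.5],
                     [8.0, 6.5, 4.0]])
    trucks, shovels = linear_sum_assignment(cost)   # min-cost matching
    plan = list(zip(trucks, shovels))               # e.g. [(0, 0), (1, 1), ...]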
Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity
Louis, S.J.; Raines, G.L.
2003-01-01
We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
Reconstructing cortical current density by exploring sparseness in the transform domain
NASA Astrophysics Data System (ADS)
Ding, Lei
2009-05-01
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
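The transform-domain idea can be sketched in synthesis form: writing the source distribution as the integral of its variation map, x = S z, turns boundary sparsity into plain L1 sparsity on z, which a basic ISTA loop can minimize. The toy 1-D forward matrix and all constants below are assumptions; the published algorithm operates on a realistic cortical lead field.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 120, 60
S = np.tril(np.ones((n, n)))               # "integration": x = S z
A = rng.normal(size=(m, n)) / np.sqrt(m)   # stand-in lead-field (forward) matrix
x_true = np.zeros(n)
x_true[30:55] = 1.0                        # extended source: piecewise constant
y = A @ x_true + 0.01 * rng.normal(size=m)

B = A @ S                                  # forward model in the variation domain
L = np.linalg.norm(B, 2) ** 2              # Lipschitz constant of the gradient
z, lam = np.zeros(n), 0.02
for _ in range(500):                       # ISTA: gradient step + soft threshold
    z = z - B.T @ (B @ z - y) / L
    z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
x_hat = S @ z    # nonzeros in z mark boundaries between active/inactive cortex
```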
Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's (FDA) Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of acute respiratory failure (ARF). PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the ARF HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify ARF, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on ARF algorithms and validation estimates. Only two studies provided codes for ARF, each using related yet different ICD-9 codes (i.e., ICD-9 codes 518.8, "other diseases of lung," and 518.81, "acute respiratory failure"). Neither study provided validation estimates. Research needs to be conducted on designing validation studies to test ARF algorithms and on estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
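In practice such a coding algorithm reduces to flagging claims whose diagnosis codes fall in the reviewed code set; a minimal pandas sketch, with an invented toy claims table, might look like this.

```python
import pandas as pd

claims = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "dx_code": ["518.81", "250.00", "518.8", "486"],
})
# The two ICD-9 choices reported in the review (518.8 vs. 518.81)
ARF_CODES = {"518.81", "518.8"}
claims["arf_flag"] = claims["dx_code"].isin(ARF_CODES)
arf_patients = claims.loc[claims["arf_flag"], "patient_id"].unique()
print(arf_patients)   # candidate ARF cases pending validation (e.g., chart review)
```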
Flight evaluation of a computer aided low-altitude helicopter flight guidance system
NASA Technical Reports Server (NTRS)
Swenson, Harry N.; Jones, Raymond D.; Clark, Raymond
1993-01-01
The Flight Systems Development branch of the U.S. Army's Avionics Research and Development Activity (AVRADA) and NASA Ames Research Center developed for flight testing a Computer Aided Low-Altitude Helicopter Flight (CALAHF) guidance system. The system includes a trajectory-generation algorithm which uses dynamic programming, and a helmet-mounted display (HMD) presentation of a pathway-in-the-sky, a phantom aircraft, and flight-path vector/predictor guidance symbology. The trajectory-generation algorithm uses knowledge of the global mission requirements, a digital terrain map, aircraft performance capabilities, and precision navigation information to determine a trajectory between mission waypoints that seeks valleys to minimize threat exposure. The system was developed and evaluated through extensive use of piloted simulation and has demonstrated a 'pilot centered' concept of automated and integrated navigation and terrain mission planning flight guidance. It has shown a significant improvement in pilot situational awareness and mission effectiveness, as well as a decrease in the training and proficiency time required for a near-terrain, nighttime, adverse-weather system.
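The valley-seeking trajectory step can be sketched as dynamic programming over a terrain-cost grid, here with Dijkstra-style relaxation where cell cost stands in for threat exposure. The grid, costs, and 4-connected moves are simplifying assumptions, not the CALAHF formulation.

```python
import heapq
import numpy as np

rng = np.random.default_rng(2)
terrain = rng.random((40, 40))   # stand-in digital terrain map (exposure cost)

def plan(terrain, start, goal):
    """Dynamic-programming (Dijkstra) search for a minimum-exposure path."""
    h, w = terrain.shape
    dist = np.full((h, w), np.inf)
    dist[start] = 0.0
    prev, pq = {}, [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + terrain[nr, nc]          # cheaper through "valleys"
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:                          # walk back to the start
        node = prev[node]
        path.append(node)
    return path[::-1]

route = plan(terrain, (0, 0), (39, 39))
print(len(route), "waypoints")
```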
Advanced technology development for remote triage applications in bleeding combat casualties.
Ryan, Kathy L; Rickards, Caroline A; Hinojosa-Laborde, Carmen; Gerhardt, Robert T; Cain, Jeffrey; Convertino, Victor A
2011-01-01
Combat developers within the Army have envisioned development of a "wear-and-forget" physiological status monitor (PSM) that will enhance far-forward capabilities for assessment of Warrior readiness for battle, as well as for remote triage, diagnosis, and decision-making once Soldiers are injured. This paper reviews recent work testing remote triage system prototypes in both the laboratory and field exercises. Current PSM prototypes measure the electrocardiogram and respiration, but we have shown that information derived from these measurements alone is not sufficient for specific, accurate triage of combat injuries. Because of this, we have suggested that a capability to provide a metric of circulating blood volume status is required for remote triage. Recently, volume status has been successfully modeled using low-level physiological signals obtained from wearable devices as input to machine-learning algorithms; these algorithms are already able to discriminate between a state of physical activity (common in combat) and that of central hypovolemia, and thus show promise for use in wearable remote triage devices.
Self-Grading: A Simple Strategy for Formative Assessment in Activity-Based Instruction.
ERIC Educational Resources Information Center
Ulmer, M. B.
This paper discusses the author's personal experiences in developing and implementing a problem-based college mathematics course for liberal arts majors. This project was initiated in response to the realization that most students are dependent on "patterning" learning algorithms and have no expectation that self-initiated thinking is a…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-06
... hash algorithms in many computer network applications. On February 11, 2011, NIST published a notice in... Information Security Management Act (FISMA) of 2002 (Pub. L. 107-347), the Secretary of Commerce is authorized to approve Federal Information Processing Standards (FIPS). NIST activities to develop computer...
Teaching iSTART to Understand Spanish
ERIC Educational Resources Information Center
Dascalu, Mihai; Jacovina, Matthew E.; Soto, Christian M.; Allen, Laura K.; Dai, Jianmin; Guerrero, Tricia A.; McNamara, Danielle S.
2017-01-01
iSTART is a web-based reading comprehension tutor. A recent translation of iSTART from English to Spanish has made the system available to a new audience. In this paper, we outline several challenges that arose during the development process, specifically focusing on the algorithms that drive the feedback. Several iSTART activities encourage…
Smart concrete slabs with embedded tubular PZT transducers for damage detection
NASA Astrophysics Data System (ADS)
Gao, Weihang; Huo, Linsheng; Li, Hongnan; Song, Gangbing
2018-02-01
The objective of this study is to develop a new concept and methodology of a smart concrete slab (SCS) with an embedded tubular lead zirconate titanate (PZT) transducer array for image-based damage detection. Stress waves, as the detecting signals, are generated by the embedded tubular piezoceramic transducers in the SCS. Tubular piezoceramic transducers are used for their capacity to generate radially uniform stress waves in a two-dimensional concrete slab (such as bridge decks and walls), increasing the monitoring range. A circular-type delay-and-sum (DAS) imaging algorithm is developed to image the active acoustic sources based on the direct response received by each sensor. After the scattering signals from the damage are obtained by subtracting the baseline response of the intact concrete structure from that of the defective one, the elliptical-type DAS imaging algorithm is employed to process the scattering signals and reconstruct the image of the damage. Finally, two experiments, including active acoustic source monitoring and damage imaging for concrete structures, are carried out to illustrate and demonstrate the effectiveness of the proposed method.
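The circular DAS step amounts to back-propagating each recorded trace to every candidate pixel and summing the aligned samples; below is a numpy sketch under an assumed geometry, sampling rate, and wave speed (all illustrative, not the paper's values).

```python
import numpy as np

c = 4000.0                 # assumed stress-wave speed in concrete, m/s
fs = 1.0e6                 # assumed sampling rate, Hz
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # m

def das_image(signals, grid_x, grid_y):
    """Circular delay-and-sum: back-propagate each trace to every pixel."""
    img = np.zeros((len(grid_y), len(grid_x)))
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            acc = 0.0
            for s, trace in zip(sensors, signals):
                delay = np.hypot(x - s[0], y - s[1]) / c   # travel time, s
                k = int(round(delay * fs))
                if k < trace.size:
                    acc += trace[k]
            img[iy, ix] = acc ** 2                         # summed energy
    return img

# synthetic traces: an impulse arriving at each sensor from a source at (0.3, 0.6)
src = np.array([0.3, 0.6])
signals = np.zeros((4, 2048))
for i, s in enumerate(sensors):
    signals[i, int(round(np.hypot(*(src - s)) / c * fs))] = 1.0

img = das_image(signals, np.linspace(0, 1, 21), np.linspace(0, 1, 21))
iy, ix = np.unravel_index(img.argmax(), img.shape)   # peak lands near the source
print(iy, ix)
```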
User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm
Bourobou, Serge Thomas Mickala; Yoo, Younghwan
2015-01-01
This paper discusses the possibility of recognizing and predicting user activities in an IoT (Internet of Things) based smart environment. Activity recognition is usually done in two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, their performance was limited because they focused on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify highly varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the smart environment is trained to recognize and predict user activities inside the user's personal space by utilizing an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home. PMID:26007738
ProperCAD: A portable object-oriented parallel environment for VLSI CAD
NASA Technical Reports Server (NTRS)
Ramkumar, Balkrishna; Banerjee, Prithviraj
1993-01-01
Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines for which they were designed. As a result, algorithms designed to date are dependent on the architecture for which they are developed and do not port easily to other parallel architectures. A new project under way to address this problem is described: a Portable object-oriented parallel environment for CAD algorithms (ProperCAD). The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being developed on a general-purpose platform for portable parallel programming called CARM, together with a C++ environment that is truly object-oriented and specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data for other applications that were developed are also provided, namely test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.
Optimizing Controlling-Value-Based Power Gating with Gate Count and Switching Activity
NASA Astrophysics Data System (ADS)
Chen, Lei; Kimura, Shinji
In this paper, a new heuristic algorithm is proposed to optimize power domain clustering in controlling-value-based (CV-based) power gating technology. The algorithm considers both the switching activity of sleep signals (p) and the overall number of sleep gates (gate count, N), and optimizes the sum of the products of p and N. The algorithm effectively exploits the total power reduction obtainable from CV-based power gating. Even when the maximum depth is kept the same, the proposed algorithm still achieves approximately 10% more power reduction than prior algorithms. Furthermore, a detailed comparison between the proposed heuristic algorithm and other possible heuristic algorithms is also presented. HSPICE simulation results show that over 26% total power reduction can be obtained by using the new heuristic algorithm. In addition, the effect of dynamic power reduction through the CV-based power gating method and the delay overhead caused by the switching of sleep transistors are also shown in this paper.
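The Σ p·N objective can be made concrete with a toy greedy clustering: sort gates by the switching activity of their sleep signals, pack gates with similar activity into domains of bounded size, and score a domain by its worst member's activity times its gate count. This is an invented illustration of the cost function only, not the paper's heuristic or circuit model.

```python
# Toy model: each gate is (name, p) where p is the switching activity of its
# sleep (controlling-value) signal. Domain activity is approximated by the
# max member activity, since the domain wakes whenever any member's signal does.
gates = [("g0", 0.02), ("g1", 0.03), ("g2", 0.30),
         ("g3", 0.28), ("g4", 0.05), ("g5", 0.31)]
MAX_DOMAIN = 3   # stands in for the depth/size constraint

def cost(domains):
    """Sum over domains of (domain activity p) * (gate count N)."""
    return sum(max(p for _, p in d) * len(d) for d in domains)

by_activity = sorted(gates, key=lambda g: g[1])   # group similar activities
domains = [by_activity[i:i + MAX_DOMAIN]
           for i in range(0, len(by_activity), MAX_DOMAIN)]
print(domains, cost(domains))   # low-p gates share one domain, high-p another
```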
Global Precipitation Measurement
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Skofronick-Jackson, Gail; Kummerow, Christian D.; Shepherd, James Marshall
2008-01-01
This chapter begins with a brief history and background of microwave precipitation sensors, with a discussion of the sensitivity of both passive and active instruments, to trace the evolution of satellite-based rainfall techniques from an era of inference to an era of physical measurement. Next, the highly successful Tropical Rainfall Measuring Mission will be described, followed by the goals and plans for the Global Precipitation Measurement (GPM) Mission and the status of precipitation retrieval algorithm development. The chapter concludes with a summary of the need for space-based precipitation measurement, current technological capabilities, near-term algorithm advancements and anticipated new sciences and societal benefits in the GPM era.
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
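The nonlinear gain idea can be illustrated with one plausible third-order scaling: full sensitivity for small, frequent inputs and a smooth roll-off so the command never exceeds the motion envelope. The boundary conditions chosen below (f(0)=0, f(x_max)=y_max, f'(x_max)=0) are an assumption for illustration, not the report's tuned coefficients.

```python
def nonlinear_gain(x, x_max, y_max):
    """Third-order polynomial scaling of an aircraft cue x into a simulator
    command: f(x) = a1*x + a3*x**3 with f(0)=0, f(x_max)=y_max, f'(x_max)=0,
    so small inputs pass with high gain and the limit is approached smoothly."""
    a1 = 1.5 * y_max / x_max
    a3 = -0.5 * y_max / x_max ** 3
    return a1 * x + a3 * x ** 3

# e.g. scale +/-10 m/s^2 aircraft accelerations onto a +/-4 m/s^2 motion base
print(nonlinear_gain(2.0, 10.0, 4.0))    # small input, nearly linear response
print(nonlinear_gain(10.0, 10.0, 4.0))   # at the limit, exactly y_max
```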
NASA Astrophysics Data System (ADS)
Meier, W.; Stroeve, J.; Duerr, R. E.; Fetterer, F. M.
2009-12-01
The declining Arctic sea ice is one of the most dramatic indicators of climate change and is being recognized as a key factor in future climate impacts on biology, human activities, and global climate change. As such, the audience for sea ice data is expanding well beyond the sea ice community. The most comprehensive sea ice data come from a series of satellite-borne passive microwave sensors, which provide a near-complete daily time series of sea ice concentration and extent since late 1978. However, there are many complicating issues in using such data, particularly for novice users. First, there is not one single, definitive algorithm, but several. And even for a given algorithm, different processing and quality-control methods may be used, depending on the source. Second, for all algorithms, there are uncertainties in any retrieved value. In general, these limitations are well known: low spatial resolution results in imprecise ice-edge determination and a lack of small-scale detail (e.g., lead detection) within the ice pack; surface melt depresses concentration values during summer; thin ice is underestimated in some algorithms; some algorithms are sensitive to physical surface temperature; other surface features (e.g., snow) can influence retrieved data. While general error estimates are available for concentration values, currently the products do not carry grid-cell-level or even granule-level data quality information. Finally, metadata and data provenance information are limited, both of which are essential for future reprocessing. Here we describe the progress to date toward development of sea ice concentration products and outline the future steps needed to complete a sea ice climate data record.
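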
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
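The accelerated structure is easiest to see in a generic FISTA; the sketch below solves min ||Ax−b||²/2 + λ||x||₁ with the standard momentum update, keeping a plain gradient step where the paper substitutes an OS-SART subproblem and GPU kernels.

```python
import numpy as np

def fista(A, b, lam, n_iter=200):
    """Generic FISTA for min ||Ax-b||^2/2 + lam*||x||_1. The paper replaces
    this plain gradient step with an OS-SART-based subproblem; the momentum
    extrapolation shown here is the accelerating ingredient in both cases."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ z - b)              # gradient step at the extrapolated point
        x_new = z - g / L
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - lam / L, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(7)
A = rng.normal(size=(50, 100)) / np.sqrt(50)
x0 = np.zeros(100)
x0[[5, 40, 77]] = [1.0, -0.5, 2.0]         # sparse ground truth
x_rec = fista(A, A @ x0, lam=0.01)
```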
Active Solution Space and Search on Job-shop Scheduling Problem
NASA Astrophysics Data System (ADS)
Watanabe, Masato; Ida, Kenichi; Gen, Mitsuo
In this paper we propose a new Genetic Algorithm search method for the job-shop scheduling problem (JSP). Under the coding method that represents job numbers to decide the priority for placing a job on the Gantt chart (called the ordinal representation with a priority), an active schedule is created by using left shifts. We first define an active solution: a solution that yields an active schedule without using left shifts; the set of such solutions defines the active solution space. Next, we propose an algorithm named Genetic Algorithm with active solution space search (GA-asol), which can create an active solution while a solution is being evaluated, in order to search the active solution space effectively. We applied it to some benchmark problems and compared it with other methods. The experimental results show good performance.
Real time algorithms for sharp wave ripple detection.
Sethi, Ankit; Kemere, Caleb
2014-01-01
Neural activity during sharp wave ripples (SWR), short bursts of co-ordinated oscillatory activity in the CA1 region of the rodent hippocampus, is implicated in a variety of memory functions from consolidation to recall. Detection of these events in an algorithmic framework has thus far relied on simple thresholding techniques with heuristically derived parameters. This study investigates and improves the current methods for detecting SWR events in neural recordings. We propose and profile methods to reduce latency in ripple detection. The proposed algorithms are tested on simulated ripple data. The findings show that simple real-time algorithms can improve upon existing power-thresholding methods and can detect ripple activity with latencies in the range of 10-20 ms.
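A baseline real-time detector of the kind the paper benchmarks: causal band-pass filtering, a short smoothed power envelope, and a threshold crossing. The sampling rate, band edges, window length, and injected test ripple are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 1500.0                                  # assumed sampling rate, Hz
sos = butter(4, [150, 250], btype="bandpass", fs=fs, output="sos")

def detect_ripples(lfp, threshold_sd=3.0):
    """Causal band-pass + squared-envelope threshold; a streaming version
    would update the mean/SD estimates online rather than from the batch."""
    band = sosfilt(sos, lfp)                 # causal filter keeps latency low
    power = band ** 2
    k = int(0.008 * fs)                      # ~8 ms causal smoothing window
    env = np.convolve(power, np.ones(k) / k, mode="full")[:power.size]
    thr = env.mean() + threshold_sd * env.std()
    onsets = np.flatnonzero((env[1:] > thr) & (env[:-1] <= thr)) + 1
    return onsets / fs                       # detection times in seconds

rng = np.random.default_rng(8)
t = np.arange(int(2 * fs)) / fs
lfp = 0.05 * rng.normal(size=t.size)
lfp[1500:1575] += np.sin(2 * np.pi * 200 * t[:75])   # injected 50 ms "ripple"
print(detect_ripples(lfp))                           # onset detected near 1.0 s
```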
A comparative analysis of signal processing methods for motion-based rate responsive pacing.
Greenhut, S E; Shreve, E A; Lau, C P
1996-08-01
Pacemakers that augment heart rate (HR) by sensing body motion have been the most frequently prescribed rate responsive pacemakers. Many comparisons between motion-based rate responsive pacemaker models have been published. However, conclusions regarding specific signal processing methods used for rate response (e.g., filters and algorithms) can be affected by device-specific features. To objectively compare commonly used motion sensing filters and algorithms, acceleration and ECG signals were recorded from 16 normal subjects performing exercise and daily living activities. Acceleration signals were filtered (1-4 Hz or 15 Hz band-pass), then processed using threshold crossing (TC) or integration (IN) algorithms, creating four filter/algorithm combinations. Data were converted to an acceleration-indicated rate and compared to intrinsic HR using root mean square difference (RMSd) and signed RMSd. Overall, the filters and algorithms performed similarly for most activities. The only differences between filters were for walking at an increasing grade (1-4 Hz superior to 15 Hz) and for rocking in a chair (15 Hz superior to 1-4 Hz). The only differences between algorithms were for bicycling (TC superior to IN), walking at an increasing grade (IN superior to TC), and holding a drill (IN superior to TC). Performance of the four filter/algorithm combinations was also similar over most activities. The 1-4/IN (filter [Hz]/algorithm) combination performed best for walking at a grade, while the 15/TC combination was best for bicycling. However, the 15/TC combination tended to be most sensitive to higher-frequency artifact, such as automobile driving, downstairs walking, and hand drilling. Chair-rocking artifact was highest for 1-4/IN. The RMSd for bicycling and upstairs walking were large for all combinations, reflecting the nonphysiological nature of the sensor. The 1-4/TC combination demonstrated the least intersubject variability, was the only filter/algorithm combination insensitive to changes in footwear, and gave similar RMSd over a large range of amplitude thresholds for most activities. In conclusion, based on overall error performance, the preferred filter/algorithm combination depended upon the type of activity.
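The two signal-processing families compared reduce to a few lines each: threshold crossing (TC) counts suprathreshold events per window, while integration (IN) accumulates rectified amplitude. Window length, threshold, sampling rate, and the synthetic "walking" trace below are illustrative assumptions.

```python
import numpy as np

def rate_inputs(accel, fs, window_s=2.0, thresh=0.1):
    """Compute the TC and IN rate-response inputs from a filtered
    acceleration trace, one value per analysis window."""
    n = int(window_s * fs)
    tc, integ = [], []
    for i in range(0, accel.size - n, n):
        w = accel[i:i + n]
        crossings = np.count_nonzero((w[:-1] < thresh) & (w[1:] >= thresh))
        tc.append(crossings)                  # TC: events per window
        integ.append(np.sum(np.abs(w)) / fs)  # IN: integrated |acceleration|
    return np.array(tc), np.array(integ)

rng = np.random.default_rng(9)
accel = 0.2 * np.sin(2 * np.pi * 2.0 * np.arange(30 * 50) / 50)  # 2 Hz "steps"
tc, integ = rate_inputs(accel + 0.02 * rng.normal(size=accel.size), fs=50)
print(tc[:3], integ[:3])   # either stream would then be mapped to a paced rate
```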
lazar: a modular predictive toxicology framework
Maunz, Andreas; Gütlein, Martin; Rautenberg, Micha; Vorgrimmler, David; Gebele, Denis; Helma, Christoph
2013-01-01
lazar (lazy structure–activity relationships) is a modular framework for predictive toxicology. Similar to the read across procedure in toxicological risk assessment, lazar creates local QSAR (quantitative structure–activity relationship) models for each compound to be predicted. Model developers can choose between a large variety of algorithms for descriptor calculation and selection, chemical similarity indices, and model building. This paper presents a high level description of the lazar framework and discusses the performance of example classification and regression models. PMID:23761761
NASA Technical Reports Server (NTRS)
Thakoor, Anil
1991-01-01
The JPL Center for Space Microelectronics Technology (CSMT) is actively pursuing research in neural network theory, algorithms, and electronics, as well as optoelectronic neural network hardware implementations, to explore their strengths and application potential for a variety of NASA, DoD, and commercial problems where conventional computing techniques are extremely time-consuming, cumbersome, or simply nonexistent. An overview of the JPL electronic neural network hardware development activities and some of the striking applications of the JPL electronic neuroprocessors are presented.
Active control for stabilization of neoclassical tearing modes
NASA Astrophysics Data System (ADS)
Humphreys, D. A.; Ferron, J. R.; La Haye, R. J.; Luce, T. C.; Petty, C. C.; Prater, R.; Welander, A. S.
2006-05-01
This work describes active control algorithms used by DIII-D [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] to stabilize and maintain suppression of 3/2 or 2/1 neoclassical tearing modes (NTMs) by application of electron cyclotron current drive (ECCD) at the rational q surface. The DIII-D NTM control system can determine the correct q-surface/ECCD alignment and stabilize existing modes within 100-500 ms of activation, or prevent mode growth with preemptive application of ECCD, in both cases enabling stable operation at normalized beta values above 3.5. Because NTMs can limit performance or cause plasma-terminating disruptions in tokamaks, their stabilization is essential to the high performance operation of ITER [R. Aymar et al., ITER Joint Central Team, ITER Home Teams, Nucl. Fusion 41, 1301 (2001)]. The DIII-D NTM control system has demonstrated many elements of an eventual ITER solution, including general algorithms for robust detection of q-surface/ECCD alignment and for real-time maintenance of alignment following the disappearance of the mode. This latter capability, unique to DIII-D, is based on real-time reconstruction of q-surface geometry by a Grad-Shafranov solver using external magnetics and internal motional Stark effect measurements. Alignment is achieved by varying either the plasma major radius (and the rational q surface) or the toroidal field (and the deposition location). The requirement to achieve and maintain q-surface/ECCD alignment with accuracy on the order of 1 cm is routinely met by the DIII-D Plasma Control System and these algorithms. We discuss the integrated plasma control design process used for developing these and other general control algorithms, which includes physics-based modeling and testing of the algorithm implementation against simulations of actuator and plasma responses. This systematic design/test method and modeling environment enabled successful mode suppression by the NTM control system upon first-time use in an experimental discharge.
Buscema, Massimo; Grossi, Enzo; Montanini, Luisa; Street, Maria E.
2015-01-01
Objectives Intra-uterine growth retardation is often of unknown origin, and is of great interest as a "Fetal Origin of Adult Disease" has now been well recognized. We built a benchmark based upon a previously analysed data set related to intrauterine growth retardation, with 46 subjects described by 14 variables related to the insulin-like growth factor system and the pro-inflammatory cytokines interleukin-6 and tumor necrosis factor-α. Design and Methods We used new algorithms for optimal information sorting based on the combination of two neural network algorithms: Auto-Contractive Map and Activation and Competition System. Auto-Contractive Map spatializes the relationships among variables or records by constructing a suitable embedding space where 'closeness' among variables or records accurately reflects their associations. The Activation and Competition System algorithm instead works as a dynamic non-linear associative memory on the weight matrices of other algorithms, and is able to produce a prototypical variable profile of a given target. Results Classical statistical analysis proved unable to distinguish intrauterine growth retardation from appropriate-for-gestational-age (AGA) subjects due to the high non-linearity of the underlying functions. Auto-Contractive Map succeeded in clustering and completely differentiating the conditions under study, while the Activation and Competition System allowed the development of a profile of the variables that discriminated the two conditions better than any previous attempt. In particular, the Activation and Competition System showed that appropriateness for gestational age was explained by IGF-2 relative gene expression, and by IGFBP-2 and TNF-α placental contents. IUGR instead was explained by IGF-I, IGFBP-1, IGFBP-2 and IL-6 gene expression in placenta. Conclusion This analysis provided further insight into the placental key players of fetal growth within the insulin-like growth factor and cytokine systems. Our previously published analysis could identify only which variables were predictive of fetal growth in general, and identified only some relationships. PMID:26158499
Geostationary Lightning Mapper for GOES-R
NASA Technical Reports Server (NTRS)
Goodman, Steven; Blakeslee, Richard; Koshak, William
2007-01-01
The Geostationary Lightning Mapper (GLM) is a single-channel, near-IR optical detector used to detect, locate and measure total lightning activity over the full disk as part of a 3-axis stabilized, geostationary weather satellite system. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series, with a planned launch in 2014, will carry a GLM that will provide continuous day and night observations of lightning from the west coast of Africa (GOES-E) to New Zealand (GOES-W) when the constellation is fully operational. The mission objectives for the GLM are to 1) provide continuous, full-disk lightning measurements for storm warning and nowcasting, 2) provide early warning of tornadic activity, and 3) accumulate a long-term database to track decadal changes in lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 11-year data record of global lightning activity. Instrument formulation studies begun in January 2006 will be completed in March 2007, with implementation expected to begin in September 2007. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite, airborne science missions (e.g., African Monsoon Multidisciplinary Analysis, AMMA), and regional test beds (e.g., Lightning Mapping Arrays) are being used to develop the pre-launch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution. Real-time lightning mapping data now being provided to selected forecast offices will lead to improved understanding of the application of these data in the severe storm warning process and accelerate the development of the pre-launch algorithms and nowcasting applications. Proxy data combined with MODIS and Meteosat Second Generation SEVIRI observations will also lead to new applications (e.g., multi-sensor precipitation algorithms blending the GLM with the Advanced Baseline Imager, convective cloud initiation and identification, early warnings of lightning threat, storm tracking, and data assimilation).
Solar Occultation Retrieval Algorithm Development
NASA Technical Reports Server (NTRS)
Lumpe, Jerry D.
2004-01-01
This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Work to date covers initial development of generalized forward-model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on completion of the forward-model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.
Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz
2009-01-01
This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).
NASA Technical Reports Server (NTRS)
Chen, C. P.; Wu, S. T.
1992-01-01
The objective of this investigation has been to develop an algorithm (or algorithms) to improve the accuracy and efficiency of computational fluid dynamics (CFD) models used to study the fundamental physics of combustion chamber flows, which is ultimately necessary for the design of propulsion systems such as the SSME and STME. During this three year study (May 19, 1978 - May 18, 1992), a unique algorithm was developed for all-speed flows. The newly developed algorithm consists of two pressure-based algorithms, PISOC and MFICE (a modified FICE scheme). PISOC is a non-iterative scheme and FICE is an iterative scheme; PISOC has characteristic advantages for low- and high-speed flows, while the modified FICE has shown its efficiency and accuracy in computing flows in the transonic region. The new algorithm combines these two schemes. It has general application to both time-accurate and steady-state flows, and was tested extensively for various flow conditions, such as turbulent flows, chemically reacting flows, and multiphase flows.
NASA Astrophysics Data System (ADS)
Jarvis, Jan; Haertelt, Marko; Hugger, Stefan; Butschek, Lorenz; Fuchs, Frank; Ostendorf, Ralf; Wagner, Joachim; Beyerer, Juergen
2017-04-01
In this work we present data analysis algorithms for the detection of hazardous substances in hyperspectral observations acquired using active mid-infrared (MIR) backscattering spectroscopy. We present a novel background extraction algorithm, based on the adaptive target generation process proposed by Ren and Chang, called the adaptive background generation process (ABGP), which generates a robust and physically meaningful set of background spectra for operation of the well-known adaptive matched subspace detection (AMSD) algorithm. It is shown that the resulting AMSD-ABGP detection algorithm competes well with other widely used detection algorithms. The method is demonstrated on measurement data obtained by two fundamentally different active MIR hyperspectral data acquisition devices. A hyperspectral image sensor applicable to static scenes takes a wavelength-sequential approach to hyperspectral data acquisition, whereas a rapid wavelength-scanning single-element detector variant of the same principle uses spatial scanning to generate the hyperspectral observation. It is shown that the measurement timescale of the latter is sufficient for application of the data analysis algorithms even in dynamic scenarios.
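The AMSD core is a generalized likelihood ratio built from orthogonal projections onto the background subspace B and the combined target-plus-background subspace [T B]; below is a numpy sketch of that standard statistic, with the subspace matrices assumed given (e.g., B from a background-generation step such as ABGP).

```python
import numpy as np

def amsd_statistic(x, T, B):
    """AMSD GLRT: residual energy of pixel spectrum x outside the background
    subspace, relative to the residual outside the target+background subspace.
    Columns of T and B span the target and background subspaces respectively."""
    def resid_proj(M):
        # projector onto the orthogonal complement of col(M)
        return np.eye(M.shape[0]) - M @ np.linalg.pinv(M)
    Pb_perp = resid_proj(B)
    Ptb_perp = resid_proj(np.hstack([T, B]))
    num = x @ Pb_perp @ x - x @ Ptb_perp @ x
    den = x @ Ptb_perp @ x
    return num / den          # large values indicate target presence

rng = np.random.default_rng(10)
B = rng.normal(size=(64, 5))  # stand-in background spectra (64 bands, 5 vectors)
T = rng.normal(size=(64, 1))  # stand-in target signature
x = 0.8 * T[:, 0] + B @ rng.normal(size=5) + 0.1 * rng.normal(size=64)
print(amsd_statistic(x, T, B))   # well above the value for pure background
```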
Development and implementation of (Q)SAR modeling within the CHARMMing web-user interface.
Weidlich, Iwona E; Pevzner, Yuri; Miller, Benjamin T; Filippov, Igor V; Woodcock, H Lee; Brooks, Bernard R
2015-01-05
Recent availability of large publicly accessible databases of chemical compounds and their biological activities (PubChem, ChEMBL) has inspired us to develop a web-based tool for structure-activity relationship and quantitative structure-activity relationship modeling to add to the services provided by CHARMMing (www.charmming.org). This new module implements some of the most recent advances in modern machine learning algorithms: Random Forest, Support Vector Machine, Stochastic Gradient Descent, Gradient Tree Boosting, and so forth. A user can import training data from PubChem BioAssay data collections directly from our interface, or upload his or her own SD files containing structures and activity information, to create new models (either categorical or numerical). A user can then track the model generation process and run models on new data to predict activity. © 2014 Wiley Periodicals, Inc.
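A minimal sketch of the model-building step such a module wraps: fingerprint features in, cross-validated classifier out. The random fingerprints and labels below stand in for features parsed from SD files; the specific estimator settings are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(200, 1024))   # stand-in binary structural fingerprints
y = rng.integers(0, 2, size=200)           # stand-in active/inactive labels

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```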
Validity of Five Satellite-Based Latent Heat Flux Algorithms for Semi-arid Ecosystems
Feng, Fei; Chen, Jiquan; Li, Xianglan; ...
2015-12-09
Accurate estimation of latent heat flux (LE) is critical in characterizing semiarid ecosystems. Many LE algorithms have been developed during the past few decades. However, the algorithms have not been directly compared, particularly over global semiarid ecosystems. In this paper, we evaluated the performance of five LE models over semiarid ecosystems such as grassland, shrub, and savanna using the Fluxnet dataset of 68 eddy covariance (EC) sites during the period 2000–2009. We also used a modern-era retrospective analysis for research and applications (MERRA) dataset; the Normalized Difference Vegetation Index (NDVI) and Fractional Photosynthetically Active Radiation (FPAR) from the moderate resolution imaging spectroradiometer (MODIS) products; the leaf area index (LAI) from the global land surface satellite (GLASS) products; and the digital elevation model (DEM) from the shuttle radar topography mission (SRTM30) dataset to generate LE at regional scale during the period 2003–2006. The models were the moderate resolution imaging spectroradiometer LE (MOD16) algorithm, the revised remote-sensing-based Penman–Monteith LE algorithm (RRS), the Priestley–Taylor LE algorithm of the Jet Propulsion Laboratory (PT-JPL), the modified satellite-based Priestley–Taylor LE algorithm (MS-PT), and the semi-empirical Penman LE algorithm (UMD). Direct comparison with ground-measured LE showed that the PT-JPL and MS-PT algorithms had relatively high performance over semiarid ecosystems, with the coefficient of determination (R2) ranging from 0.6 to 0.8 and root mean squared error (RMSE) of approximately 20 W/m2. Empirical parameters in the structure algorithms of MOD16 and RRS, and calibrated coefficients of the UMD algorithm, may be the cause of the reduced performance of these LE algorithms, with R2 ranging from 0.5 to 0.7 and RMSE ranging from 20 to 35 W/m2 for MOD16, RRS and UMD. Sensitivity analysis showed that radiation and vegetation terms were the dominating variables affecting LE fluxes in global semiarid ecosystems.
Zafar, Raheel; Kamel, Nidal; Naufal, Mohamad; Malik, Aamir Saeed; Dass, Sarat C; Ahmad, Rana Fayyaz; Abdullah, Jafri M; Reza, Faruque
2017-01-01
Decoding of human brain activity has always been a primary goal in neuroscience, especially with functional magnetic resonance imaging (fMRI) data. In recent years, the convolutional neural network (CNN) has become a popular method for feature extraction due to its higher accuracy; however, it requires substantial computation and training data. In this study, an algorithm is developed using multivariate pattern analysis (MVPA) and a modified CNN to decode the brain's behavior for different images with a limited data set. Selection of significant features is an important part of fMRI data analysis, since it reduces the computational burden and improves prediction performance; significant features are selected using a t-test. MVPA uses machine learning algorithms to classify different brain states and helps in prediction during the task. A general linear model (GLM) is used to find the unknown parameters of every individual voxel, and classification is done using a multi-class support vector machine (SVM). The proposed MVPA-CNN based algorithm is compared with a region of interest (ROI) based method and with MVPA based estimated values. The proposed method showed better overall accuracy (68.6%) compared to ROI (61.88%) and estimated values (64.17%).
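The screening-plus-classification pipeline can be sketched directly: a voxelwise t-test between two conditions selects features, and a linear multi-class SVM is trained on the surviving voxels. The data shapes and the two-condition screen are simplifying assumptions, and in real use the selection must be nested inside cross-validation to avoid circularity.

```python
import numpy as np
from scipy import stats
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(80, 5000))   # trials x voxels (stand-in GLM beta estimates)
y = rng.integers(0, 4, size=80)   # four stimulus categories

# Voxelwise two-sample t-test between two conditions; keep "significant" voxels
t, p = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
keep = p < 0.01

clf = SVC(kernel="linear", decision_function_shape="ovr")
clf.fit(X[:, keep], y)            # multi-class SVM on the selected voxels
print(keep.sum(), "voxels retained")
```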
Thermal tracking in mobile robots for leak inspection activities.
Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki
2013-10-09
Maintenance tasks are crucial for all kinds of industries, especially extensive industrial plants, like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it will allow a constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle filter-based tracking algorithm to keep the target in the field of view of the camera and to cope with the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, adding a tracking algorithm to improve the performance of the leakage detection system.
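The tracking component can be illustrated by a bootstrap particle filter over the hot-spot position in image coordinates: Gaussian random-walk prediction, likelihood weighting against the measured centroid, and multinomial resampling. The 2-D state and all noise scales are illustrative assumptions, not the project's tuned filter.

```python
import numpy as np

rng = np.random.default_rng(5)

def pf_step(particles, measurement, motion_std=2.0, meas_std=4.0):
    """One predict/update/resample cycle of a bootstrap particle filter.
    'measurement' is the hot-spot centroid extracted from the thermal image."""
    n = len(particles)
    particles = particles + rng.normal(0.0, motion_std, particles.shape)  # predict
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / meas_std ** 2) + 1e-300                        # weight
    w /= w.sum()
    particles = particles[rng.choice(n, size=n, p=w)]                     # resample
    return particles, particles.mean(axis=0)                              # estimate

particles = rng.uniform(0.0, 64.0, size=(500, 2))   # initial spread over the image
particles, estimate = pf_step(particles, np.array([40.0, 22.0]))
print(estimate)   # feeds the robot-arm command that keeps the target in view
```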
Nonlinear-Based MEMS Sensors and Active Switches for Gas Detection.
Bouchaala, Adam; Jaber, Nizar; Yassine, Omar; Shekhah, Osama; Chernikova, Valeriya; Eddaoudi, Mohamed; Younis, Mohammad I
2016-05-25
The objective of this paper is to demonstrate the integration of a MOF thin film on electrostatically actuated microstructures to realize a switch triggered by gas and a sensing algorithm based on amplitude tracking. The devices are based on the nonlinear response of micromachined clamped-clamped beams. The microbeams are coated with a metal-organic framework (MOF), namely HKUST-1, to achieve high sensitivity. The softening and hardening nonlinear behaviors of the microbeams are exploited to demonstrate the ideas. For gas sensing, an amplitude-based tracking algorithm is developed to quantify the captured quantity of gas. Then, a MEMS switch triggered by gas using the nonlinear response of the microbeam is demonstrated. Noise analysis is conducted, which shows that the switch has high stability against thermal noise. The proposed switch is promising for delivering binary sensing information, and also can be used directly to activate useful functionalities, such as alarming.
Life Sciences Implications of Lunar Surface Operations
NASA Technical Reports Server (NTRS)
Chappell, Steven P.; Norcross, Jason R.; Abercromby, Andrew F.; Gernhardt, Michael L.
2010-01-01
The purpose of this report is to document preliminary, predicted, life sciences implications of expected operational concepts for lunar surface extravehicular activity (EVA). Algorithms developed through simulation and testing in lunar analog environments were used to predict crew metabolic rates and ground reaction forces experienced during lunar EVA. Subsequently, the total metabolic energy consumption, the daily bone load stimulus, total oxygen needed, and other variables were calculated and provided to Human Research Program and Exploration Systems Mission Directorate stakeholders. To provide context to the modeling, the report includes an overview of some scenarios that have been considered. Concise descriptions of the analog testing and development of the algorithms are also provided. This document may be updated to remain current with evolving lunar or other planetary surface operations, assumptions and concepts, and to provide additional data and analyses collected during the ongoing analog research program.
Compiler Optimization Pass Visualization: The Procedural Abstraction Case
ERIC Educational Resources Information Center
Schaeckeler, Stefan; Shang, Weijia; Davis, Ruth
2009-01-01
There is an active research community concentrating on visualizations of algorithms taught in CS1 and CS2 courses. These visualizations can help students to create concrete visual images of the algorithms and their underlying concepts. Not only "fundamental algorithms" can be visualized, but also algorithms used in compilers. Visualizations that…
CAD system for footwear design based on whole real 3D data of last surface
NASA Astrophysics Data System (ADS)
Song, Wanzhong; Su, Xianyu
2000-10-01
Two major parts of the application of CAD in footwear design are studied: the development (flattening) of the last surface, and the computer-aided design of the planar shoe-template. A new quasi-experiential development algorithm for the last surface, based on triangulation approximation, is presented. Compared with other last-surface development algorithms, this algorithm consumes less time and needs no interactive operation for precise development. Based on this algorithm, a software package, SHOEMAKER™, which contains computer-aided automatic measurement, automatic development of the last surface, and computer-aided design of the shoe-template, has been developed.
Using ACIS on the Chandra X-ray Observatory as a Particle Radiation Monitor II
NASA Technical Reports Server (NTRS)
Grant, C. E.; Ford, P. G.; Bautz, M. W.; O'Dell, S. L.
2012-01-01
The Advanced CCD Imaging Spectrometer (ACIS) is an instrument on the Chandra X-ray Observatory. CCDs are vulnerable to radiation damage, particularly from soft protons in the radiation belts and solar storms. The Chandra team has implemented procedures to protect ACIS during high-radiation events, including autonomous protection triggered by an on-board radiation monitor. Elevated temperatures have reduced the effectiveness of the on-board monitor. The ACIS team has developed an algorithm which uses data from the CCDs themselves to detect periods of high radiation, and a flight software patch applying this algorithm is currently active on board the instrument. In this paper, we explore the ACIS response to particle radiation through comparisons with a number of external measures of the radiation environment. We hope to better understand the efficiency of the algorithm as a function of the flux and spectrum of the particles and the time profile of the radiation event.
Williams, A G
1996-01-01
The 'Apple Juice' program is an interactive diabetes self-management program which runs on a laptop Macintosh PowerBook 100 computer. The dose-by-dose insulin advisory program was initially designed for children with insulin-dependent (type 1) diabetes mellitus. It utilizes several different insulin algorithms, measurement formulae, and compensation factors for meals, activity, medication and the dawn phenomenon. It was developed to assist the individual with diabetes and/or care providers in determining specific insulin dosage recommendations throughout a 24 h period. Information technology functions include, but are not limited to, automated record keeping, data recall, event reminders, data trend/pattern analyses and education. This paper highlights issues, observations and recommendations surrounding the use of the current version of the software, along with a detailed description of the insulin algorithms and measurement formulae applied successfully with the author's daughter over a six-year period.
Inverse transport calculations in optical imaging with subspace optimization algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
Martin, Bryan D.; Wolfson, Julian; Adomavicius, Gediminas; Fan, Yingling
2017-01-01
We propose and compare combinations of several methods for classifying transportation activity data from smartphone GPS and accelerometer sensors. We have two main objectives. First, we aim to classify our data as accurately as possible. Second, we aim to reduce the dimensionality of the data as much as possible in order to reduce the computational burden of the classification. We combine dimension reduction and classification algorithms and compare them with a metric that balances accuracy and dimensionality. In doing so, we develop a classification algorithm that accurately classifies five different modes of transportation (i.e., walking, biking, car, bus and rail) while being computationally simple enough to run on a typical smartphone. Further, we use data that required no behavioral changes from the smartphone users to collect. Our best classification model uses the random forest algorithm to achieve 96.8% accuracy. PMID:28885550
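As a rough illustration of the accuracy-versus-dimensionality trade-off described above, the sketch below grids over PCA dimensionality in front of a random forest; the combined metric and its weight alpha are illustrative assumptions, not the paper's actual metric.

```python
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def score_tradeoff(X, y, dims, alpha=0.005):
    """Cross-validated accuracy penalized by retained dimensionality;
    alpha weights the penalty and is purely illustrative."""
    scores = {}
    for d in dims:
        pipe = make_pipeline(
            PCA(n_components=d),
            RandomForestClassifier(n_estimators=100, random_state=0),
        )
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        scores[d] = acc - alpha * d   # combined accuracy/dimension score
    return scores

# usage: s = score_tradeoff(X, y, [5, 10, 20, 40]); best_d = max(s, key=s.get)
```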
Automated Cryocooler Monitor and Control System Software
NASA Technical Reports Server (NTRS)
Britchcliffe, Michael J.; Conroy, Bruce L.; Anderson, Paul E.; Wilson, Ahmad
2011-01-01
This software is used in an automated cryogenic control system developed to monitor and control the operation of small-scale cryocoolers. The system was designed to automate the cryogenically cooled low-noise amplifier system described in "Automated Cryocooler Monitor and Control System" (NPO-47246), NASA Tech Briefs, Vol. 35, No. 5 (May 2011), page 7a. The software contains algorithms necessary to convert non-linear output voltages from the cryogenic diode-type thermometers and vacuum pressure and helium pressure sensors, to temperature and pressure units. The control function algorithms use the monitor data to control the cooler power, vacuum solenoid, vacuum pump, and electrical warm-up heaters. The control algorithms are based on a rule-based system that activates the required device based on the operating mode. The external interface is Web-based. It acts as a Web server, providing pages for monitor, control, and configuration. No client software from the external user is required.
Satisfiability of logic programming based on radial basis function neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged
2014-07-10
In this paper, we propose a new technique to test the satisfiability of propositional logic programs and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we built radial basis function neural networks to represent propositional logic in which each clause has exactly three variables. We used the prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm is used to determine the hidden parameters (the centers and the widths). The mean sum-of-squared-errors function is used to measure the performance of the two algorithms. We applied the developed technique with recurrent radial basis function neural networks to represent quantified Boolean formulas. The new technique can be applied to solve many applications such as electronic circuits and NP-complete problems.
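A minimal sketch of the network construction follows: K-means supplies the hidden centers and a common width heuristic, and ordinary least squares stands in for the prey-predator optimizer that the paper actually uses for the output weights; the toy 3-literal clause data are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_rbf(X, y, n_hidden=8, seed=0):
    """RBF network: K-means centers, one shared width from the mean
    inter-center distance, output weights by least squares (standing in
    for the paper's prey-predator optimizer)."""
    km = KMeans(n_clusters=n_hidden, n_init=10, random_state=seed).fit(X)
    c = km.cluster_centers_
    d = np.linalg.norm(c[:, None] - c[None, :], axis=2)
    width = d.sum() / (n_hidden * (n_hidden - 1)) + 1e-9
    H = np.exp(-np.linalg.norm(X[:, None] - c[None, :], axis=2) ** 2
               / (2.0 * width ** 2))              # hidden-layer activations
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return c, width, w

# Toy clause data: +/-1 truth assignments of 3 literals; label = clause satisfied
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(200, 3))
y = ((X > 0).sum(axis=1) > 0).astype(float)
centers, width, w = train_rbf(X, y)
```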
Segmentation algorithm on smartphone dual camera: application to plant organs in the wild
NASA Astrophysics Data System (ADS)
Bertrand, Sarah; Cerutti, Guillaume; Tougne, Laure
2018-04-01
In order to identify the species of a tree, botanists inspect its different organs: the leaves, the bark, the flowers and the fruits. To develop an algorithm that automatically identifies the species, we need to extract these objects of interest from their complex natural environment. In this article, we focus on the segmentation of flowers and fruits, and we present a new segmentation method based on an active contour algorithm using two probability maps. The first map is constructed via the dual camera found on the back of the latest smartphones. The second map is made with the help of a multilayer perceptron (MLP). The combination of these two maps to drive the evolution of the object contour allows an efficient segmentation of the organ from a natural background.
Friedenberg, David A; Bouton, Chad E; Annetta, Nicholas V; Skomrock, Nicholas; Mingming Zhang; Schwemmer, Michael; Bockbrader, Marcia A; Mysiw, W Jerry; Rezai, Ali R; Bresler, Herbert S; Sharma, Gaurav
2016-08-01
Recent advances in Brain Computer Interfaces (BCIs) have created hope that one day paralyzed patients will be able to regain control of their paralyzed limbs. As part of an ongoing clinical study, we have implanted a 96-electrode Utah array in the motor cortex of a paralyzed human. The array generates almost 3 million data points from the brain every second. This presents several big data challenges towards developing algorithms that not only process the data in real-time (for the BCI to be responsive) but are also robust to temporal variations and non-stationarities in the sensor data. We demonstrate an algorithmic approach to analyze such data and present a novel method to evaluate such algorithms. We present our methodology with examples of decoding human brain data in real-time to inform a BCI.
Hardware Design of the Energy Efficient Fall Detection Device
NASA Astrophysics Data System (ADS)
Skorodumovs, A.; Avots, E.; Hofmanis, J.; Korāts, G.
2016-04-01
Health issues in elderly people may lead to various injuries during simple activities of daily living. Potentially the most dangerous are unintentional falls, which may be critical or even lethal to some patients due to the high injury risk. In the project "Wireless Sensor Systems in Telecare Application for Elderly People", we have developed a robust fall detection algorithm for a wearable wireless sensor. To optimise the algorithm for hardware performance and test it in the field, we have designed an accelerometer-based wireless fall detector. Our main considerations were: a) functionality, so that the algorithm can be applied to the chosen hardware, and b) power efficiency, so that it can run for a very long time. We picked and tested the parts, built a prototype, optimised the firmware for lowest consumption, tested the performance and measured the consumption parameters. In this paper, we discuss our design choices and present the results of our work.
ROBNCA: robust network component analysis for recovering transcription factor activities.
Noor, Amina; Ahmad, Aitzaz; Serpedin, Erchin; Nounou, Mohamed; Nounou, Hazem
2013-10-01
Network component analysis (NCA) is an efficient method of reconstructing transcription factor activity (TFA), which makes use of gene expression data and prior information available about transcription factor (TF)-gene regulations. Most contemporary algorithms either exhibit the drawback of inconsistency and poor reliability, or suffer from prohibitive computational complexity. In addition, the existing algorithms do not possess the ability to counteract the presence of outliers in the microarray data. Hence, robust and computationally efficient algorithms are needed to enable practical applications. We propose ROBust Network Component Analysis (ROBNCA), a novel iterative algorithm that explicitly models the possible outliers in the microarray data. An attractive feature of the ROBNCA algorithm is the derivation of a closed form solution for estimating the connectivity matrix, which was not available in prior contributions. The ROBNCA algorithm is compared with FastNCA and the non-iterative NCA (NI-NCA). ROBNCA estimates the TF activity profiles as well as the TF-gene control strength matrix with a much higher degree of accuracy than FastNCA and NI-NCA, irrespective of varying noise, correlation and/or amount of outliers in the case of synthetic data. The ROBNCA algorithm is also tested on Saccharomyces cerevisiae data and Escherichia coli data, and it is observed to outperform the existing algorithms. The run time of the ROBNCA algorithm is comparable with that of FastNCA, and is hundreds of times faster than NI-NCA. The ROBNCA software is available at http://people.tamu.edu/~amina/ROBNCA
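For orientation, here is a schematic alternating least-squares version of outlier-aware NCA: expression X is modeled as A·S plus a sparse outlier term O, with A restricted to the known TF-gene connectivity mask. This only shows the model structure; ROBNCA's actual closed-form connectivity update and convergence machinery are not reproduced here.

```python
import numpy as np

def nca_als(X, mask, n_iter=50, lam=1.0, seed=0):
    """Schematic outlier-aware NCA: X ~ A @ S + O, with A restricted to the
    connectivity mask and O shrunk by soft-thresholding.
    X: (genes, samples); mask: (genes, TFs) 0/1 connectivity."""
    rng = np.random.default_rng(seed)
    A = mask * rng.standard_normal(mask.shape)
    O = np.zeros_like(X, dtype=float)
    for _ in range(n_iter):
        S = np.linalg.lstsq(A, X - O, rcond=None)[0]       # TF activities
        for i in range(X.shape[0]):                        # masked row-wise LS
            idx = mask[i] != 0
            A[i, idx] = np.linalg.lstsq(S[idx].T, (X - O)[i], rcond=None)[0]
        R = X - A @ S
        O = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # sparse outliers
    return A, S, O
```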
Latest Results From the QuakeFinder Statistical Analysis Framework
NASA Astrophysics Data System (ADS)
Kappler, K. N.; MacLean, L. S.; Schneider, D.; Bleier, T.
2017-12-01
Since 2005 QuakeFinder (QF) has acquired a unique dataset with outstanding spatial and temporal sampling of the earth's magnetic field along several active fault systems. The QF network consists of 124 stations in California and 45 stations along fault zones in Greece, Taiwan, Peru, Chile and Indonesia. Each station is equipped with three feedback induction magnetometers, two ion sensors, a 4 Hz geophone, a temperature sensor, and a humidity sensor. Data are continuously recorded at 50 Hz with GPS timing and transmitted daily to the QF data center in California for analysis. QF is attempting to detect and characterize anomalous EM activity occurring ahead of earthquakes. There have been many reports of anomalous variations in the earth's magnetic field preceding earthquakes; specifically, several authors have drawn attention to apparent anomalous pulsations seen preceding earthquakes. Studies in long-term monitoring of seismic activity are often limited by the availability of event data, and it is particularly difficult to acquire a large dataset for rigorous statistical analyses of the magnetic field near earthquake epicenters because large events are relatively rare. Since QF has acquired hundreds of earthquakes in more than 70 TB of data, we developed an automated approach for finding statistical significance of precursory behavior and developed an algorithm framework. QF previously reported on the development of an algorithmic framework for data processing and hypothesis testing. The particular instance of the algorithm we discuss identifies and counts magnetic variations from time series data and ranks each station-day according to the aggregate number of pulses in a time window preceding the day in question. If the hypothesis is true that magnetic field activity increases over some time interval preceding earthquakes, this should reveal itself by the station-days on which earthquakes occur receiving higher ranks than they would if the ranking scheme were random. This can be analysed using the Receiver Operating Characteristic test. In this presentation we give a status report on our latest results, largely focused on reproducibility of results, robust statistics in the presence of missing data, and exploration of optimization landscapes in our parameter space.
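The ranking test reduces to an ROC computation over station-days; the sketch below shows the idea on synthetic pulse counts (the labels and effect size are fabricated purely to make the snippet run).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Synthetic stand-ins: pulse counts per station-day and earthquake labels
pulse_counts = rng.poisson(5, size=1000).astype(float)
quake_day = rng.binomial(1, 0.02, size=1000)
pulse_counts[quake_day == 1] += rng.normal(2.0, 1.0, quake_day.sum())

# If pre-quake magnetic pulsation is real, quake station-days rank high
auc = roc_auc_score(quake_day, pulse_counts)
print(f"ROC AUC = {auc:.2f} (0.5 = ranking no better than random)")
```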
Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner
NASA Technical Reports Server (NTRS)
Tanis, Fred J.
1984-01-01
A series of experiments have been conducted in the Great Lakes designed to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band. The atmospheric correction algorithm developed was designed to extract needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.
A translational platform for prototyping closed-loop neuromodulation systems
Afshar, Pedram; Khambhati, Ankit; Stanslaski, Scott; Carlson, David; Jensen, Randy; Linde, Dave; Dani, Siddharth; Lazarewicz, Maciej; Cong, Peng; Giftakis, Jon; Stypulkowski, Paul; Denison, Tim
2013-01-01
While modulating neural activity through stimulation is an effective treatment for neurological diseases such as Parkinson's disease and essential tremor, an opportunity for improving neuromodulation therapy remains in automatically adjusting therapy to continuously optimize patient outcomes. Practical issues associated with achieving this include the paucity of human data related to disease states, poorly validated estimators of patient state, and unknown dynamic mappings of optimal stimulation parameters based on estimated states. To overcome these challenges, we present an investigational platform including: an implanted sensing and stimulation device to collect data and run automated closed-loop algorithms; an external tool to prototype classifier and control-policy algorithms; and real-time telemetry to update the implanted device firmware and monitor its state. The prototyping system was demonstrated in a chronic large animal model studying hippocampal dynamics. We used the platform to find biomarkers of the observed states and transfer functions of different stimulation amplitudes. Data showed that moderate levels of stimulation suppress hippocampal beta activity, while high levels of stimulation produce seizure-like after-discharge activity. The biomarker and transfer function observations were mapped into classifier and control-policy algorithms, which were downloaded to the implanted device to continuously titrate stimulation amplitude for the desired network effect. The platform is designed to be a flexible prototyping tool and could be used to develop improved mechanistic models and automated closed-loop systems for a variety of neurological disorders. PMID:23346048
[An improved algorithm for electrohysterogram envelope extraction].
Lu, Yaosheng; Pan, Jie; Chen, Zhaoxia
2017-02-01
Extracting the uterine contraction signal from the abdominal uterine electromyogram (EMG) is considered the most promising method to replace the traditional tocodynamometer (TOCO) for detecting uterine contraction activity. The traditional root mean square (RMS) algorithm has only limited value in canceling impulsive noise. In our study, an improved algorithm for uterine EMG envelope extraction was proposed to overcome this problem. First, zero-crossing detection was used to separate bursts of uterine electrical activity from the raw uterine EMG signal. After processing the separated signals with two filtering windows of different widths, we used the traditional RMS algorithm to extract the uterine EMG envelope. To assess performance, the improved algorithm was compared with two existing intensity-of-uterine-electromyogram (IEMG) extraction algorithms. The results showed that the improved algorithm was better than the traditional ones at eliminating the impulsive noise present in the uterine EMG signal. The measurement sensitivity and positive predictive value (PPV) of the improved algorithm were 0.952 and 0.922, respectively, not only significantly higher than the corresponding values (0.859 and 0.847) of the first comparison algorithm but also higher than those (0.928 and 0.877) of the second. The new method is thus reliable and effective.
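A small sketch of the envelope stage, assuming plausible window widths (the abstract does not give them): two moving-RMS windows of different lengths are combined, with the pointwise minimum standing in for the paper's exact combination rule to suppress impulsive spikes.

```python
import numpy as np

def rms_envelope(x, fs, win_s):
    """Moving-window RMS envelope of signal x sampled at fs Hz."""
    n = max(1, int(win_s * fs))
    padded = np.pad(np.asarray(x, dtype=float) ** 2,
                    (n // 2, n - n // 2 - 1), mode="edge")
    return np.sqrt(np.convolve(padded, np.ones(n) / n, mode="valid"))

def dual_window_envelope(x, fs, short_s=0.1, long_s=0.5):
    """Pointwise minimum of a short and a long RMS window, so that brief
    impulsive spikes inflate only the short window and are rejected."""
    return np.minimum(rms_envelope(x, fs, short_s), rms_envelope(x, fs, long_s))
```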
Hebert, Courtney; Flaherty, Jennifer; Smyer, Justin; Ding, Jing; Mangino, Julie E
2018-03-01
Surveillance is an important tool for infection control; however, this task can often be time-consuming and take away from infection prevention activities. With the increasing availability of comprehensive electronic health records, there is an opportunity to automate these surveillance activities. The objective of this article is to describe the implementation of an electronic algorithm for ventilator-associated events (VAEs) at a large academic medical center. This article reports on a 6-month manual validation of a dashboard for VAEs. We developed a computerized algorithm for automatically detecting VAEs and compared the output of this algorithm to the traditional, manual method of VAE surveillance. Manual surveillance by the infection preventionists identified 13 possible and 11 probable ventilator-associated pneumonias (VAPs), and the VAE dashboard identified 16 possible and 13 probable VAPs. The dashboard had 100% sensitivity and 100% accuracy when compared with manual surveillance for possible and probable VAP. We report on the successfully implemented VAE dashboard. The workflow of the infection preventionists was simplified after implementation of the dashboard, with subjective time-savings reported. Implementing a computerized dashboard for VAE surveillance at a medical center with a comprehensive electronic health record is feasible; however, it required significant initial and ongoing work on the part of data analysts and infection preventionists.
Development of an algorithm for controlling a multilevel three-phase converter
NASA Astrophysics Data System (ADS)
Taissariyeva, Kyrmyzy; Ilipbaeva, Lyazzat
2017-08-01
This work is devoted to the development of an algorithm for controlling the transistors in a three-phase multilevel conversion system. The developed algorithm organizes correct operation and describes the state of the transistors at each moment of time when constructing a computer model of a three-phase multilevel converter. It also ensures in-phase operation of the three-phase converter and a sinusoidal voltage curve at the converter output.
The Rational Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Clark, Michael
2006-12-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.
Wu, Hao; Wan, Zhong
2018-02-01
In this paper, a multiobjective mixed-integer piecewise nonlinear programming model (MOMIPNLP) is built to formulate the management problem of an urban mining system, where the decision variables are associated with buy-back pricing, choice of sites, transportation planning, and adjustment of production capacity. Different from existing approaches, the social negative effect generated by structural optimization of the recycling system is minimized in our model, while the total recycling profit and the utility from environmental improvement are jointly maximized. To solve the problem, the MOMIPNLP model is first transformed into an ordinary mixed-integer nonlinear programming model by variable substitution, which removes the piecewise feature of the model. Then, based on the technique of orthogonal design, a hybrid heuristic algorithm is developed to find an approximate Pareto-optimal solution, where a genetic algorithm is used to optimize the structure of the search neighborhood, and both a local branching algorithm and a relaxation-induced neighborhood search algorithm are employed to cut the search branches and reduce the number of variables in each branch. Numerical experiments indicate that this algorithm spends less CPU (central processing unit) time in solving large-scale regional urban mining management problems, especially in comparison with similar algorithms available in the literature. By case study and sensitivity analysis, a number of practical managerial implications are revealed from the model. Since the metal stocks in society are reliable aboveground mineral sources, urban mining has attracted great attention as an emerging strategic resource in an era of resource shortage. By mathematical modeling and the development of efficient algorithms, this paper provides decision makers with useful suggestions on the optimal design of recycling systems in urban mining. For example, it can answer how to encourage enterprises to join recycling activities through government support and subsidies, whether the existing recycling system can meet developmental requirements, and what constitutes a reasonable adjustment of production capacity.
An Overview of the JPSS Ground Project Algorithm Integration Process
NASA Astrophysics Data System (ADS)
Vicente, G. A.; Williams, R.; Dorman, T. J.; Williamson, R. C.; Shaw, F. J.; Thomas, W. M.; Hung, L.; Griffin, A.; Meade, P.; Steadley, R. S.; Cember, R. P.
2015-12-01
The smooth transition, implementation, and operationalization of scientific software from the National Oceanic and Atmospheric Administration (NOAA) development teams to the Joint Polar Satellite System (JPSS) Ground Segment requires a variety of experience and expertise. This task has been accomplished by a dedicated group of scientists and engineers working in close collaboration with the NOAA Satellite and Information Services (NESDIS) Center for Satellite Applications and Research (STAR) science teams for the JPSS/Suomi National Polar-orbiting Partnership (S-NPP) Advanced Technology Microwave Sounder (ATMS), Cross-track Infrared Sounder (CrIS), Visible Infrared Imaging Radiometer Suite (VIIRS), and Ozone Mapping and Profiler Suite (OMPS) instruments. The purpose of this presentation is to describe the JPSS project process for algorithm implementation, from the very early delivery stages by the science teams to full operationalization in the Interface Data Processing Segment (IDPS), the processing system that provides Environmental Data Records (EDRs) to NOAA. Special focus is given to the NASA Data Products Engineering and Services (DPES) Algorithm Integration Team (AIT) functional and regression test activities. In the functional testing phase, the AIT uses one or a few specific chunks of data (granules), selected by the NOAA STAR Calibration and Validation (cal/val) teams, to demonstrate that a small change in the code performs properly and does not disrupt the rest of the algorithm chain. In the regression testing phase, the modified code is placed into the Government Resources for Algorithm Verification, Integration, Test and Evaluation (GRAVITE) Algorithm Development Area (ADA), a simulated and smaller version of the operational IDPS. Baseline files are swapped out, not edited, and the whole code package runs on one full orbit of Science Data Records (SDRs), using Calibration Look-Up Tables (Cal LUTs) for the time of the orbit. The purpose of the regression test is to identify unintended outcomes. Overall, the presentation provides a general and easy-to-follow overview of the JPSS Algorithm Change Process (ACP) and is intended to facilitate the audience's understanding of a very extensive and complex process.
Hamilton, Lei; McConley, Marc; Angermueller, Kai; Goldberg, David; Corba, Massimiliano; Kim, Louis; Moran, James; Parks, Philip D; Sang Chin; Widge, Alik S; Dougherty, Darin D; Eskandar, Emad N
2015-08-01
A fully autonomous intracranial device is built to continually record neural activities in different parts of the brain, process these sampled signals, decode features that correlate with behaviors and neuropsychiatric states, and use these features to deliver brain stimulation in a closed-loop fashion. In this paper, we describe the sampling and stimulation aspects of such a device. We first describe the signal processing algorithms of two unsupervised spike sorting methods. Next, we describe the LFP time-frequency analysis and feature derivation from the two spike sorting methods. The first spike sorting method takes a novel approach to constructing a dictionary learning algorithm in a Compressed Sensing (CS) framework, with a joint prediction scheme to determine the class of neural spikes; the second is a modified OSort algorithm, implemented in a distributed system optimized for power efficiency. Furthermore, sorted spikes and time-frequency analysis of LFP signals can be used to generate derived features (including cross-frequency coupling and spike-field coupling). We then show how these derived features can be used in the design and development of novel decode and closed-loop control algorithms that are optimized to apply deep brain stimulation based on a patient's neuropsychiatric state. For the control algorithm, we define the state vector as representative of a patient's impulsivity, avoidance, inhibition, etc. Controller parameters are optimized to apply stimulation based on the state vector's current state as well as its historical values. The overall algorithm and software design for our implantable neural recording and stimulation system uses an innovative, adaptable, and reprogrammable architecture that enables advancement of the state of the art in closed-loop neural control while also meeting the challenges of system power constraints and concurrent development with ongoing scientific research designed to define brain network connectivity and neural network dynamics that vary at the individual patient level and vary over time.
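Of the two sorters, the OSort variant is the easier to sketch: spikes join the nearest running cluster mean when close enough, otherwise they seed a new cluster, and nearby clusters are merged. The thresholds below are placeholders; real OSort derives them from the signal noise and merges continuously rather than in one pass.

```python
import numpy as np

def osort_like(spikes, join_thresh, merge_thresh=None):
    """Simplified OSort-style online sorting over spike waveforms."""
    merge_thresh = merge_thresh if merge_thresh is not None else join_thresh
    means, counts = [], []
    for s in spikes:
        s = np.asarray(s, dtype=float)
        if means:
            d = [np.linalg.norm(s - m) for m in means]
            k = int(np.argmin(d))
            if d[k] < join_thresh:
                counts[k] += 1
                means[k] += (s - means[k]) / counts[k]   # running cluster mean
                continue
        means.append(s.copy())
        counts.append(1)
    # one merge pass over cluster means (real OSort merges continuously)
    i = 0
    while i < len(means):
        j = i + 1
        while j < len(means):
            if np.linalg.norm(means[i] - means[j]) < merge_thresh:
                tot = counts[i] + counts[j]
                means[i] = (counts[i] * means[i] + counts[j] * means[j]) / tot
                counts[i] = tot
                del means[j], counts[j]
            else:
                j += 1
        i += 1
    return means, counts
```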
Toward Optimal Target Placement for Neural Prosthetic Devices
Cunningham, John P.; Yu, Byron M.; Gilja, Vikash; Ryu, Stephen I.; Shenoy, Krishna V.
2008-01-01
Neural prosthetic systems have been designed to estimate continuous reach trajectories (motor prostheses) and to predict discrete reach targets (communication prostheses). In the latter case, reach targets are typically decoded from neural spiking activity during an instructed delay period before the reach begins. Such systems use targets placed in radially symmetric geometries independent of the tuning properties of the neurons available. Here we seek to automate the target placement process and increase decode accuracy in communication prostheses by selecting target locations based on the neural population at hand. Motor prostheses that incorporate intended target information could also benefit from this consideration. We present an optimal target placement algorithm that approximately maximizes decode accuracy with respect to target locations. In simulated neural spiking data fit from two monkeys, the optimal target placement algorithm yielded statistically significant improvements up to 8 and 9% for two and sixteen targets, respectively. For four and eight targets, gains were more modest, as the target layouts found by the algorithm closely resembled the canonical layouts. We trained a monkey in this paradigm and tested the algorithm with experimental neural data to confirm some of the results found in simulation. In all, the algorithm can serve not only to create new target layouts that outperform canonical layouts, but it can also confirm or help select among multiple canonical layouts. The optimal target placement algorithm developed here is the first algorithm of its kind, and it should both improve decode accuracy and help automate target placement for neural prostheses. PMID:18829845
Making adjustments to event annotations for improved biological event extraction.
Baek, Seung-Cheol; Park, Jong C
2016-09-16
Current state-of-the-art approaches to biological event extraction train statistical models in a supervised manner on corpora annotated with event triggers and event-argument relations. Inspecting such corpora, we observe ambiguity in the span of event triggers (e.g., "transcriptional activity" vs. "transcriptional"), leading to inconsistencies across event trigger annotations. Such inconsistencies make it quite likely that similar phrases are annotated with different spans of event triggers, suggesting that a statistical learning algorithm may miss an opportunity to generalize from such event triggers. We anticipate that adjustments to the spans of event triggers to reduce these inconsistencies would meaningfully improve the present performance of event extraction systems. In this study, we look into this possibility with the corpora provided by the 2009 BioNLP shared task as a proof of concept. We propose an Informed Expectation-Maximization (EM) algorithm, which trains models using the EM algorithm with a posterior regularization technique that consults the gold-standard event trigger annotations in the form of constraints. We further propose four constraints on the possible event trigger annotations to be explored by the EM algorithm. The algorithm is shown to outperform the state-of-the-art algorithm on the development corpus in a statistically significant manner, and on the test corpus by a narrow margin. Analysis of the annotations generated by the algorithm shows that there are various types of ambiguity in event annotations, even though they may be small in number.
NASA Astrophysics Data System (ADS)
Jiang, Y.; Xing, H. L.
2016-12-01
Micro-seismic events induced by water injection, mining activity or oil/gas extraction are quite informative; their interpretation can be applied to the reconstruction of the underground stress and the monitoring of hydraulic fracturing progress in oil/gas reservoirs. The source characteristics and locations are the crucial parameters required for these purposes, and they can be obtained through the waveform matching inversion (WMI) method. It is therefore imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithms, as a category of nonlinear methods, possess very high convergence speed and a good capacity to overcome local minima, and have been applied well in many areas (e.g. image processing, artificial intelligence). However, their effectiveness for micro-seismic WMI is still poorly investigated; very little literature exists on this subject. In this research an advanced heuristic algorithm, the gravitational search algorithm (GSA), is proposed to estimate the focal mechanism (strike, dip and rake angles) and the source location in three dimensions. Unlike traditional inversion methods, the heuristic inversion does not require approximation of the Green's function. The method directly interacts with a CPU-parallelized finite difference forward modelling engine, updating the model parameters under GSA criteria. The effectiveness of this method is tested with synthetic data from a multi-layered elastic model; the results indicate that GSA is well suited to WMI and has unique advantages. Keywords: Micro-seismicity, Waveform matching inversion, gravitational search algorithm, parallel computation
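For concreteness, here is a minimal gravitational search algorithm in the standard form (masses from fitness, decaying gravitational constant, pairwise attractive accelerations); in the paper's setting the objective `f` would wrap the parallelized finite-difference forward model and measure waveform misfit over (strike, dip, rake, x, y, z). The decay constant and agent counts are illustrative.

```python
import numpy as np

def gsa_minimize(f, bounds, n_agents=30, n_iter=100, G0=100.0, seed=0):
    """Gravitational search algorithm (Rashedi et al. 2009 form): masses
    from fitness, decaying gravitational constant, pairwise attraction."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_agents, len(lo)))
    V = np.zeros_like(X)
    best_x, best_f = None, np.inf
    for t in range(n_iter):
        fit = np.array([f(x) for x in X])
        if fit.min() < best_f:
            best_f, best_x = fit.min(), X[fit.argmin()].copy()
        m = (fit.max() - fit + 1e-12) / (fit.max() - fit.min() + 1e-12)
        M = m / m.sum()                           # normalized agent masses
        G = G0 * np.exp(-20.0 * t / n_iter)       # decaying constant
        D = X[None, :, :] - X[:, None, :]         # D[i, j] = X[j] - X[i]
        R = np.linalg.norm(D, axis=2) + 1e-12
        acc = (G * (M[None, :, None] / R[:, :, None]) * D).sum(axis=1)
        V = rng.random(X.shape) * V + acc
        X = np.clip(X + V, lo, hi)
    return best_x, best_f

# A 6-parameter waveform-misfit function would replace this toy objective:
best, val = gsa_minimize(lambda x: float(np.sum(x ** 2)), bounds=[(-5, 5)] * 3)
```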
Lund, S H; Aspelund, T; Kirby, P; Russell, G; Einarsson, S; Palsson, O; Stefánsson, E
2016-05-01
To validate a mathematical algorithm that calculates risk of diabetic retinopathy progression in a diabetic population with UK staging (R0-3; M1) of diabetic retinopathy. To establish the utility of the algorithm to reduce screening frequency in this cohort while maintaining safety standards. The cohort comprised 9690 diabetic individuals in England, followed for 2 years. The algorithm calculated individual risk for development of preproliferative retinopathy (R2), active proliferative retinopathy (R3A) and diabetic maculopathy (M1) based on clinical data. Screening intervals were determined such that the increase in risk of developing certain stages of retinopathy between screenings was the same for all patients and identical to the mean risk under fixed annual screening. Receiver operating characteristic curves were drawn and the area under the curve calculated to estimate prediction capability. The algorithm predicts the occurrence of the given diabetic retinopathy stages with an area under the curve of 80% for patients with type II diabetes (CI 0.78 to 0.81). Of the cohort, 64% are at less than 5% risk of progression to R2, R3A or M1 within 2 years. By applying a 2 year ceiling to the screening interval, patients with type II diabetes are screened on average every 20 months, a 40% reduction in frequency compared with annual screening. The algorithm reliably identifies patients at high risk of developing advanced stages of diabetic retinopathy, including preproliferative R2, active proliferative R3A and maculopathy M1. The majority of patients have less than 5% risk of progression between stages within a year, and a small high-risk group is identified. Screening visit frequency, and presumably costs, in a diabetic retinopathy screening system can be reduced by 40% by using a 2 year ceiling. Individualised risk assessment with a 2 year ceiling on screening intervals may be a pragmatic next step in diabetic retinopathy screening in the UK, in that safety is maximised and cost reduced by about 40%.
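The interval rule can be illustrated with a constant-hazard simplification (the published algorithm uses a full multivariable risk model): choose the interval at which the accumulated progression risk reaches the fixed per-screening target, capped at the 2 year ceiling.

```python
import math

def screening_interval_months(annual_risk, target_risk=0.05, ceiling_months=24):
    """Interval at which accumulated progression risk reaches target_risk,
    assuming a constant hazard (a simplification of the published model)."""
    hazard = -math.log(1.0 - annual_risk) / 12.0          # per-month hazard
    months = -math.log(1.0 - target_risk) / hazard
    return min(ceiling_months, max(1, round(months)))

print(screening_interval_months(0.02))   # low risk  -> capped at 24 months
print(screening_interval_months(0.30))   # high risk -> ~2 months
```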
NASA Astrophysics Data System (ADS)
Debats, Stephanie Renee
Smallholder farms dominate in many parts of the world, including Sub-Saharan Africa. These systems are characterized by small, heterogeneous, and often indistinct field patterns, requiring a specialized methodology to map agricultural landcover. In this thesis, we developed a benchmark labeled data set of high-resolution satellite imagery of agricultural fields in South Africa. We presented a new approach to mapping agricultural fields, based on efficient extraction of a vast set of simple, highly correlated, and interdependent features, followed by a random forest classifier. The algorithm achieved similarly high performance across agricultural types, including spectrally indistinct smallholder fields, and demonstrated the ability to generalize across large geographic areas. In sensitivity analyses, we determined that multi-temporal images provided greater performance gains than the addition of multi-spectral bands. We also demonstrated how active learning can be incorporated in the algorithm to create smaller, more efficient training data sets, which reduced computational resources, minimized the need for humans to hand-label data, and boosted performance. We designed a patch-based uncertainty metric to drive the active learning framework, based on the regular grid of a crowdsourcing platform, and demonstrated how subject matter experts can be replaced with fleets of crowdsourcing workers. Our active learning algorithm achieved performance similar to that of an algorithm trained with randomly selected data, but with 62% fewer data samples. This thesis furthers the goal of providing accurate agricultural landcover maps at a scale that is relevant for the dominant smallholder class. Accurate maps are crucial for monitoring and promoting agricultural production. Furthermore, improved agricultural landcover maps will aid a host of other applications, including landcover change assessments, cadastral surveys to strengthen smallholder land rights, and constraints for crop modeling and famine prediction.
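A minimal pool-based uncertainty-sampling loop of the kind described, with a random forest learner; least-confidence scoring per sample stands in for the thesis's patch-based uncertainty metric, and all sizes are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def active_learn(X_pool, y_pool, n_init=50, n_rounds=10, batch=25, seed=0):
    """Pool-based uncertainty sampling: each round, query the samples the
    current forest is least confident about. X_pool, y_pool: numpy arrays."""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    for _ in range(n_rounds):
        clf.fit(X_pool[labeled], y_pool[labeled])
        proba = clf.predict_proba(X_pool)
        uncert = 1.0 - proba.max(axis=1)      # least-confidence score
        uncert[labeled] = -np.inf             # never re-query labeled samples
        labeled += list(np.argsort(uncert)[-batch:])
    return clf.fit(X_pool[labeled], y_pool[labeled]), labeled
```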
Mazumder, Oishee; Kundu, Ananda Sankar; Lenka, Prasanna Kumar; Bhaumik, Subhasis
2016-10-01
Ambulatory activity classification is an active area of research for controlling and monitoring state initiation, termination, and transition in mobility assistive devices such as lower-limb exoskeletons. State transitions of lower-limb exoskeletons reported thus far are achieved mostly through the use of manual switches or state machine-based logic. In this paper, we propose a postural activity classifier using a dendrogram-based support vector machine (DSVM) which can be used to control a lower-limb exoskeleton. A pressure sensor-based wearable insole and two six-axis inertial measurement units (IMUs) were used for recognising two static postural activities (sit and stand) and seven dynamic ones (sit-to-stand, stand-to-sit, level walk, fast walk, slope walk, stair ascent and stair descent). Most ambulatory activities are periodic in nature and have unique patterns of response. The proposed classification algorithm involves the recognition of activity patterns on the basis of the periodic shape of trajectories. Polynomial coefficients extracted from the hip angle trajectory and the centre-of-pressure (CoP) trajectory during an activity cycle are used as features to classify dynamic activities. The novelty of this paper lies in finding suitable instrumentation, developing post-processing techniques, and selecting shape-based features for ambulatory activity classification. The proposed activity classifier is used to identify the activity states of a lower-limb exoskeleton, and the DSVM algorithm achieved an overall classification accuracy of 95.2%.
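The shape-feature step can be sketched directly: fit low-degree polynomials to the hip-angle and CoP trajectories over one activity cycle and concatenate the coefficients. The polynomial degree here is a guess, and a plain RBF-SVM stands in for the dendrogram-structured classifier.

```python
import numpy as np

def cycle_features(hip_angle, cop_xy, deg=4):
    """Concatenate polynomial coefficients fitted over one activity cycle:
    hip-angle trajectory plus both CoP coordinates (degree is illustrative)."""
    t = np.linspace(0.0, 1.0, len(hip_angle))
    feats = [np.polyfit(t, hip_angle, deg)]
    feats += [np.polyfit(t, cop_xy[:, k], deg) for k in range(2)]
    return np.concatenate(feats)

rng = np.random.default_rng(0)
hip = np.sin(np.linspace(0, 2 * np.pi, 100)) + rng.normal(0, 0.05, 100)
cop = rng.normal(0, 1, size=(100, 2)).cumsum(axis=0) * 0.01
print(cycle_features(hip, cop).shape)   # (3 * (deg + 1),) features per cycle
# an SVM at each dendrogram node is then trained on such vectors,
# e.g. sklearn.svm.SVC(kernel="rbf").fit(F_train, y_train)
```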
Sokoll, Stefan; Tönnies, Klaus; Heine, Martin
2012-01-01
In this paper we present an algorithm for the detection of spontaneous activity at individual synapses in microscopy images. By employing the optical marker pHluorin, we are able to visualize synaptic vesicle release with a spatial resolution in the nm range in a non-invasive manner. We compute individual synaptic signals from automatically segmented regions of interest and detect peaks that represent synaptic activity using a continuous wavelet transform based algorithm. As opposed to standard peak detection algorithms, we employ multiple wavelets to match all relevant features of the peak. We evaluate our multiple wavelet algorithm (MWA) on real data and assess the performance on synthetic data over a wide range of signal-to-noise ratios.
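As a single-wavelet baseline for comparison, here is SciPy's CWT peak detector (Ricker wavelet) applied to a synthetic pHluorin-like trace; the paper's MWA extends this idea by matching several wavelets to different peak features.

```python
import numpy as np
from scipy import signal

def detect_release_events(trace, widths=np.arange(2, 20)):
    """Single-wavelet (Ricker) CWT peak detection on one synapse's trace."""
    return signal.find_peaks_cwt(trace, widths)

rng = np.random.default_rng(0)
t = np.arange(2000)
trace = rng.normal(0, 0.3, t.size)
for p in (300, 900, 1500):                       # synthetic release events
    trace += 2.0 * np.exp(-(t - p) ** 2 / 50.0)
print(detect_release_events(trace))
```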
NASA Technical Reports Server (NTRS)
Cota, Glenn F.
2001-01-01
The overall goal of this effort is to acquire a large bio-optical database, encompassing most environmental variability in the Arctic, to develop algorithms for phytoplankton biomass and production and other optically active constituents. A large suite of bio-optical and biogeochemical observations has been collected in a variety of high-latitude ecosystems at different seasons. The Ocean Research Consortium of the Arctic (ORCA) is a collaborative effort between G.F. Cota of Old Dominion University (ODU), W.G. Harrison and T. Platt of the Bedford Institute of Oceanography (BIO), S. Sathyendranath of Dalhousie University and S. Saitoh of Hokkaido University. ORCA has now conducted 12 cruises and collected over 500 in-water optical profiles plus a variety of ancillary data. Observational suites typically include apparent optical properties (AOPs), inherent optical properties (IOPs), and a variety of ancillary observations including sun photometry, biogeochemical profiles, and productivity measurements. All quality-assured data have been submitted to NASA's SeaWiFS Bio-Optical Archive and Storage System (SeaBASS) data archive. Our algorithm development efforts address most of the potential bio-optical data products for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), Moderate Resolution Imaging Spectroradiometer (MODIS), and GLI, and provide validation for specific areas of concern, i.e., high latitudes and coastal waters.
NASA Astrophysics Data System (ADS)
Englander, J. G.; Brodrick, P. G.; Brandt, A. R.
2015-12-01
Fugitive emissions from oil and gas extraction have become a greater concern with the recent increases in development of shale hydrocarbon resources. There are significant gaps in the tools and research used to estimate fugitive emissions from oil and gas extraction. Two approaches exist for quantifying these emissions: atmospheric (or 'top down') studies, which measure methane fluxes remotely, and inventory-based ('bottom up') studies, which aggregate leakage rates on an equipment-specific basis. Bottom-up studies require counting or estimating how many devices might be leaking (called an 'activity count'), as well as how much each device might leak on average (an 'emissions factor'). In a real-world inventory, there is uncertainty in both activity counts and emissions factors. Even at the well level there are significant disagreements in data reporting. For example, some prior studies noted a ~5x difference in the number of reported well completions in the United States between EPA and private data sources. The purpose of this work is to address activity count uncertainty by using machine learning algorithms to classify oilfield surface facilities in high-resolution spatial imagery. This method can help estimate venting and fugitive emission sources in regions where reporting of oilfield equipment is incomplete or non-existent. This work utilizes high-resolution satellite imagery to count well pads in the Bakken oil field of North Dakota. The initial study examines an area of ~2,000 km2 with ~1000 well pads. We compare different machine learning classification techniques, and explore the impact of training set size, input variables, and image segmentation settings to develop efficient and robust techniques for identifying well pads. We discuss the tradeoffs inherent to different classification algorithms, and determine the optimal algorithms for oilfield feature detection. In the future, the results of this work will be leveraged to provide activity counts of oilfield surface equipment including tanks, pumpjacks, and holding ponds.
An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.
Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei
2013-05-01
Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
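A batch stand-in for the DPMM clustering stage using scikit-learn's Dirichlet-process mixture (variational inference, not the paper's incremental scheme): trajectories are assumed already reduced to fixed-length feature vectors, and the synthetic features are invented for illustration.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Trajectories pre-reduced to fixed-length feature vectors (e.g. resampled
# x/y coordinates); three synthetic groups for illustration.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(c, 0.3, size=(40, 6)) for c in (0.0, 2.0, 5.0)])

dpmm = BayesianGaussianMixture(
    n_components=20,                                   # upper bound only
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(feats)
labels = dpmm.predict(feats)
print("effective clusters:", np.unique(labels).size)   # extra components pruned
```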
Mannan, Malik M Naeem; Kim, Shinjung; Jeong, Myung Yung; Kamran, M Ahmad
2016-02-19
Contamination by eye movement and blink artifacts in electroencephalogram (EEG) recordings makes the analysis of EEG data more difficult and can result in misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy for developing brain-computer interfaces (BCIs). In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data by using a hybrid EEG and eye-tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes ocular artifacts from the artifactual zone but also preserves the neuronal-activity-related EEG signals in the non-artifactual zone. Comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm in removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm achieves lower relative error and higher mutual information values between the corrected EEG and artifact-free EEG data.
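A compact sketch of the ICA stage, assuming an ocular reference channel is available: components that correlate strongly with the reference are zeroed before re-mixing. Correlation thresholding stands in for the paper's system-identification step, and the threshold is an assumption.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_ocular(eeg, eog_ref, corr_thresh=0.7, seed=0):
    """Zero ICA components correlated with an ocular reference, then re-mix.
    eeg: (channels, samples) array; eog_ref: (samples,) reference signal."""
    ica = FastICA(n_components=eeg.shape[0], random_state=seed)
    S = ica.fit_transform(eeg.T)                     # (samples, components)
    r = np.array([np.corrcoef(S[:, k], eog_ref)[0, 1]
                  for k in range(S.shape[1])])
    S[:, np.abs(r) > corr_thresh] = 0.0              # drop ocular components
    return ica.inverse_transform(S).T                # cleaned (channels, samples)
```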
Towards multifocal ultrasonic neural stimulation: pattern generation algorithms
NASA Astrophysics Data System (ADS)
Hertzberg, Yoni; Naor, Omer; Volovick, Alexander; Shoham, Shy
2010-10-01
Focused ultrasound (FUS) waves directed onto neural structures have been shown to dynamically modulate neural activity and excitability, opening up a range of possible systems and applications where the non-invasiveness, safety, mm-range resolution and other characteristics of FUS are advantageous. As in other neuro-stimulation and modulation modalities, the highly distributed and parallel nature of neural systems and neural information processing call for the development of appropriately patterned stimulation strategies which could simultaneously address multiple sites in flexible patterns. Here, we study the generation of sparse multi-focal ultrasonic distributions using phase-only modulation in ultrasonic phased arrays. We analyse the relative performance of an existing algorithm for generating multifocal ultrasonic distributions and new algorithms that we adapt from the field of optical digital holography, and find that generally the weighted Gerchberg-Saxton algorithm leads to overall superior efficiency and uniformity in the focal spots, without significantly increasing the computational burden. By combining phased-array FUS and magnetic-resonance thermometry we experimentally demonstrate the simultaneous generation of tightly focused multifocal distributions in a tissue phantom, a first step towards patterned FUS neuro-modulation systems and devices.
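A minimal weighted Gerchberg-Saxton iteration for a phase-only array, under the simplifying assumption that propagation from elements to focal points is given as a matrix `prop`; the re-weighting rule boosts lagging foci each pass, which is the essence of the algorithm's uniformity advantage reported above.

```python
import numpy as np

def weighted_gs(prop, target, n_iter=30):
    """Weighted Gerchberg-Saxton for a phase-only array.
    prop:   (n_foci, n_elements) complex propagation matrix
    target: (n_foci,) desired focal amplitudes"""
    phase = np.zeros(prop.shape[1])
    w = target.astype(float).copy()
    for _ in range(n_iter):
        field = prop @ np.exp(1j * phase)            # field at the foci
        w *= target / (np.abs(field) + 1e-12)        # boost lagging foci
        # back-propagate the weighted focal field; keep element phases only
        phase = np.angle(prop.conj().T @ (w * np.exp(1j * np.angle(field))))
    return phase
```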
Shape-driven 3D segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2006-01-01
This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details.
Experimental and analytical study of secondary path variations in active engine mounts
NASA Astrophysics Data System (ADS)
Hausberg, Fabian; Scheiblegger, Christian; Pfeffer, Peter; Plöchl, Manfred; Hecker, Simon; Rupp, Markus
2015-03-01
Active engine mounts (AEMs) provide an effective solution to further improve the acoustic and vibrational comfort of passenger cars. Typically, adaptive feedforward control algorithms, e.g., the filtered-x-least-mean-squares (FxLMS) algorithm, are applied to cancel disturbing engine vibrations. These algorithms require an accurate estimate of the AEM active dynamic characteristics, also known as the secondary path, in order to guarantee control performance and stability. This paper focuses on the experimental and theoretical study of secondary path variations in AEMs. The impact of three major influences, namely nonlinearity, change of preload and component temperature, on the AEM active dynamic characteristics is experimentally analyzed. The obtained test results are theoretically investigated with a linear AEM model which incorporates an appropriate description for elastomeric components. A special experimental set-up extends the model validation of the active dynamic characteristics to higher frequencies up to 400 Hz. The theoretical and experimental results show that significant secondary path variations are merely observed in the frequency range of the AEM actuator's resonance frequency. These variations mainly result from the change of the component temperature. As the stability of the algorithm is primarily affected by the actuator's resonance frequency, the findings of this paper facilitate the design of AEMs with simpler adaptive feedforward algorithms. From a practical point of view it may further be concluded that algorithmic countermeasures against instability are only necessary in the frequency range of the AEM actuator's resonance frequency.
NASA Technical Reports Server (NTRS)
Powell, Bradley W.; Burroughs, Ivan A.
1994-01-01
Through the two phases of this contract, sensors for welding applications and parameter extraction algorithms have been developed. These sensors form the foundation of a weld control system which can provide adaptive weld control through monitoring of the weld pool and keyhole in a VPPA welding process. Systems of this type offer the potential of quality enhancement and cost reduction (minimization of rework on faulty welds) for high-integrity welding applications. Sensors for preweld and postweld inspection, weld pool monitoring, keyhole/weld wire entry monitoring, and seam tracking were developed. Algorithms for signal extraction were also developed and analyzed to determine their application to an adaptive weld control system. The following sections discuss findings for each of the three sensors developed under this contract: (1) the weld profiling sensor; (2) the weld pool sensor; and (3) the stereo seam tracker/keyhole imaging sensor. Hardened versions of these sensors were designed and built under this contract. A control system, described later, was developed on a multiprocessing/multitasking operating system for maximum power and flexibility. Documentation of the sensor mechanical and electrical design is also included as appendices to this report.
Development of a Smart Release Algorithm for Mid-Air Separation of Parachute Test Articles
NASA Technical Reports Server (NTRS)
Moore, James W.
2011-01-01
The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is currently developing an autonomous method to separate a capsule-shaped parachute test vehicle from an air-drop platform for use in the test program to develop and validate the parachute system for the Orion spacecraft. The CPAS project seeks to perform air-drop tests of an Orion-like boilerplate capsule. Delivery of the boilerplate capsule to the test condition has proven to be a critical and complicated task. In the current concept, the boilerplate vehicle is extracted from an aircraft on top of a Type V pallet and then separated from the pallet in mid-air. The attitude of the vehicles at separation is critical to avoiding re-contact and successfully deploying the boilerplate into a heatshield-down orientation. Neither the pallet nor the boilerplate has an active control system. However, the attitude of the mated vehicle as a function of time is somewhat predictable. CPAS engineers have designed an avionics system to monitor the attitude of the mated vehicle as it is extracted from the aircraft and command a release when the desired conditions are met. The algorithm includes contingency capabilities designed to release the test vehicle before undesirable orientations occur. The algorithm was verified with simulation and ground testing. The pre-flight development and testing is discussed and limitations of ground testing are noted. The CPAS project performed a series of three drop tests as a proof-of-concept of the release technique. These tests helped to refine the attitude instrumentation and software algorithm to be used on future tests. The drop tests are described in detail and the evolution of the release system with each test is described.
Computationally efficient algorithm for high sampling-frequency operation of active noise control
NASA Astrophysics Data System (ADS)
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the secondary-path estimate and the ANC filter become very long, which increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms suffer from disadvantages such as large block delay, quantization error due to the computation of large transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long filters are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is reduced further by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination, yielding the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. A computational complexity analysis for different filter orders and partition sizes is presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time-domain FXLMS algorithm.
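The core mechanism the abstract relies on is partitioned frequency-domain filtering: a long FIR filter is split into equal partitions so that only small FFTs are needed and the block delay stays at one partition length rather than the full filter length. The Python sketch below shows standard uniformly partitioned overlap-save convolution, the building block of such algorithms; it is a generic illustration under assumed names, not the FPBFXLMS/RFPBFXLMS implementation from the paper.

```python
import numpy as np

def partitioned_fft_filter(x, h, B=256):
    """Uniformly partitioned overlap-save convolution (illustrative sketch).

    Splits the long filter h into partitions of length B so only size-2B
    FFTs are needed, instead of one large transform over the whole filter.
    Trailing samples beyond a whole number of input blocks are ignored here.
    """
    P = int(np.ceil(len(h) / B))
    h_pad = np.zeros(P * B)
    h_pad[:len(h)] = h
    # Precompute the spectrum of each partition, zero-padded to 2B.
    H = np.array([np.fft.rfft(h_pad[p*B:(p+1)*B], 2*B) for p in range(P)])
    X = np.zeros((P, B + 1), dtype=complex)   # frequency-domain delay line
    y = np.zeros(len(x))
    prev = np.zeros(B)
    for k in range(len(x) // B):
        blk = x[k*B:(k+1)*B]
        buf = np.concatenate([prev, blk])     # overlap-save input buffer
        X = np.roll(X, 1, axis=0)
        X[0] = np.fft.rfft(buf)
        Y = (H * X).sum(axis=0)               # sum of per-partition products
        y[k*B:(k+1)*B] = np.fft.irfft(Y)[B:]  # keep the alias-free half
        prev = blk
    return y
```

Because each input block's spectrum is reused across all partitions, the per-sample cost grows far more slowly with filter length than time-domain convolution, which is the complexity saving the FPBFXLMS family exploits.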
NASA Astrophysics Data System (ADS)
Houchin, J. S.
2014-09-01
A common problem for the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat-file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm-development mode, a limited number of test data sets are staged for the algorithm once and then run through it repeatedly as the software is developed and debugged. In calibration-analyst mode, new data sets are continually run through the algorithm, which without additional tools requires significant effort to stage each data set. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, offering efficient means to stage and process an input data set, to override static calibration-coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each static LUT file such that the correct set of LUTs required by each algorithm is automatically provided without analyst effort. Using AeroADL, the Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.
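The abstract's key workflow idea is a wrapper that stages a versioned LUT library around the unmodified operational code. The Python sketch below illustrates that pattern in the abstract; the directory layout, manifest, and command line are hypothetical placeholders, not the actual AeroADL scripts or the real ADL entry point.

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical layout: LUT_LIBRARY/<algorithm>/<lut_name>/<version>/<lut_name>
LUT_LIBRARY = Path("/data/lut_library")
ADL_RUN_DIR = Path("/data/adl_run")

def stage_luts(algorithm, versions):
    """Copy the requested version of each LUT into the run area,
    overriding the operational defaults without touching the algorithm code."""
    for lut_name, version in versions.items():
        src = LUT_LIBRARY / algorithm / lut_name / version / lut_name
        shutil.copy(src, ADL_RUN_DIR / lut_name)

def run_algorithm(algorithm, input_files, versions):
    """Stage experimental LUTs, then invoke the (placeholder) algorithm runner."""
    stage_luts(algorithm, versions)
    # "run_adl_algorithm" is a placeholder; the real ADL invocation differs.
    subprocess.run(["run_adl_algorithm", algorithm, *map(str, input_files)],
                   cwd=ADL_RUN_DIR, check=True)
```

The design point is that the analyst names only the algorithm and the desired LUT versions; the wrapper resolves and stages the correct files, which is the manual effort the abstract says AeroADL removes.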
Global rainfall monitoring by SSM/I
NASA Technical Reports Server (NTRS)
Barrett, Eric C.; Kidd, C.; Kniveton, D.
1993-01-01
Significant accomplishments in the last year of research are presented. During 1991, three main activities were undertaken: (1) development and testing of a preliminary global rainfall algorithm; (2) research into areas of strong surface scattering; and (3) formulation of a program of work for the WetNet PrecipWG. The focus of present research and plans for next year are briefly discussed.