Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem
NASA Astrophysics Data System (ADS)
Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang
2015-09-01
A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
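The pricing step inside the column-generation stage described above amounts to a knapsack problem: find the cutting pattern with the most negative reduced cost for a given stock length and the current dual prices. The sketch below illustrates only that pricing step for a single stock size; the item lengths, dual values, and unit stock cost are illustrative assumptions, and the residual-problem loop and multiple-stock-size logic of the PSG are not reproduced.

```python
# Sketch of the knapsack pricing step used inside column generation for
# one-dimensional cutting stock. Item lengths, duals, and the stock length
# are illustrative assumptions, not data from the paper.

def price_pattern(stock_len, item_lens, duals):
    """Unbounded knapsack: maximize the total dual value packed into one
    stock piece; returns (best_value, pattern) with pattern[i] = copies of item i."""
    best = [0.0] * (stock_len + 1)      # best dual value achievable per used length
    choice = [-1] * (stock_len + 1)     # which item was added last at that length
    for cap in range(1, stock_len + 1):
        for i, length in enumerate(item_lens):
            if length <= cap and best[cap - length] + duals[i] > best[cap]:
                best[cap] = best[cap - length] + duals[i]
                choice[cap] = i
    pattern, cap = [0] * len(item_lens), stock_len
    while cap > 0 and choice[cap] >= 0:     # walk back through the DP choices
        i = choice[cap]
        pattern[i] += 1
        cap -= item_lens[i]
    return best[stock_len], pattern

# Example: a 100-unit stock bar, three item sizes, duals taken from a master LP.
value, pattern = price_pattern(100, [45, 36, 14], [0.5, 0.4, 0.15])
# With a unit stock cost, the pattern enters the master problem if 1.0 - value < 0.
print(pattern, round(value, 2))
```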
Parenting Styles and Youth Well-Being across Immigrant Generations
ERIC Educational Resources Information Center
Driscoll, Anne K.; Russell, Stephen T.; Crockett, Lisa J.
2008-01-01
This study examines generational patterns of parenting styles, the relationships between parenting styles and adolescent well-being among youth of Mexican origin, and the role of generational parenting style patterns in explaining generational patterns in youth behavior (delinquency and alcohol problems) and psychological well-being (depression…
NASA Astrophysics Data System (ADS)
Octarina, Sisca; Radiana, Mutia; Bangun, Putra B. J.
2018-01-01
The two-dimensional cutting stock problem (CSP) is the problem of determining cutting patterns for a set of stock sheets of standard length and width to fulfill the demand for items. Cutting patterns are determined so as to minimize stock usage. This research implemented a pattern generation algorithm to formulate the Gilmore and Gomory model of the two-dimensional CSP. The constraints of the Gilmore and Gomory model ensure that the strips cut in the first stage are used in the second stage. The branch-and-cut method was used to obtain the optimal solution. The results show that many pattern combinations arise when the optimal cutting patterns of the first stage are combined with those of the second stage.
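As a hedged illustration of the two-stage structure referred to above (first-stage cuts produce horizontal strips, second-stage cuts place items within each strip), the sketch below enumerates feasible strip patterns and strip stacks for assumed sheet and item dimensions; the Gilmore and Gomory formulation and the branch-and-cut optimization over these patterns are not shown.

```python
# Hedged sketch of the two-stage cutting structure: stage 1 cuts the sheet
# into horizontal strips, stage 2 cuts items from each strip. Sheet and item
# dimensions are illustrative assumptions.
from itertools import product

SHEET_W, SHEET_H = 100, 80
items = [(30, 20), (25, 40), (10, 20)]   # (width, height) of each item type

# Stage 2: for a strip of a given height, which item counts fit across the width?
def strip_patterns(strip_h, max_each=4):
    usable = [(w, h) for (w, h) in items if h <= strip_h]
    for counts in product(range(max_each + 1), repeat=len(usable)):
        width = sum(c * w for c, (w, _) in zip(counts, usable))
        if 0 < width <= SHEET_W:
            yield strip_h, counts

# Stage 1: stack strips (one height per distinct item height here) within the sheet height.
strip_heights = sorted({h for _, h in items})
feasible_stacks = [hs for r in range(1, 4)
                   for hs in product(strip_heights, repeat=r)
                   if sum(hs) <= SHEET_H]

print(len(feasible_stacks), "strip stacks;",
      sum(1 for h in strip_heights for _ in strip_patterns(h)), "strip patterns")
```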
Linking Substance Use and Problem Behavior across Three Generations
ERIC Educational Resources Information Center
Bailey, Jennifer A.; Hill, Karl G.; Oesterle, Sabrina; Hawkins, J. David
2006-01-01
This study examined patterns of between-generation continuity in substance use from generation 1 (G1) parents to generation 2 (G2) adolescents and from G2 adult substance use and G1 substance use to generation 3 (G3) problem behavior in childhood. Structural equation modeling of prospective, longitudinal data from 808 participants, their parents,…
Diffractive elements for generating microscale laser beam patterns: a Y2K problem
NASA Astrophysics Data System (ADS)
Teiwes, Stephan; Krueger, Sven; Wernicke, Guenther K.; Ferstl, Margit
2000-03-01
Lasers are widely used in industrial fabrication for engraving, cutting and many other purposes. However, material processing at very small scales remains a challenge. Advances in diffractive optics could provide laser systems for engraving or cutting micro-scale patterns at high speeds. In our paper we focus on the design of diffractive elements for this special application. It is a common desire in material processing to apply 'discrete' as well as 'continuous' beam patterns. The latter case in particular is difficult to handle, as typical micro-scale patterns have poor band-limitation properties and as speckles can easily occur in the beam patterns. It is shown in this paper that a standard iterative design method usually fails to obtain diffractive elements that generate diffraction patterns of acceptable quality. Insights gained from an analysis of the design problems are used to optimize the iterative design method. We demonstrate the applicability and success of our approach by designing diffractive phase elements that generate a discrete and a continuous 'Y2K' pattern.
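The 'standard iterative design method' mentioned above is typically an iterative Fourier-transform (Gerchberg-Saxton type) algorithm that alternates between the element plane and the far-field plane. The following is a minimal sketch of that baseline under an assumed target pattern and iteration count; the paper's specific modifications for poorly band-limited, speckle-prone patterns are not included.

```python
# Minimal Gerchberg-Saxton-style sketch for designing a phase-only diffractive
# element that reproduces a target intensity pattern in the far field.
# The target and iteration count are illustrative assumptions.
import numpy as np

N = 256
target = np.zeros((N, N))
target[96:160, 96:160] = 1.0                 # stand-in for a "Y2K"-like pattern
target_amp = np.sqrt(target / target.sum())

phase = 2 * np.pi * np.random.rand(N, N)     # random initial element phase
for _ in range(100):
    far = np.fft.fft2(np.exp(1j * phase))            # element plane -> far field
    far = target_amp * np.exp(1j * np.angle(far))    # impose the target amplitude
    near = np.fft.ifft2(far)                         # back to the element plane
    phase = np.angle(near)                           # keep phase only (phase element)

reconstruction = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
print("correlation with target:",
      np.corrcoef(reconstruction.ravel(), target.ravel())[0, 1])
```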
Neuromorphic walking gait control.
Still, Susanne; Hepp, Klaus; Douglas, Rodney J
2006-03-01
We present a neuromorphic pattern generator for controlling the walking gaits of four-legged robots which is inspired by central pattern generators found in the nervous system and which is implemented as a very large scale integrated (VLSI) chip. The chip contains oscillator circuits that mimic the output of motor neurons in a strongly simplified way. We show that four coupled oscillators can produce rhythmic patterns with phase relationships that are appropriate to generate all four-legged animal walking gaits. These phase relationships together with frequency and duty cycle of the oscillators determine the walking behavior of a robot driven by the chip, and they depend on a small set of stationary bias voltages. We give analytic expressions for these dependencies. This chip reduces the complex, dynamic inter-leg control problem associated with walking gait generation to the problem of setting a few stationary parameters. It provides a compact and low power solution for walking gait control in robots.
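As a software analogue of the oscillator network described above (not the VLSI circuit itself), four coupled phase oscillators can reproduce the gait-defining phase relationships; the gait phase offsets, coupling scheme, and gains below are illustrative assumptions.

```python
# Hedged software sketch of a four-oscillator central pattern generator:
# each leg is a phase oscillator pulled toward a prescribed phase offset
# relative to leg 0. Gait offsets and gains are illustrative assumptions.
import numpy as np

GAITS = {"walk":  [0.0, 0.5, 0.25, 0.75],    # desired phase offsets (fraction of cycle)
         "trot":  [0.0, 0.5, 0.5, 0.0],
         "bound": [0.0, 0.0, 0.5, 0.5]}

def simulate(gait, freq=1.0, gain=5.0, dt=0.005, steps=4000):
    offsets = 2 * np.pi * np.array(GAITS[gait])
    phase = 2 * np.pi * np.random.rand(4)
    for _ in range(steps):
        # Each oscillator advances at the common frequency and is coupled to
        # oscillator 0 so that the gait's phase offsets are maintained.
        dphi = 2 * np.pi * freq + gain * np.sin(phase[0] + offsets - phase)
        dphi[0] = 2 * np.pi * freq
        phase = phase + dt * dphi
    return np.mod(phase - phase[0], 2 * np.pi) / (2 * np.pi)

print("trot offsets after settling:", np.round(simulate("trot"), 2))
```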
SPMBR: a scalable algorithm for mining sequential patterns based on bitmaps
NASA Astrophysics Data System (ADS)
Xu, Xiwei; Zhang, Changhai
2013-12-01
Many sequential pattern mining algorithms generate too many candidate sequences, which increases the processing cost of support counting. We therefore present an effective and scalable algorithm called SPMBR (Sequential Patterns Mining based on Bitmap Representation) to mine sequential patterns in large databases. Our method differs from previous work on mining sequential patterns mainly in that the sequence database is represented by bitmaps, for which a simplified bitmap structure is first presented. The algorithm generates candidate sequences by sequence extension (SE) and item extension (IE), and then obtains all frequent sequences by comparing the original bitmap with the extended-item bitmap. This approach simplifies the problem of mining sequential patterns and avoids the high processing cost of support counting. Both analysis and experiments indicate that SPMBR performs well on large transaction databases, that the memory required for storing temporary data during mining is much smaller, and that all sequential patterns can be mined feasibly.
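The key idea above, support counting by bitwise operations, can be sketched in a simplified SPAM-style form with one bit per (sequence, position) and Python integers serving as bitmaps; the toy data and the reduction of SPMBR's full SE/IE bookkeeping are assumptions.

```python
# Hedged sketch of bitmap-based support counting for sequence extension,
# in the spirit of bitmap sequential-pattern miners. Each item's bitmap has
# one bit per (sequence, position); the toy data below are an assumption.
SEQS = [["a", "b", "a", "c"],      # sequence 0 (one item per position, for simplicity)
        ["b", "a", "c", "c"],      # sequence 1
        ["a", "c", "b", "a"]]      # sequence 2
MAXLEN = max(len(s) for s in SEQS)

def item_bitmap(item):
    bits = 0
    for sid, seq in enumerate(SEQS):
        for pos, it in enumerate(seq):
            if it == item:
                bits |= 1 << (sid * MAXLEN + pos)
    return bits

def s_step(prefix_bits):
    """Transform a prefix bitmap so that, in each sequence, every position
    strictly after the prefix's first occurrence is set (SPAM-style S-step)."""
    out = 0
    for sid in range(len(SEQS)):
        chunk = (prefix_bits >> (sid * MAXLEN)) & ((1 << MAXLEN) - 1)
        if chunk:
            first = (chunk & -chunk).bit_length() - 1          # index of first set bit
            later = (((1 << MAXLEN) - 1) >> (first + 1)) << (first + 1)
            out |= later << (sid * MAXLEN)
    return out

def support(bits):
    return sum(1 for sid in range(len(SEQS))
               if (bits >> (sid * MAXLEN)) & ((1 << MAXLEN) - 1))

# Support of the 2-sequence <a, c>: AND the S-stepped bitmap of "a" with "c".
ac_bits = s_step(item_bitmap("a")) & item_bitmap("c")
print("support(<a,c>) =", support(ac_bits))     # counts sequences, not occurrences
```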
Generation of shape complexity through tissue conflict resolution
Rebocho, Alexandra B; Southam, Paul; Kennaway, J Richard; Coen, Enrico
2017-01-01
Out-of-plane tissue deformations are key morphogenetic events during plant and animal development that generate 3D shapes, such as flowers or limbs. However, the mechanisms by which spatiotemporal patterns of gene expression modify cellular behaviours to generate such deformations remain to be established. We use the Snapdragon flower as a model system to address this problem. Combining cellular analysis with tissue-level modelling, we show that an orthogonal pattern of growth orientations plays a key role in generating out-of-plane deformations. This growth pattern is most likely oriented by a polarity field, highlighted by PIN1 protein localisation, and is modulated by dorsoventral gene activity. The orthogonal growth pattern interacts with other patterns of differential growth to create tissue conflicts that shape the flower. Similar shape changes can be generated by contraction as well as growth, suggesting tissue conflict resolution provides a flexible morphogenetic mechanism for generating shape diversity in plants and animals. DOI: http://dx.doi.org/10.7554/eLife.20156.001 PMID:28166865
PatternCoder: A Programming Support Tool for Learning Binary Class Associations and Design Patterns
ERIC Educational Resources Information Center
Paterson, J. H.; Cheng, K. F.; Haddow, J.
2009-01-01
PatternCoder is a software tool to aid student understanding of class associations. It has a wizard-based interface which allows students to select an appropriate binary class association or design pattern for a given problem. Java code is then generated which allows students to explore the way in which the class associations are implemented in a…
Frank, Steven A.
2010-01-01
We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern, and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. PMID:19538344
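The constraint-to-pattern correspondences described above can be restated compactly in standard maximum-entropy form; the notation below is a generic restatement, not the paper's own:

```latex
\max_{p}\; H[p] = -\int p(y)\,\ln p(y)\,dy
\quad \text{s.t.} \quad \int p(y)\,dy = 1,\;\; \langle f_i(y)\rangle = \bar{f}_i
\quad\Longrightarrow\quad p(y) \propto \exp\!\Big(-\sum_i \lambda_i f_i(y)\Big),
```

so that fixing the mean ($f_1 = y$, with $y \ge 0$) gives the exponential $p(y) \propto e^{-\lambda y}$, fixing the mean and variance ($f_1 = y$, $f_2 = y^2$) gives the Gaussian $p(y) \propto e^{-\lambda_1 y - \lambda_2 y^2}$, and fixing the geometric mean ($f_1 = \ln y$) gives the power law $p(y) \propto y^{-\lambda}$.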
Patterns of Generative Discourse in Online Discussions during the Field Experience
ERIC Educational Resources Information Center
Lafferty, Karen Elizabeth; Kopcha, Theodore J.
2016-01-01
This study examined how online discussion of the classroom challenges that preservice teachers face during the field experience can lead to problem solving and knowledge generation. Drawing upon Horn and Little's (2010) descriptions of generative discourse, the study examined how a community of preservice teachers, their university supervisors,…
Traction patterns of tumor cells.
Ambrosi, D; Duperray, A; Peschetola, V; Verdier, C
2009-01-01
The traction exerted by a cell on a planar deformable substrate can be indirectly obtained on the basis of the displacement field of the underlying layer. The usual methodology for addressing this inverse problem is based on the exploitation of the Green tensor of the linear elasticity problem in a half space (Boussinesq problem), coupled with a minimization algorithm under force penalization. A possible alternative strategy is to exploit an adjoint equation, obtained on the basis of a suitable minimization requirement. The resulting system of coupled elliptic partial differential equations is applied here to determine the force field per unit surface generated by T24 tumor cells on a polyacrylamide substrate. The shear stress obtained by numerical integration provides quantitative insight into the traction field and is a promising tool to investigate the spatial pattern of force per unit surface generated during cell motion, particularly in the case of such cancer cells.
Automatic Hidden-Web Table Interpretation by Sibling Page Comparison
NASA Astrophysics Data System (ADS)
Tao, Cui; Embley, David W.
The longstanding problem of automatic table interpretation still eludes us. Its solution would not only be an aid to table processing applications such as large volume table conversion, but would also be an aid in solving related problems such as information extraction and semi-structured data management. In this paper, we offer a conceptual modeling solution for the common special case in which so-called sibling pages are available. The sibling pages we consider are pages on the hidden web, commonly generated from underlying databases. We compare them to identify and connect nonvarying components (category labels) and varying components (data values). We tested our solution using more than 2,000 tables in source pages from three different domains—car advertisements, molecular biology, and geopolitical information. Experimental results show that the system can successfully identify sibling tables, generate structure patterns, interpret tables using the generated patterns, and automatically adjust the structure patterns, if necessary, as it processes a sequence of hidden-web pages. For these activities, the system was able to achieve an overall F-measure of 94.5%.
The Campus Community and the Futureless Generation
ERIC Educational Resources Information Center
Shoben, Edward J.
1969-01-01
Discusses social problems and concerns which cause student alienation and activism. Stresses that task of education is to reformulate the problems and seek solutions through increased respect for personal differences and pattern of social arrangements that allows widest expression of varied constructive potentials of man. (Author/NS)
Saeid, Nazemi; Roudbari, Aliakbar; Yaghmaeian, Kamyar
2014-01-14
The aim of the study was to design and implement an integrated solid waste management pattern in the Shahroud industrial zone, evaluate the results and determine possible performance problems. This cross-sectional study was carried out over 4 years in the Shahroud industrial zone, and the implementation process included: 1- qualitative and quantitative analysis of all solid waste generated in the city; 2- determination of the current state of solid waste management in the zone and identification of the programs conducted; 3- design and implementation of an integrated solid waste management pattern, including the design and implementation of training programs, laws, penalties and incentives, and the explanation and implementation of programs for all factories; and 4- monitoring of the implementation process and determination of the results. Annually, 1,728 tons of solid waste were generated in the town, including 1,603 tons of industrial waste and 125 tons of municipal waste. By implementing this pattern, two separate systems for the collection and recycling of domestic and industrial waste were launched in this zone. Also, consistent with the goals, the amounts of solid waste generated and disposed of in 2009 were 51.5 and 28.6 kg per 100 million Rials of production, respectively. Results showed that implementation of a pattern of separated collection, training programs, capacity building, provision of technical services, completion of the chain of industries and strengthening of cooperation between the industrial estate management and industrial units could greatly reduce waste management problems.
NASA Astrophysics Data System (ADS)
Lutich, Andrey
2017-07-01
This research considers the problem of generating compact vector representations of physical design patterns for analytics purposes in the semiconductor patterning domain. PatterNet uses a deep artificial neural network to learn a mapping of physical design patterns to a compact Euclidean hyperspace. Distances among mapped patterns in this space correspond to dissimilarities among patterns defined at the time of network training. Once the mapping network has been trained, PatterNet embeddings can be used as feature vectors with standard machine learning algorithms, and pattern search, comparison, and clustering become trivial problems. PatterNet is inspired by concepts developed within the framework of generative adversarial networks as well as by FaceNet. Our method enables a deep neural network (DNN) to learn the compact representation directly by supplying it with pairs of design patterns and a user-defined dissimilarity between these patterns. In the simplest case, the dissimilarity is the area of the XOR of the two patterns. It is important to realize that our PatterNet approach is very different from the methods developed for deep learning on image data. In contrast to "conventional" pictures, the patterns in the CAD world are lists of polygon vertex coordinates. The method relies solely on the promise of deep learning to discover the internal structure of the incoming data and learn its hierarchical representations. The artificial intelligence arising from the combination of PatterNet and clustering analysis very precisely follows the intuition of patterning/optical proximity correction experts, paving the way toward human-like and human-friendly engineering tools.
A Temporal Pattern Mining Approach for Classifying Electronic Health Record Data
Batal, Iyad; Valizadegan, Hamed; Cooper, Gregory F.; Hauskrecht, Milos
2013-01-01
We study the problem of learning classification models from complex multivariate temporal data encountered in electronic health record systems. The challenge is to define a good set of features that are able to represent well the temporal aspect of the data. Our method relies on temporal abstractions and temporal pattern mining to extract the classification features. Temporal pattern mining usually returns a large number of temporal patterns, most of which may be irrelevant to the classification task. To address this problem, we present the Minimal Predictive Temporal Patterns framework to generate a small set of predictive and non-spurious patterns. We apply our approach to the real-world clinical task of predicting patients who are at risk of developing heparin induced thrombocytopenia. The results demonstrate the benefit of our approach in efficiently learning accurate classifiers, which is a key step for developing intelligent clinical monitoring systems. PMID:25309815
Congestion patterns of electric vehicles with limited battery capacity.
Jing, Wentao; Ramezani, Mohsen; An, Kun; Kim, Inhi
2018-01-01
The path choice behavior of battery electric vehicle (BEV) drivers is influenced by the lack of public charging stations, limited battery capacity, range anxiety and long battery charging time. This paper investigates the congestion/flow pattern captured by stochastic user equilibrium (SUE) traffic assignment problem in transportation networks with BEVs, where the BEV paths are restricted by their battery capacities. The BEV energy consumption is assumed to be a linear function of path length and path travel time, which addresses both path distance limit problem and road congestion effect. A mathematical programming model is proposed for the path-based SUE traffic assignment where the path cost is the sum of the corresponding link costs and a path specific out-of-energy penalty. We then apply the convergent Lagrangian dual method to transform the original problem into a concave maximization problem and develop a customized gradient projection algorithm to solve it. A column generation procedure is incorporated to generate the path set. Finally, two numerical examples are presented to demonstrate the applicability of the proposed model and the solution algorithm.
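A hedged sketch of the path-cost structure described above (link travel times plus a path-specific out-of-energy penalty, with energy consumption linear in path length and travel time) is the following; the symbols alpha, beta, and M are illustrative, and the paper's exact functional form may differ:

```latex
c_p = \sum_{a \in p} t_a(v_a) + \phi_p, \qquad
e_p = \alpha \sum_{a \in p} \ell_a + \beta \sum_{a \in p} t_a(v_a), \qquad
\phi_p = \begin{cases} 0, & e_p \le E, \\ M, & e_p > E, \end{cases}
```

where $t_a(v_a)$ is the congested travel time on link $a$, $\ell_a$ its length, $E$ the battery capacity, and $M$ a large out-of-energy penalty.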
NASA Technical Reports Server (NTRS)
Ledwith, W. A., Jr.
1972-01-01
A computer solution is developed to the exhaust gas reingestion problem for aircraft operating in the reverse thrust mode on a crosswind-free runway. The computer program determines the location of the inlet flow pattern, whether the exhaust efflux lies within the inlet flow pattern or not, and if so, the approximate time before the reversed flow reaches the engine inlet. The program is written so that the user is free to select discrete runway speeds or to study the entire aircraft deceleration process for both the far field and cross-ingestion problems. While developed with STOL applications in mind, the solution is equally applicable to conventional designs. The inlet and reversed jet flow fields involved in the problem are assumed to be noninteracting. The nacelle model used in determining the inlet flow field is generated using an iterative solution to the Neuman problem from potential flow theory while the reversed jet flow field is adapted using an empirical correlation from the literature. Sample results obtained using the program are included.
Multiclassifier information fusion methods for microarray pattern recognition
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel
2004-04-01
This paper addresses automatic recognition of microarray patterns, a capability that could have major significance for medical diagnostics, enabling development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. The input space partitioning approach based on fitness measures that constitute an a-priori gauging of classification efficacy for each subspace is investigated. Methods for generation of fitness measures, generation of input subspaces and their use in the multiclassifier fusion architecture are presented. In particular, two-level quantification of fitness that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace is described. Individual-subspace classifiers are Support Vector Machine based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final decision fusion stage techniques, including weighted fusion as well as Dempster-Shafer theory based fusion, are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular to problems involving a large number of information sources irreducible to a low-dimensional feature space.
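As a simplified illustration of the weighted fusion stage described above, the sketch below combines per-subspace classifier scores using fitness-derived weights; the subspace partitioning, SVM training, neighborhood-level fitness, and the Dempster-Shafer variant are not shown, and all numbers are illustrative assumptions.

```python
# Hedged sketch of fitness-weighted decision fusion across subspace classifiers.
# Classifier scores and fitness values are illustrative assumptions; in the paper
# each subspace classifier is an SVM and fitness is also quantified per neighborhood.
import numpy as np

# Per-subspace class scores for one test sample (rows: subspaces, cols: classes).
scores = np.array([[0.80, 0.20],
                   [0.40, 0.60],
                   [0.70, 0.30]])

# A priori fitness of each subspace (expected reliability of its classifier).
fitness = np.array([0.9, 0.3, 0.6])

weights = fitness / fitness.sum()
fused = weights @ scores                     # weighted average of class scores
print("fused class scores:", fused, "-> decision:", int(np.argmax(fused)))
```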
ERIC Educational Resources Information Center
Cantero-García, María; Alonso-Tapia, Jesús
2017-01-01
Introduction: Though the different intervention patterns that parents employ for managing children's behavior problems are known, it is not known whether and how they interact, configuring what can be considered the family climate generated by the way in which children's behavior is managed (FCBM). Method: In this study, we developed a…
Reflection Patterns Generated by Condensed-Phase Oblique Detonation Interaction with a Rigid Wall
NASA Astrophysics Data System (ADS)
Short, Mark; Chiquete, Carlos; Bdzil, John; Meyer, Chad
2017-11-01
We examine numerically the wave reflection patterns generated by a detonation in a condensed phase explosive inclined obliquely but traveling parallel to a rigid wall as a function of incident angle. The problem is motivated by the characterization of detonation-material confiner interactions. We compare the reflection patterns for two detonation models, one where the reaction zone is spatially distributed, and the other where the reaction is instantaneous (a Chapman-Jouguet detonation). For the Chapman-Jouguet model, we compare the results of the computations with an asymptotic study recently conducted by Bdzil and Short for small detonation incident angles. We show that the ability of a spatially distributed reaction energy release to turn flow streamlines has a significant impact on the nature of the observed reflection patterns. The computational approach uses a shock-fit methodology.
ERIC Educational Resources Information Center
Hirschman, Charles
1994-01-01
Examines alternative methods to measure the status of "second-generation immigrants" using 1990 Census of Population data. Research of the variations in socioeconomic adaptation by duration of American residence among immigrants who arrived as children or teenagers reveals a dominant pattern of successful adaptation with greater exposure…
NASA Astrophysics Data System (ADS)
Roverso, Davide
2003-08-01
Many-class learning is the problem of training a classifier to discriminate among a large number of target classes. Together with the problem of dealing with high-dimensional patterns (i.e. a high-dimensional input space), the many-class problem (i.e. a high-dimensional output space) is a major obstacle to be faced when scaling up classifier systems and algorithms from small pilot applications to large full-scale applications. The Autonomous Recursive Task Decomposition (ARTD) algorithm is here proposed as a solution to the problem of many-class learning. Example applications of ARTD to neural classifier training are also presented. In these examples, improvements in training time are shown to range from 4-fold to more than 30-fold in pattern classification tasks of both static and dynamic character.
Dynamical origin of complex motor patterns
NASA Astrophysics Data System (ADS)
Alonso, L. M.; Alliende, J. A.; Mindlin, G. B.
2010-11-01
Behavior emerges as the nervous system generates motor patterns that drive a peripheral biomechanical device. For several cases in the animal kingdom, it has been identified that the motor patterns used to accomplish a diversity of tasks are the different solutions of a simple, low-dimensional nonlinear dynamical system. Yet motor patterns emerge from the interaction of an enormous number of individual dynamical units. In this work, we study the dynamics of the average activity of a large set of coupled excitable units which are periodically forced. We show that low-dimensional yet non-trivial dynamics emerges. As a case study, we analyze the air sac pressure patterns used by domestic canaries during song, which consists of a succession of repetitions of different syllable types. We show that the pressure patterns used to generate different syllables can be approximated by the solutions of the investigated model. In this way, we are capable of integrating different description scales of our problem.
Use of Electronic Health Record Simulation to Understand the Accuracy of Intern Progress Notes
March, Christopher A.; Scholl, Gretchen; Dversdal, Renee K.; Richards, Matthew; Wilson, Leah M.; Mohan, Vishnu; Gold, Jeffrey A.
2016-01-01
Background With the widespread adoption of electronic health records (EHRs), there is a growing awareness of problems in EHR training for new users and subsequent problems with the quality of information present in EHR-generated progress notes. By standardizing the case, simulation allows for the discovery of EHR patterns of use as well as a modality to aid in EHR training. Objective To develop a high-fidelity EHR training exercise for internal medicine interns to understand patterns of EHR utilization in the generation of daily progress notes. Methods Three months after beginning their internship, 32 interns participated in an EHR simulation designed to assess patterns in note writing and generation. Each intern was given a simulated chart and instructed to create a daily progress note. Notes were graded for use of copy-paste, macros, and accuracy of presented data. Results A total of 31 out of 32 interns (97%) completed the exercise. There was wide variance in use of macros to populate data, with multiple macro types used for the same data category. Three-quarters of notes contained either copy-paste elements or the elimination of active medical problems from the prior days' notes. This was associated with a significant number of quality issues, including failure to recognize a lack of deep vein thrombosis prophylaxis, medications stopped on admission, and issues in prior discharge summary. Conclusions Interns displayed wide variation in the process of creating progress notes. Additional studies are being conducted to determine the impact EHR-based simulation has on standardization of note content. PMID:27168894
A random generation approach to pattern library creation for full chip lithographic simulation
NASA Astrophysics Data System (ADS)
Zou, Elain; Hong, Sid; Liu, Limei; Huang, Lucas; Yang, Legender; Kabeel, Aliaa; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe; Du, Chunshan; Hu, Xinyi; Wan, Qijian; Zhang, Recoo
2017-04-01
As technology advances, running lithographic (litho) checks for early detection of hotspots before tapeout has become essential. This process is important at all levels—from designing standard cells and small blocks to large intellectual property (IP) and full chip layouts. Litho simulation provides high accuracy for detecting printability issues due to problematic geometries, but it has the disadvantage of slow performance on large designs and blocks [1]. Foundries have found a good compromise solution for running litho simulation on full chips by filtering out potential candidate hotspot patterns using pattern matching (PM), and then performing simulation on the matched locations. The challenge has always been how to easily create a PM library of candidate patterns that provides both comprehensive coverage of litho problems and fast runtime performance. This paper presents a new strategy for generating candidate real design patterns through a random generation approach using a layout schema generator (LSG) utility. The output patterns from the LSG are simulated, and then classified by a scoring mechanism that categorizes patterns according to the severity of the hotspots, the probability of their presence in the design, and the likelihood of the pattern causing a hotspot. The scoring output helps to filter out the yield-problematic patterns that should be removed from any standard cell design, and also to identify potentially problematic patterns that must be simulated within a larger context to decide whether or not they represent an actual hotspot. This flow is demonstrated on SMIC 14nm technology, creating a candidate hotspot pattern library that can be used in full chip simulation with very high coverage and robust performance.
NASA Astrophysics Data System (ADS)
Pchelintseva, Svetlana V.; Runnova, Anastasia E.; Musatov, Vyacheslav Yu.; Hramov, Alexander E.
2017-03-01
In this paper we study the problem of recognizing the type of an observed object from the generated pattern and the registered EEG data. EEG recorded while the Necker cube is displayed characterizes the corresponding state of brain activity. As the image we use the bistable Necker cube: the subject selects the type of cube and interprets it either as a left-oriented or a right-oriented cube. To solve the recognition problem, we use artificial neural networks; to create a classifier, we consider a multilayer perceptron. We examine the structure of the artificial neural network and determine the cube recognition accuracy.
Probabilistic generation of random networks taking into account information on motifs occurrence.
Bois, Frederic Y; Gayraud, Ghislaine
2015-01-01
Because of the huge number of graphs possible even with a small number of nodes, inference on network structure is known to be a challenging problem. Generating large random directed graphs with prescribed probabilities of occurrences of some meaningful patterns (motifs) is also difficult. We show how to generate such random graphs according to a formal probabilistic representation, using fast Markov chain Monte Carlo methods to sample them. As an illustration, we generate realistic graphs with several hundred nodes mimicking a gene transcription interaction network in Escherichia coli.
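A minimal sketch of the sampling idea described above is given below: directed graphs are sampled by Metropolis-style single-edge flips under a target that up-weights a chosen motif count (feed-forward loops here). The energy form, proposal, and parameters are illustrative assumptions rather than the paper's formal probabilistic representation.

```python
# Hedged Metropolis sketch: sample random directed graphs whose stationary
# distribution up-weights a chosen motif count (feed-forward loops here).
# The target function and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 30
A = (rng.random((N, N)) < 0.05).astype(int)
np.fill_diagonal(A, 0)

def ffl_count(A):
    # Feed-forward loops i->j, j->k, i->k counted via the adjacency matrix.
    return int(np.sum((A @ A) * A))

def log_target(A, mu=0.3, density_penalty=0.05):
    return mu * ffl_count(A) - density_penalty * A.sum()

cur = log_target(A)
for _ in range(20000):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    A[i, j] ^= 1                        # propose flipping one directed edge
    new = log_target(A)
    if np.log(rng.random()) >= new - cur:
        A[i, j] ^= 1                    # reject: undo the flip
    else:
        cur = new

print("edges:", A.sum(), "feed-forward loops:", ffl_count(A))
```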
Automated Blazar Light Curves Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Spencer James
2017-07-27
This presentation describes a problem and methodology pertaining to automated blazar light curves. Namely, studying optical variability patterns of blazars requires the construction of light curves, and in order to generate the light curves, data must be filtered before processing to ensure quality.
Biological pattern formation: from basic mechanisms to complex structures
NASA Astrophysics Data System (ADS)
Koch, A. J.; Meinhardt, H.
1994-10-01
The reliable development of highly complex organisms is an intriguing and fascinating problem. The genetic material is, as a rule, the same in each cell of an organism. How then do cells, under the influence of their common genes, produce spatial patterns? Simple models are discussed that describe the generation of patterns out of an initially nearly homogeneous state. They are based on nonlinear interactions of at least two chemicals and on their diffusion. The concepts of local autocatalysis and of long-range inhibition play a fundamental role. Numerical simulations show that the models account for many basic biological observations such as the regeneration of a pattern after excision of tissue or the production of regular (or nearly regular) arrays of organs during (or after) completion of growth. Very complex patterns can be generated in a reproducible way by hierarchical coupling of several such elementary reactions. Applications to animal coats and to the generation of polygonally shaped patterns are provided. It is further shown how to generate a strictly periodic pattern of units that themselves exhibit a complex and polar fine structure. This is illustrated by two examples: the assembly of photoreceptor cells in the eye of Drosophila and the positioning of leaves and axillary buds in a growing shoot. In both cases, the substructures have to achieve an internal polarity under the influence of some primary pattern-forming system existing in the fly's eye or in the plant. The fact that similar models can describe essential steps in organisms as distantly related as animals and plants suggests that they reveal some universal mechanisms.
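The local-autocatalysis and long-range-inhibition mechanism described above can be sketched with a one-dimensional activator-inhibitor system of the Gierer-Meinhardt type; the parameter values and grid below are illustrative assumptions chosen so that a periodic pattern emerges from near-homogeneous initial conditions.

```python
# Hedged 1D activator-inhibitor sketch (Gierer-Meinhardt type): a slowly
# diffusing autocatalytic activator a and a fast-diffusing inhibitor h.
# Parameters are illustrative assumptions, not values from the paper.
import numpy as np

n, steps, dt, dx = 200, 20000, 0.01, 1.0
Da, Dh = 0.02, 1.0                 # long-range inhibition: Dh >> Da
rho, mu, nu = 1.0, 1.0, 1.2        # inhibitor decays faster than the activator

rng = np.random.default_rng(1)
a = 1.0 + 0.01 * rng.standard_normal(n)
h = np.ones(n)

def laplacian(u):
    return (np.roll(u, 1) + np.roll(u, -1) - 2 * u) / dx**2   # periodic boundary

for _ in range(steps):
    da = rho * a * a / h - mu * a + Da * laplacian(a)   # local autocatalysis
    dh = rho * a * a - nu * h + Dh * laplacian(h)       # long-range inhibition
    a += dt * da
    h += dt * dh

peaks = np.sum((a > np.roll(a, 1)) & (a > np.roll(a, -1)) & (a > a.mean()))
print("number of activator peaks:", peaks)
```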
Classification of time series patterns from complex dynamic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, J.C.; Rao, N.
1998-07-01
An increasing availability of high-performance computing and data storage media at decreasing cost is making possible the proliferation of large-scale numerical databases and data warehouses. Numeric warehousing enterprises on the order of hundreds of gigabytes to terabytes are a reality in many fields such as finance, retail sales, process systems monitoring, biomedical monitoring, surveillance and transportation. Large-scale databases are becoming more accessible to larger user communities through the internet, web-based applications and database connectivity. Consequently, most researchers now have access to a variety of massive datasets. This trend will probably only continue to grow over the next several years. Unfortunately, the availability of integrated tools to explore, analyze and understand the data warehoused in these archives is lagging far behind the ability to gain access to the same data. In particular, locating and identifying patterns of interest in numerical time series data is an increasingly important problem for which there are few available techniques. Temporal pattern recognition poses many interesting problems in classification, segmentation, prediction, diagnosis and anomaly detection. This research focuses on the problem of classification or characterization of numerical time series data. Highway vehicles and their drivers are examples of complex dynamic systems (CDS) which are being used by transportation agencies for field testing to generate large-scale time series datasets. Tools for effective analysis of numerical time series in databases generated by highway vehicle systems are not yet available, or have not been adapted to the target problem domain. However, analysis tools from similar domains may be adapted to the problem of classification of numerical time series data.
Van Geel, Mitch; Vedder, Paul
2010-10-01
This article addresses the possible existence of an immigrant paradox in a sample of immigrant adolescents attending vocational schools in the Netherlands. An immigrant paradox is the finding that first generation immigrants show a more positive pattern of adaptation than nationals despite poorer economic conditions, whereas second generation immigrants regress to the nationals in terms of adaptation. A sample of 152 first generation immigrant adolescents, 285 second generation immigrant adolescents and 406 national adolescents completed self-reports about socio-economic status, psychological problems, behavioral problems and self-esteem. The results supported the existence of an immigrant paradox in this sample. This indicates that further assimilation among immigrant adolescents does not necessarily lead to increased well-being.
NASA Astrophysics Data System (ADS)
Pastorello, G.; Agarwal, D.; Poindexter, C.; Papale, D.; Trotta, C.; Ribeca, A.; Canfora, E.; Faybishenko, B.; Gunter, D.; Chu, H.
2015-12-01
The flux-measuring sites that are part of AmeriFlux are operated and maintained in a fairly independent fashion, both in terms of scientific goals and operational practices. This is also the case for most sites from other networks in FLUXNET. This independence leads to a degree of heterogeneity in the data sets collected at the sites, which is also reflected in data quality levels. The generation of derived data products and data synthesis efforts, two of the main goals of these networks, are directly affected by the heterogeneity in data quality. In a collaborative effort between AmeriFlux and ICOS, a series of quality checks are being conducted for the data sets before any network-level data processing and product generation take place. From these checks, a set of common data issues was identified, and these are being cataloged and classified into data quality patterns. These patterns are now being used as a basis for implementing automation for certain data quality checks, speeding up the process of applying the checks and evaluating the data. Currently, most data checks are performed individually in each data set, requiring visual inspection and inputs from a data curator. This manual process makes it difficult to scale the quality checks, creating a bottleneck for the data processing. One goal of the automated checks is to free up data curators' time so they can focus on new or less common issues. As new issues are identified, they can also be cataloged and classified, extending the coverage of existing patterns or potentially generating new patterns, helping both improve existing automated checks and create new ones. This approach is helping make data quality evaluation faster, more systematic, and reproducible. Furthermore, these patterns are also helping with documenting common causes and solutions for data problems. This can help tower teams with diagnosing problems in data collection and processing, and also in correcting historical data sets. In this presentation, using AmeriFlux fluxes and micrometeorological data, we discuss our approach to creating observational data patterns, and how we are using them to implement new automated checks. We also detail examples of these observational data patterns, illustrating how they are being used.
Adapted random sampling patterns for accelerated MRI.
Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf
2011-02-01
Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions like polynomials of different order to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out and yields superior results to non-optimized model parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
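A hedged sketch of the core idea above, turning the power spectrum of a reference data set into a probability density from which phase-encoding lines are drawn, is shown below; the reference image, acceleration factor, and normalization details are illustrative assumptions.

```python
# Hedged sketch: derive a variable-density k-space sampling pattern from the
# power spectrum of a reference image. Reference data, acceleration factor,
# and normalization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, accel = 256, 4                              # matrix size, acceleration factor

# Stand-in reference image (in practice: an existing, fully sampled data set).
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
reference = (x**2 + y**2 < 0.6).astype(float)

# Power spectrum of the reference, averaged along the frequency-encoding axis,
# gives a 1D density over phase-encoding lines.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(reference))) ** 2
density = spectrum.mean(axis=1)
density /= density.sum()

# Draw phase-encoding lines (without replacement) according to that density.
n_lines = N // accel
lines = rng.choice(N, size=n_lines, replace=False, p=density)
mask = np.zeros(N, dtype=bool)
mask[lines] = True
print(f"sampling {mask.sum()} of {N} lines; center line sampled: {mask[N // 2]}")
```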
A SEASAT-A synthetic aperture imaging radar system
NASA Technical Reports Server (NTRS)
Jordan, R. L.; Rodgers, D. H.
1975-01-01
The SEASAT synthetic aperture imaging radar system is the first radar system of its kind designed for the study of ocean wave patterns from orbit. The basic requirement of this system is to generate continuous radar imagery over a 100 km swath with 25 m resolution from an orbital altitude of 800 km. These requirements impose unique system design problems. The end-to-end data system is described, including interactions of the spacecraft, antenna, sensor, telemetry link, and data processor. The synthetic aperture radar system generates a large quantity of data, requiring the use of an analog link with stable local oscillator encoding. The problems associated with telemetering the radar information with sufficient fidelity to synthesize an image on the ground are described, as well as the selected solutions to these problems.
Application of artificial neural networks to identify equilibration in computer simulations
NASA Astrophysics Data System (ADS)
Leibowitz, Mitchell H.; Miller, Evan D.; Henry, Michael M.; Jankowski, Eric
2017-11-01
Determining which microstates generated by a thermodynamic simulation are representative of the ensemble for which sampling is desired is a ubiquitous, underspecified problem. Artificial neural networks are one type of machine learning algorithm that can provide a reproducible way to apply pattern recognition heuristics to underspecified problems. Here we use the open-source TensorFlow machine learning library and apply it to the problem of identifying which hypothetical observation sequences from a computer simulation are “equilibrated” and which are not. We generate training populations and test populations of observation sequences with embedded linear and exponential correlations. We train a two-neuron artificial network to distinguish the correlated and uncorrelated sequences. We find that this simple network is good enough for > 98% accuracy in identifying exponentially-decaying energy trajectories from molecular simulations.
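A compact sketch in the spirit of the setup described above is given below using the Keras API: synthetic observation sequences with and without an embedded exponentially decaying drift are classified by a network with a two-neuron hidden layer. The data generation and all hyperparameters are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: label synthetic observation sequences as "equilibrated"
# (uncorrelated noise) or "not equilibrated" (noise plus a decaying drift),
# and train a tiny network with two hidden neurons to separate them.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
T, n_per_class = 100, 2000
t = np.arange(T)

equilibrated = rng.standard_normal((n_per_class, T))
drifting = 2.0 * np.exp(-t / 30.0) + rng.standard_normal((n_per_class, T))
X = np.vstack([equilibrated, drifting]).astype("float32")
y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)]).astype("float32")
perm = rng.permutation(len(y))                 # shuffle before the validation split
X, y = X[perm], y[perm]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(T,)),
    tf.keras.layers.Dense(2, activation="tanh"),      # the two hidden neurons
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(X, y, epochs=20, batch_size=64, validation_split=0.2, verbose=0)
print("validation accuracy:", round(history.history["val_accuracy"][-1], 3))
```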
Membership generation using multilayer neural network
NASA Technical Reports Server (NTRS)
Kim, Jaeseok
1992-01-01
There has been intensive research in neural network applications to pattern recognition problems. Particularly, the back-propagation network has attracted many researchers because of its outstanding performance in pattern recognition applications. In this section, we describe a new method to generate membership functions from training data using a multilayer neural network. The basic idea behind the approach is as follows. The output values of a sigmoid activation function of a neuron bear remarkable resemblance to membership values. Therefore, we can regard the sigmoid activation values as the membership values in fuzzy set theory. Thus, in order to generate class membership values, we first train a suitable multilayer network using a training algorithm such as the back-propagation algorithm. After the training procedure converges, the resulting network can be treated as a membership generation network, where the inputs are feature values and the outputs are membership values in the different classes. This method allows fairly complex membership functions to be generated because the network is highly nonlinear in general. Also, it is to be noted that the membership functions are generated from a classification point of view. For pattern recognition applications, this is highly desirable, although the membership values may not be indicative of the degree of typicality of a feature value in a particular class.
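A simplified sketch of the idea described above, training a multilayer network and reading its output activations as class membership degrees, is given below using scikit-learn's MLPClassifier in place of the report's back-propagation network; treating the predicted class probabilities as membership values is a simplification of reading the raw sigmoid outputs, and the two-class Gaussian toy data are an assumption.

```python
# Hedged sketch: train a small multilayer network on labeled feature data and
# interpret its output activations as fuzzy class memberships. Using
# MLPClassifier's class probabilities is a simplification of reading raw
# sigmoid outputs; the two-class Gaussian toy data are an assumption.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),      # class 0 features
               rng.normal(2.5, 1.0, (200, 2))])     # class 1 features
y = np.repeat([0, 1], 200)

net = MLPClassifier(hidden_layer_sizes=(8,), activation="logistic",
                    max_iter=2000, random_state=0)
net.fit(X, y)

# Membership values for new feature vectors: values near 0.5 indicate
# ambiguous points lying between the classes.
queries = np.array([[0.0, 0.0], [1.25, 1.25], [2.5, 2.5]])
for q, m in zip(queries, net.predict_proba(queries)):
    print(f"features {q} -> membership in classes 0/1: {np.round(m, 2)}")
```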
Assessing Threat Detection Scenarios through Hypothesis Generation and Testing
2015-12-01
[Front-matter residue: list-of-figures entries for rankings of priority threats identified in the Dog Day scenario.] Decision making in uncertain environments relies heavily on pattern matching. Cohen, Freeman, and Wolf (1996) reported that features of the decision problem…
Adekanmbi, Oluwole; Olugbara, Oludayo; Adeyemo, Josiah
2014-01-01
This paper presents annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms, two representatives of the state of the art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem.
Papargyropoulou, Effie; Wright, Nigel; Lozano, Rodrigo; Steinberger, Julia; Padfield, Rory; Ujang, Zaini
2016-03-01
Food waste has significant detrimental economic, environmental and social impacts. The magnitude and complexity of the global food waste problem have brought it to the forefront of the environmental agenda; however, there has been little research on the patterns and drivers of food waste generation, especially outside the household. This is partially due to weaknesses in the methodological approaches used to understand such a complex problem. This paper proposes a novel conceptual framework to identify and explain the patterns and drivers of food waste generation in the hospitality sector, with the aim of identifying food waste prevention measures. This conceptual framework integrates data collection and analysis methods from ethnography and grounded theory, complemented with concepts and tools from industrial ecology for the analysis of quantitative data. A case study of food waste generation at a hotel restaurant in Malaysia is used as an example to illustrate how this conceptual framework can be applied. The conceptual framework links the biophysical and economic flows of food provisioning and waste generation with the social and cultural practices associated with food preparation and consumption. The case study demonstrates that food waste is intrinsically linked to the way we provision and consume food, and to the material and socio-cultural context of food consumption and food waste generation. Food provisioning, food consumption and food waste generation should be studied together in order to fully understand how, where and, most importantly, why food waste is generated. This understanding will then make it possible to draw up detailed, case-specific food waste prevention plans addressing the material and socio-economic aspects of food waste generation.
Diamond thin film temperature and heat-flux sensors
NASA Technical Reports Server (NTRS)
Aslam, M.; Yang, G. S.; Masood, A.; Fredricks, R.
1995-01-01
Diamond film temperature and heat-flux sensors are developed using a technology compatible with silicon integrated circuit processing. The technology involves diamond nucleation, patterning, doping, and metallization. Multi-sensor test chips were designed and fabricated to study the thermistor behavior. The minimum feature size (device width) for 1st and 2nd generation chips are 160 and 5 micron, respectively. The p-type diamond thermistors on the 1st generation test chip show temperature and response time ranges of 80-1270 K and 0.29-25 microseconds, respectively. An array of diamond thermistors, acting as heat flux sensors, was successfully fabricated on an oxidized Si rod with a diameter of 1 cm. Some problems were encountered in the patterning of the Pt/Ti ohmic contacts on the rod, due mainly to the surface roughness of the diamond film. The use of thermistors with a minimum width of 5 micron (to improve the spatial resolution of measurement) resulted in lithographic problems related to surface roughness of diamond films. We improved the mean surface roughness from 124 nm to 30 nm by using an ultra high nucleation density of 10(exp 11)/sq cm. To deposit thermistors with such small dimensions on a curved surface, a new 3-D diamond patterning technique is currently under development. This involves writing a diamond seed pattern directly on the curved surface by a computer-controlled nozzle.
NASA Astrophysics Data System (ADS)
Tang, Jiafu; Liu, Yang; Fung, Richard; Luo, Xinggang
2008-12-01
Manufacturers have a legal accountability to deal with industrial waste generated from their production processes in order to avoid pollution. Along with advances in waste recovery techniques, manufacturers may adopt various recycling strategies in dealing with industrial waste. With reuse strategies and technologies, byproducts or wastes will be returned to production processes in the iron and steel industry, and some waste can be recycled back to base material for reuse in other industries. This article focuses on a recovery strategies optimization problem for a typical class of industrial waste recycling process in order to maximize profit. There are multiple strategies for waste recycling available to generate multiple byproducts; these byproducts are then further transformed into several types of chemical products via different production patterns. A mixed integer programming model is developed to determine which recycling strategy and which production pattern should be selected, and in what quantities the corresponding chemical products should be produced, in order to yield maximum marginal profits. The sales profits of chemical products and the set-up costs of these strategies, patterns and operation costs of production are considered. A simulated annealing (SA) based heuristic algorithm is developed to solve the problem. Finally, an experiment is designed to verify the effectiveness and feasibility of the proposed method. By comparing a single strategy to multiple strategies in an example, it is shown that the total sales profit of chemical products can be increased by around 25% through the simultaneous use of multiple strategies. This illustrates the superiority of combinatorial multiple strategies. Furthermore, the effects of the model parameters on profit are discussed to help manufacturers organize their waste recycling network.
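The article's full MIP model and SA parameters are not reproduced in the abstract; the sketch below is only a minimal simulated-annealing loop over hypothetical strategy/pattern quantities, with placeholder margins, set-up costs, and a single capacity constraint standing in for the real production constraints.

```python
import math
import random

random.seed(1)

# Hypothetical data: 3 recycling strategies x 4 production patterns (not from the article).
margin = [[12.0, 9.5, 14.0, 8.0],
          [10.0, 13.5, 7.5, 11.0],
          [9.0, 8.0, 12.5, 13.0]]      # profit per unit of chemical product
setup  = [[5.0, 4.0, 6.0, 3.5],
          [4.5, 5.5, 3.0, 4.0],
          [3.5, 3.0, 5.0, 5.5]]        # set-up cost if a (strategy, pattern) pair is used
capacity = 10.0                         # maximum total product quantity

def profit(x):
    """x[s][p] = quantity produced via strategy s and pattern p."""
    total = 0.0
    for s in range(3):
        for p in range(4):
            if x[s][p] > 0:
                total += margin[s][p] * x[s][p] - setup[s][p]
    return total

def neighbour(x):
    """Perturb one (strategy, pattern) quantity, respecting the capacity."""
    y = [row[:] for row in x]
    s, p = random.randrange(3), random.randrange(4)
    y[s][p] = max(0.0, y[s][p] + random.uniform(-1.0, 1.0))
    used = sum(map(sum, y))
    if used > capacity:                 # rescale back into the feasible region
        y = [[v * capacity / used for v in row] for row in y]
    return y

x = [[0.0] * 4 for _ in range(3)]
best, best_val = x, profit(x)
T = 5.0
while T > 1e-3:
    y = neighbour(x)
    delta = profit(y) - profit(x)
    if delta > 0 or random.random() < math.exp(delta / T):
        x = y
        if profit(x) > best_val:
            best, best_val = x, profit(x)
    T *= 0.995                          # geometric cooling schedule

print("best profit found:", round(best_val, 2))
```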
NASA Astrophysics Data System (ADS)
Mohedano, Rubén; Chaves, Julio; Hernández, Maikel
2016-04-01
In many illumination problems, the beam pattern needed and/or some geometrical constraints lead to very asymmetric design conditions. These asymmetries have been solved in the past by means of arrangements of rotationally symmetric or linear lamps aimed in different directions whose patterns overlap to provide the asymmetric prescriptions or by splitting one single lamp into several sections, each one providing a part of the pattern. The development of new design methods yielding smooth continuous free-form optical surfaces to solve these challenging design problems, combined with the proper CAD modeling tools plus the development of multiple axes diamond turn machines, give birth to a new generation of optics. These are able to offer the performance and other advanced features, such as efficiency, compactness, or aesthetical advantages, and can be manufactured at low cost by injection molding. This paper presents two examples of devices with free-form optical surfaces, a camera flash, and a car headlamp.
Generative Effects of Note-Taking during Science Lectures.
ERIC Educational Resources Information Center
Peper, Richard J.; Mayer, Richard E.
1986-01-01
In two experiments subjects were required to either take notes or not take notes while viewing a videotaped lecture on automobile engines. Results produced a pattern of interaction in which note-takers performed better on far-transfer tasks such as problem solving but worse on near-transfer tasks. (Author/LMO)
Data-driven approach for creating synthetic electronic medical records.
Buczak, Anna L; Babin, Steven; Moniz, Linda
2010-10-14
New algorithms for disease outbreak detection are being developed to take advantage of full electronic medical records (EMRs) that contain a wealth of patient information. However, due to privacy concerns, even anonymized EMRs cannot be shared among researchers, resulting in great difficulty in comparing the effectiveness of these algorithms. To bridge the gap between novel bio-surveillance algorithms operating on full EMRs and the lack of non-identifiable EMR data, a method for generating complete and synthetic EMRs was developed. This paper describes a novel methodology for generating complete synthetic EMRs both for an outbreak illness of interest (tularemia) and for background records. The method developed has three major steps: 1) synthetic patient identity and basic information generation; 2) identification of care patterns that the synthetic patients would receive based on the information present in real EMR data for similar health problems; 3) adaptation of these care patterns to the synthetic patient population. We generated EMRs, including visit records, clinical activity, laboratory orders/results and radiology orders/results for 203 synthetic tularemia outbreak patients. Validation of the records by a medical expert revealed problems in 19% of the records; these were subsequently corrected. We also generated background EMRs for over 3000 patients in the 4-11 yr age group. Validation of those records by a medical expert revealed problems in fewer than 3% of these background patient EMRs and the errors were subsequently rectified. A data-driven method was developed for generating fully synthetic EMRs. The method is general and can be applied to any data set that has similar data elements (such as laboratory and radiology orders and results, clinical activity, prescription orders). The pilot synthetic outbreak records were for tularemia but our approach may be adapted to other infectious diseases. The pilot synthetic background records were in the 4-11 year old age group. The adaptations that must be made to the algorithms to produce synthetic background EMRs for other age groups are indicated.
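As a rough illustration of the three-step structure described above (synthetic identity generation, mined care patterns, adaptation to the synthetic population), the sketch below builds toy records in Python. The names, care-pattern events, and dates are invented placeholders, not derived from real EMR data or from the authors' tularemia templates.

```python
import random
from datetime import date, timedelta

random.seed(7)

FIRST = ["Ana", "Ben", "Chen", "Dara"]
LAST = ["Lopez", "Kim", "Osei", "Novak"]

# Step 2 input (illustrative): care patterns mined from real EMRs for a similar illness,
# expressed as (day offset, event) pairs. These are placeholders, not real outbreak data.
CARE_PATTERNS = [
    [(0, "clinic visit: fever, ulcer"), (0, "lab order: blood culture"), (2, "rx: streptomycin")],
    [(0, "ED visit: pneumonia symptoms"), (0, "radiology: chest x-ray"), (1, "lab: serology")],
]

def synthetic_patient(pid):
    """Step 1: synthetic identity and basic information."""
    birth = date(1975, 1, 1) + timedelta(days=random.randrange(0, 365 * 40))
    return {"id": pid, "name": f"{random.choice(FIRST)} {random.choice(LAST)}",
            "dob": birth.isoformat(), "sex": random.choice("MF")}

def synthetic_record(pid, onset):
    """Steps 2-3: pick a mined care pattern and adapt it to the synthetic patient."""
    patient = synthetic_patient(pid)
    pattern = random.choice(CARE_PATTERNS)
    events = [{"date": (onset + timedelta(days=d)).isoformat(), "event": e} for d, e in pattern]
    return {"patient": patient, "events": events}

outbreak_start = date(2009, 6, 1)
records = [synthetic_record(i, outbreak_start + timedelta(days=random.randrange(14)))
           for i in range(5)]
print(records[0])
```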
Demand side management in recycling and electricity retail pricing
NASA Astrophysics Data System (ADS)
Kazan, Osman
This dissertation addresses several problems from the recycling industry and the electricity retail market. The first paper addresses a real-life scheduling problem faced by a national industrial recycling company. Based on their practices, a scheduling problem is defined, modeled, analyzed, and a solution is approximated efficiently. The recommended application is tested on real-life data and randomly generated data. The scheduling improvements and the financial benefits are presented. The second problem is from the electricity retail market. There are well-known hourly patterns in daily electricity usage. These patterns change in shape and magnitude by season and day of the week. Generation costs are multiple times higher during the peak hours of the day. Yet most consumers purchase electricity at flat rates. This work explores analytic pricing tools to reduce peak load electricity demand for retailers. For that purpose, a nonlinear model that determines optimal hourly prices is established based on two major components: unit generation costs and consumers' utility. Both are analyzed and estimated empirically in the third paper. A pricing model is introduced to maximize the electric retailer's profit. As a result, a closed-form expression for the optimal price vector is obtained. Possible scenarios are evaluated for consumers' utility distribution. For the general case, we provide a numerical solution methodology to obtain the optimal pricing scheme. The models recommended are tested under various scenarios that consider consumer segmentation and multiple pricing policies. The recommended model reduces the peak load significantly in most cases. Several utility companies offer hourly pricing to their customers. They determine prices using historical data of unit electricity cost over time. In this dissertation we develop a nonlinear model that determines optimal hourly prices with parameter estimation. The last paper includes a regression analysis of the unit generation cost function obtained from Independent System Operators. A consumer experiment is established to replicate the peak load behavior. As a result, consumers' utility function is estimated and optimal retail electricity prices are computed.
Asymptotic Far Field Conditions for Unsteady Subsonic and Transonic Flows.
1983-04-01
... 3, 4, and 5). We shall use the form given by Randall. The conventional treatment of far-field conditions for subsonic flows makes use of analytical ... Perturbations in a plane flow field with a free-stream Mach number of one: Figure 2 shows the wave patterns obtained in the linearized treatment of subsonic flows ... The treatment of the three-dimensional problem is entirely analogous to that of the plane problem. At great distances the flow field generated by a body of finite ...
[Behavioural phenotypes in Prader-Willi syndrome].
Rosell-Raga, L
2003-02-01
The behavioural phenotype of Prader-Willi syndrome (PWS) is defined by a neurological profile and a characteristic pattern of behavioural disorders which include cognitive deficits, learning difficulties and behavioural problems, which increase with age, both in number and gravity. We review the behavioural phenotype of the cases of PWS in the Valencian Community, together with their peculiar behaviours, and analyse how these generate family and social problems. The description of a peculiar behaviour opens up new horizons when understanding and treating PWS, both from a pharmacological and neuropsychological perspective.
A modified priority list-based MILP method for solving large-scale unit commitment problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ke, Xinda; Lu, Ning; Wu, Di
This paper studies the typical pattern of unit commitment (UC) results in terms of generator cost and capacity. A method is then proposed that combines a modified priority list technique with mixed integer linear programming (MILP) for the UC problem. The proposed method consists of two steps. In the first step, a portion of the generators are predetermined to be online or offline within a look-ahead period (e.g., a week), based on the demand curve and the generator priority order. For the generators whose on/off status is predetermined, in the second step, the corresponding binary variables are removed from the UC MILP problem over the operational planning horizon (e.g., 24 hours). With a number of binary variables removed, the resulting problem can be solved much faster using off-the-shelf MILP solvers based on the branch-and-bound algorithm. In the modified priority list method, scale factors are designed to adjust the tradeoff between solution speed and level of optimality. It is found that the proposed method can significantly speed up the UC problem with only a minor compromise in optimality when appropriate scale factors are selected.
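A minimal sketch of the two-step idea, under several assumptions: the toy costs, capacities, and demand curve are invented, minimum up/down times and start-up costs are omitted, and the priority-list rule shown (fix on units needed for the scaled base load, fix off units beyond the scaled peak) is only one plausible reading of the modified priority list. It uses the PuLP modeling library for the second-step MILP.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Toy data (illustrative only): marginal cost ($/MWh), capacity (MW), hourly demand (MW).
cost = [20, 35, 50, 80]
cap  = [300, 200, 150, 100]
demand = [250, 400, 550, 480, 320, 260]
T, G = len(demand), len(cost)

# Step 1: modified priority list. Units are ranked by cost; units needed to cover the
# (scaled) minimum demand are fixed on, units beyond what the (scaled) peak requires
# are fixed off. The scale factor widens or narrows the band of units left free,
# trading solution speed against optimality.
order = sorted(range(G), key=lambda g: cost[g])
scale = 1.0
base, peak = min(demand) * scale, max(demand) / scale
fixed = {}
running = 0.0
for g in order:
    if running < base:
        fixed[g] = 1            # must-run to cover the base load
    elif running >= peak:
        fixed[g] = 0            # never needed even at the peak
    running += cap[g]

# Step 2: MILP over the remaining binary variables only.
prob = LpProblem("unit_commitment", LpMinimize)
u = {(g, t): (fixed[g] if g in fixed else LpVariable(f"u_{g}_{t}", cat=LpBinary))
     for g in range(G) for t in range(T)}
p = {(g, t): LpVariable(f"p_{g}_{t}", lowBound=0) for g in range(G) for t in range(T)}

prob += lpSum(cost[g] * p[g, t] for g in range(G) for t in range(T))
for t in range(T):
    prob += lpSum(p[g, t] for g in range(G)) == demand[t]
    for g in range(G):
        prob += p[g, t] <= cap[g] * u[g, t]

prob.solve()
print("total cost:", value(prob.objective))
```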
School Bullying and Victimization. NSSC Resource Paper.
ERIC Educational Resources Information Center
Greenbaum, Stuart, Ed.
Schoolyard bullying, a pervasive and significant problem, tends to lead to anti-social behavior in the adult/parental years as well, perpetuating the pattern of violence in a new generation of students. Bullies, and often their victims, tend to operate at a unilateral, or one-way, attitudinal level instead of a reciprocal or collaborative level.…
Leadership Development in Governments of the United Arab Emirates: Re-Framing a Wicked Problem
ERIC Educational Resources Information Center
Mathias, Megan
2017-01-01
Developing the next generation of leaders in government is seen as a strategic challenge of national importance in the United Arab Emirates (UAE). This article examines the wicked nature of the UAE's leadership development challenge, identifying patterns of complexity, uncertainty, and divergence in the strategic intentions underlying current…
Closing the gap between research and management
Deborah M. Finch; Marcia Patton-Mallory
1993-01-01
In this paper, we evaluate the reasons for gaps in communication between researchers and natural resource managers and identify methods to close these gaps. Gaps originate from differing patterns of language use, disparities in organizational culture and values, generation of knowledge that is too narrowly-focused to solve complex problems, failure by managers to relay...
Pattern of Plagiarism in Novice Students' Generated Programs: An Experimental Approach
ERIC Educational Resources Information Center
Ahmadzadeh, Marzieh; Mahmoudabadi, Elham; Khodadadi, Farzad
2011-01-01
Anecdotal evidence shows that in computer programming courses plagiarism is a widespread problem. With the growing number of students in such courses, manual plagiarism detection is impractical. This requires instructors to use one of the many available plagiarism detection tools. Prior to choosing one of such tools, a metric that assures the…
Fuel management optimization using genetic algorithms and code independence
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1994-12-31
Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation and chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
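A generic GA skeleton in Python showing the selection, crossover, and mutation operators described above; the fitness function is a toy surrogate rather than a core-physics evaluation of a reactor loading pattern, and the binary encoding is an assumption made purely for illustration.

```python
import random

random.seed(0)

GENES, POP, GENERATIONS = 20, 30, 100

def fitness(chrom):
    # Toy surrogate objective (illustrative only): reward alternating assignments,
    # standing in for an expensive core-physics evaluation of a loading pattern.
    return sum(1 for a, b in zip(chrom, chrom[1:]) if a != b)

def select(pop):
    # Tournament selection: "survival of the fittest" between two random candidates.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # One-point crossover: the child inherits a mixture of traits from both parents.
    cut = random.randrange(1, GENES)
    return p1[:cut] + p2[cut:]

def mutate(chrom, rate=0.02):
    # Small random changes to the solution.
    return [1 - g if random.random() < rate else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```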
On a production system using default reasoning for pattern classification
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Lowe, Carlyle M.
1990-01-01
This paper addresses an unconventional application of a production system to a problem involving belief specialization. The production system reduces a large quantity of low-level descriptions into just a few higher-level descriptions that encompass the problem space in a more tractable fashion. This classification process utilizes a set of descriptions generated by combining the component hierarchy of a physical system with the semantics of the terminology employed in its operation. The paper describes an application of this process in a program, constructed in C and CLIPS, that classifies signatures of electromechanical system configurations. The program compares two independent classifications, describing the actual and expected system configurations, in order to generate a set of contradictions between the two.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
Pattern Generator for Bench Test of Digital Boards
NASA Technical Reports Server (NTRS)
Berkun, Andrew C.; Chu, Anhua J.
2012-01-01
All efforts to develop electronic equipment reach a stage where they need a board test station for each board. The SMAP digital system consists of three board types that interact with each other using interfaces with critical timing. Each board needs to be tested individually before being combined into the integrated digital electronics system. Each board needs critical timing signals from the others to be able to operate. A bench test system was developed to support testing of each board. The test system produces all the outputs of the control and timing unit, and is delivered much earlier than the timing unit. Timing signals are treated as data. A large file is generated containing the state of every timing signal at any instant. This file is streamed out to an IO card, which is wired directly to the device-under-test (DUT) input pins. This provides a flexible test environment that can be adapted to any of the boards that must be tested in a standalone configuration. The problem of generating the critical timing signals is thus transferred from hardware to software, where it is more easily dealt with.
The effect of butterfly-scale inspired patterning on leading-edge vortex growth
NASA Astrophysics Data System (ADS)
Wilroy, Jacob; Lang, Amy; Wahidi, Redha
2014-11-01
Leading edge vortices (LEVs) are important for generating thrust and lift in flapping flight, and the surface patterning (scales) on butterfly wings is hypothesized to play a role in the vortex formation of the LEV. To simplify this complex flow problem, we designed an experiment to focus on the alteration of 2-D vortex development with a variation in surface patterning. Specifically we are interested in the secondary vorticity generated by the LEV interacting at the patterned surface and how this can affect the growth rate of the circulation in the LEV. For this experiment we used rapid-prototyped longitudinal and transverse square grooves attached to a flat plate and compared the vortex formation as the plate moved vertically. The plate is impulsively started in quiescent water and flow fields at Re = 1500, 3000, and 6000 are examined using Digital Particle Image Velocimetry (DPIV). The vortex formation time is 0.6 and is based on the flat plate travel length and chord length. Support for this research came from NSF REU Grant 1358991 and CBET 1335848.
The effect of butterfly-scale inspired patterning on leading-edge vortex growth
NASA Astrophysics Data System (ADS)
Wilroy, Jacob; Lang, Amy
2015-11-01
Leading edge vortices (LEVs) are important for generating thrust and lift in flapping flight, and the surface patterning (scales) on butterfly wings is hypothesized to play a role in the vortex formation of the LEV. To simplify this complex flow problem, an experiment was designed to focus on the alteration of 2-D vortex development with a variation in surface patterning. Specifically, the secondary vorticity generated by the LEV interacting with the patterned surface was studied, along with its subsequent effect on the growth rate of the circulation in the LEV. For this experiment we used butterfly-inspired grooves attached to a flat plate and compared the vortex formation to a smooth plate case as the plate moved vertically. The plate is impulsively started in quiescent water and flow fields at Re = 1500, 3000, and 6000 are examined using Digital Particle Image Velocimetry (DPIV). The vortex formation time is 3.0 and is based on the flat plate travel length and chord length. We would like to thank the National Science Foundation REU Site Award 1358991 for funding this research.
Threshold matrix for digital halftoning by genetic algorithm optimization
NASA Astrophysics Data System (ADS)
Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero
1998-10-01
Digital halftoning is used both in low and high resolution high quality printing technologies. Our method is designed to be mainly used for low resolution ink jet marking machines to produce both gray tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of genetic algorithm with a search method based on local backtracking was developed together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work was focused on developing low resolution marking technology, the resulting family of dot generators can be applied also in other halftoning application areas including high resolution printing technology.
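The GA-optimized threshold matrices themselves are not given in the abstract; the sketch below only shows how any threshold matrix is applied for halftoning once it exists, using a standard 4x4 Bayer matrix as a stand-in.

```python
import numpy as np

def halftone(gray, threshold_matrix):
    """Binarize a grayscale image (values in [0, 1]) by tiling a threshold matrix over it."""
    h, w = gray.shape
    th, tw = threshold_matrix.shape
    tiled = np.tile(threshold_matrix, (h // th + 1, w // tw + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8)   # 1 where the pixel value exceeds the local threshold

# Illustrative 4x4 Bayer-style matrix (not one of the GA-optimized matrices from the paper),
# with thresholds spread over [0, 1).
bayer4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

gradient = np.tile(np.linspace(0, 1, 64), (16, 1))   # simple grayscale test ramp
print(halftone(gradient, bayer4).mean())             # dot density tracks the mean gray (~0.5 here)
```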
A manual for PARTI runtime primitives
NASA Technical Reports Server (NTRS)
Berryman, Harry; Saltz, Joel
1990-01-01
Primitives are presented that are designed to help users efficiently program irregular problems (e.g., unstructured mesh sweeps, sparse matrix codes, adaptive mesh partial differential equations solvers) on distributed memory machines. These primitives are also designed for use in compilers for distributed memory multiprocessors. Communications patterns are captured at runtime, and the appropriate send and receive messages are automatically generated.
ERIC Educational Resources Information Center
Snarey, John; And Others
1987-01-01
A longitudinal study investigated variations in the coping patterns of 52 married men who experienced infertility problems in their first marriage. Styles of coping considered were initial substitutes, subsequent parenting resolutions, and final marital outcomes and the impact of these variations on the men's subsequent success in achieving…
Wains: a pattern-seeking artificial life species.
de Buitléir, Amy; Russell, Michael; Daly, Mark
2012-01-01
We describe the initial phase of a research project to develop an artificial life framework designed to extract knowledge from large data sets with minimal preparation or ramp-up time. In this phase, we evolved an artificial life population with a new brain architecture. The agents have sufficient intelligence to discover patterns in data and to make survival decisions based on those patterns. The species uses diploid reproduction, Hebbian learning, and Kohonen self-organizing maps, in combination with novel techniques such as using pattern-rich data as the environment and framing the data analysis as a survival problem for artificial life. The first generation of agents mastered the pattern discovery task well enough to thrive. Evolution further adapted the agents to their environment by making them a little more pessimistic, and also by making their brains more efficient.
Kim, Hwi; Min, Sung-Wook; Lee, Byoungho
2008-12-01
Geometrical optics analysis of the structural imperfection of retroreflection corner cubes is described. In the analysis, a geometrical optics model of six-beam reflection patterns generated by an imperfect retroreflection corner cube is developed, and its structural error extraction is formulated as a nonlinear optimization problem. The nonlinear conjugate gradient method is employed for solving the nonlinear optimization problem, and its detailed implementation is described. The proposed method of analysis is a mathematical basis for the nondestructive optical inspection of imperfectly fabricated retroreflection corner cubes.
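The geometrical-optics forward model of the six-beam pattern is not reproduced here; the sketch below only illustrates posing the error extraction as a nonlinear least-squares problem and handing it to a nonlinear conjugate gradient solver (scipy's method="CG"), with a hypothetical `forward` function and made-up observations standing in for the ray-traced model and measured spot data.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up "measured" six-beam quantities (illustrative only).
observed = np.array([1.02, -0.98, 2.05, -1.96, 0.51, -0.49])

def forward(err):
    """Hypothetical stand-in for the geometrical-optics model mapping the three
    dihedral-angle errors of the corner cube to six beam observables."""
    a, b, c = err
    return np.array([a + b, -a - c, 2 * a + c, -2 * b - a, 0.5 * c + b, -0.5 * a - b])

def cost(err):
    r = forward(err) - observed
    return 0.5 * np.dot(r, r)                  # nonlinear least-squares objective

result = minimize(cost, x0=np.zeros(3), method="CG")   # nonlinear conjugate gradient
print(result.x, result.fun)
```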
Is prenatal smoking associated with a developmental pattern of conduct problems in young boys?
Wakschlag, Lauren S; Pickett, Kate E; Kasza, Kristen E; Loeber, Rolf
2006-04-01
Prenatal smoking is robustly associated with increased risk of conduct problems in offspring. Observational studies that provide detailed phenotypic description are critical for generating testable hypotheses about underlying processes through which the effects of prenatal smoking may operate. To this end, we use a developmental framework to examine the association of exposure with (1) oppositional defiant disorder and attention-deficit/hyperactivity disorder in young boys and (2) the pattern of delinquent behavior at adolescence. Using diagnostic measures and repeated measures of delinquency, we compare exposed and nonexposed boys from the youngest cohort of the Pittsburgh Youth Study (N = 448). Exposed boys were significantly more likely to (1) develop oppositional defiant disorder and comorbid oppositional defiant disorder-attention-deficit/hyperactivity disorder but not attention-deficit/hyperactivity disorder alone and (2) to have an earlier onset of significant delinquent behavior. The early emergence and developmental coherence of exposure-related conduct problems is striking and is consistent with a behavioral teratological model. Phenotypically, exposure-related conduct problems appear to be characterized by socially resistant and impulsively aggressive behavior. Whether prenatal smoking plays an etiological role in or is a risk marker for the development of conduct problems, exposed offspring are at increased risk of an early-starter pathway to conduct problems.
NASA Astrophysics Data System (ADS)
Iijima, Aya; Suzuki, Kazumi; Wakao, Shinji; Kawasaki, Norihiro; Usami, Akira
Against a background of environmental problems and energy issues, PV systems are expected to be introduced rapidly and connected to power grids on a large scale. This raises concern about how PV generation will affect supply-demand balancing in electric power, so techniques for accurately estimating PV power generation are becoming increasingly important. PV power generation depends on solar irradiance, module temperature, and solar spectral irradiance, i.e., the distribution of light intensity over wavelength. Because the spectral sensitivity of a solar cell depends on the cell type, spectral irradiance matters for an accurate estimate of PV power generation. Obtaining solar spectral irradiance data is not easy, however, because the instruments that measure it are expensive. With this background, this paper proposes a new method based on statistical pattern recognition for estimating the spectrum center, a representative index of solar spectral irradiance. Numerical examples obtained with the proposed method are also presented.
Binary-Phase Fourier Gratings for Nonuniform Array Generation
NASA Technical Reports Server (NTRS)
Keys, Andrew S.; Crow, Robert W.; Ashley, Paul R.
2003-01-01
We describe a design method for a binary-phase Fourier grating that generates an array of spots with nonuniform, user-defined intensities symmetric about the zeroth order. Like the Dammann fanout grating approach, the binary-phase Fourier grating uses only two phase levels in its grating surface profile to generate the final spot array. Unlike the Dammann fanout grating approach, this method allows for the generation of nonuniform, user-defined intensities within the final fanout pattern. Restrictions governing the specification and realization of the array's individual spot intensities are discussed. Design methods used to realize the grating employ both simulated annealing and nonlinear optimization approaches to locate optimal solutions to the grating design problem. The end-use application driving this development operates in the near- to mid-infrared spectrum - allowing for higher resolution in grating specification and fabrication with respect to wavelength than may be available in visible spectrum applications. Fabrication of a grating generating a user-defined nine spot pattern is accomplished in GaAs for the near-infrared. Characterization of the grating is provided through the measurement of individual spot intensities, array uniformity, and overall efficiency. Final measurements are compared to calculated values with a discussion of the results.
Comparison of three coding strategies for a low cost structure light scanner
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming
2014-12-01
Coded structured light is widely used for 3D scanning, and different coding strategies are adopted to suit different goals. In this paper, three coding strategies are compared, and one of them is selected to implement a low-cost structured light scanner costing under €100. To reach this goal, the projector and the video camera must be as cheap as possible, which leads to some problems related to light coding. With the cheapest projectors, complex intensity patterns cannot be generated; even if they could be, they could not be captured by the cheapest cameras. Based on the Gray code, three different strategies are implemented and compared, called phase-shift, line-shift, and bit-shift, respectively. The bit-shift Gray code is the contribution of this paper, in which a simple, stable light pattern is used to generate dense (mean point distance < 0.4 mm) and accurate (mean error < 0.1 mm) results. The full algorithm details and some examples are presented in the paper.
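The paper's bit-shift variant is not reproduced in the abstract; the sketch below only shows plain binary-reflected Gray-code stripe generation and decoding, which all three compared strategies build on.

```python
import numpy as np

def gray_code_patterns(width, bits):
    """Return `bits` binary stripe patterns (one row per pattern) encoding each column
    index with a reflected Gray code, as used in Gray-code structured light."""
    columns = np.arange(width)
    gray = columns ^ (columns >> 1)                  # binary-reflected Gray code
    patterns = [((gray >> b) & 1).astype(np.uint8) for b in reversed(range(bits))]
    return np.stack(patterns)                        # shape: (bits, width)

def decode(bit_stack):
    """Recover the column index from the per-pixel stack of observed bits."""
    gray = np.zeros(bit_stack.shape[1], dtype=np.int64)
    for row in bit_stack:
        gray = (gray << 1) | row                     # reassemble the Gray-coded value
    binary = gray.copy()
    shift = gray >> 1
    while shift.any():                               # Gray -> binary conversion
        binary ^= shift
        shift >>= 1
    return binary

pats = gray_code_patterns(width=1024, bits=10)
assert (decode(pats) == np.arange(1024)).all()
```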
NASA Astrophysics Data System (ADS)
Zhang, Kai; Li, Jingzhi; He, Zhubin; Yan, Wanfeng
2018-07-01
In this paper, a stochastic optimization framework is proposed to address the microgrid energy dispatching problem with random renewable generation and vehicle activity patterns, which is closer to practical applications. The patterns of energy generation, consumption, and storage availability are all random and initially unknown, and the microgrid controller design (MCD) is formulated as a Markov decision process (MDP). Hence, an online learning-based control algorithm is proposed for the microgrid, which adapts the control policy as knowledge of the system dynamics increases and whose performance converges to that of the optimal algorithm. We adopt a linear approximation to decompose the original value function into a sum of per-battery value functions. As a consequence, the computational complexity is significantly reduced from exponential to linear growth with respect to the size of the battery state space. Monte Carlo simulation of different scenarios demonstrates the effectiveness and efficiency of our algorithm.
Myers, Russell B; Millman, Brandon; Noor, Mohamed A F
2014-04-11
Students in college courses struggle to understand many concepts fundamental to transmission and evolutionary genetics, including multilocus inheritance, recombination, Hardy-Weinberg, and genetic drift. These students consistently ask for more demonstrations and more practice problems. With this demand in mind, the "Genetics and Evolution" app was designed to help students (and their instructors) by providing a suite of tools granting them the ability to: (1) simulate genetic crosses with varying numbers of genes and patterns of inheritance, (2) simulate allele frequency changes under natural selection and/or genetic drift, (3) quiz themselves to reinforce terminology (customizable by any instructor for their whole classroom), (4) solve various problems (recombination fractions, Hardy-Weinberg, heritability, population growth), and (5) generate literally an infinite number of practice problems in all of these areas to try on their own. Although some of these functions are available elsewhere, the alternatives do not have the ability to instantly generate new practice problems or achieve these diverse functions in devices that students carry in their pockets every day. Copyright © 2014 Myers et al.
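As a sketch of the kind of simulation the app offers (not the app's own code), here is a minimal Wright-Fisher genetic drift simulation in Python; population size, starting allele frequency, and replicate count are arbitrary illustrative values.

```python
import random

def drift(p0=0.5, pop_size=100, generations=200, replicates=5, seed=42):
    """Simulate allele-frequency change under pure genetic drift
    (Wright-Fisher binomial sampling of 2N gene copies each generation)."""
    rng = random.Random(seed)
    trajectories = []
    for _ in range(replicates):
        p, traj = p0, [p0]
        for _ in range(generations):
            copies = sum(rng.random() < p for _ in range(2 * pop_size))
            p = copies / (2 * pop_size)
            traj.append(p)
        trajectories.append(traj)
    return trajectories

for traj in drift():
    print(f"start {traj[0]:.2f} -> end {traj[-1]:.2f}")   # some replicates fix, others wander
```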
Selective loss of verbal imagery.
Mehta, Z; Newcombe, F
1996-05-01
This single case study of the ability to generate verbal and non-verbal imagery in a woman who sustained a gunshot wound to the brain reports a significant difficulty in generating images of word shapes but not a significant problem in generating object images. Further dissociation, however, was observed in her ability to generate images of living vs non-living material. She made more errors in imagery and factual information tasks for non-living items than for living items. This pattern contrasts with our previous report of the agnosic patient, M.S., who had severe difficulty in generating images of living material, whereas his ability to image the shape of words was comparable to that of normal control subjects. Furthermore, with regard to the generation of images of living compared with non-living material, M.S. shows more errors with living than nonliving items. In contrast, the present patient, S.M., made significantly more errors with non-living relative to living items. There appear to be two types of double dissociation which reinforce the growing evidence of dissociable impairments in the ability to generate images for different types of verbal and non-verbal material. Such dissociations, presumably related to sensory and cognitive processing demands, address the problem of the neural basis of imagery.
A manual for PARTI runtime primitives, revision 1
NASA Technical Reports Server (NTRS)
Das, Raja; Saltz, Joel; Berryman, Harry
1991-01-01
Primitives are presented that are designed to help users efficiently program irregular problems (e.g., unstructured mesh sweeps, sparse matrix codes, adaptive mesh partial differential equations solvers) on distributed memory machines. These primitives are also designed for use in compilers for distributed memory multiprocessors. Communications patterns are captured at runtime, and the appropriate send and receive messages are automatically generated.
History matching through dynamic decision-making
Maschio, Célio; Santos, Antonio Alberto; Schiozer, Denis; Rocha, Anderson
2017-01-01
History matching is the process of modifying the uncertain attributes of a reservoir model to reproduce the real reservoir performance. It is a classical reservoir engineering problem and plays an important role in reservoir management since the resulting models are used to support decisions in other tasks such as economic analysis and production strategy. This work introduces a dynamic decision-making optimization framework for history matching problems in which new models are generated based on, and guided by, the dynamic analysis of the data of available solutions. The optimization framework follows a ‘learning-from-data’ approach, and includes two optimizer components that use machine learning techniques, such as unsupervised learning and statistical analysis, to uncover patterns of input attributes that lead to good output responses. These patterns are used to support the decision-making process while generating new, and better, history matched solutions. The proposed framework is applied to a benchmark model (UNISIM-I-H) based on the Namorado field in Brazil. Results show the potential the dynamic decision-making optimization framework has for improving the quality of history matching solutions using a substantial smaller number of simulations when compared with a previous work on the same benchmark. PMID:28582413
Xiang, Zuoshuang; Zheng, Jie; Lin, Yu; He, Yongqun
2015-01-01
It is time-consuming to build an ontology with many terms and axioms. Thus it is desirable to automate the process of ontology development. Ontology Design Patterns (ODPs) provide a reusable solution to solve a recurrent modeling problem in the context of ontology engineering. Because ontology terms often follow specific ODPs, the Ontology for Biomedical Investigations (OBI) developers proposed a Quick Term Templates (QTTs) process targeted at generating new ontology classes following the same pattern, using term templates in a spreadsheet format. Inspired by the ODPs and QTTs, the Ontorat web application is developed to automatically generate new ontology terms, annotations of terms, and logical axioms based on specific ODPs. The inputs of an Ontorat execution include axiom expression settings, an input data file, ID generation settings, and a target ontology (optional). The axiom expression settings can be saved as a predesigned Ontorat setting format text file for reuse. The input data file is generated based on a template file created by a specific ODP (text or Excel format). Ontorat is an efficient tool for ontology expansion. Different use cases are described. For example, Ontorat was applied to automatically generate over 1,000 Japan RIKEN cell line cell terms with both logical axioms and rich annotation axioms in the Cell Line Ontology (CLO). Approximately 800 licensed animal vaccines were represented and annotated in the Vaccine Ontology (VO) by Ontorat. The OBI team used Ontorat to add assay and device terms required by the ENCODE project. Ontorat was also used to add missing annotations to all existing Biobank-specific terms in the Biobank Ontology. A collection of ODPs and templates with examples is provided on the Ontorat website and can be reused to facilitate ontology development. With ever-increasing ontology development and applications, Ontorat provides a timely platform for generating and annotating a large number of ontology terms by following design patterns. http://ontorat.hegroup.org/.
An exploration of family dynamics and attachment strategies in a family with ADHD/conduct problems.
Dallos, Rudi; Smart, Cordet
2011-10-01
This article reports the preliminary findings of a study of attachment patterns and relationship themes using the TAAI (Transition to Adulthood Attachment Interview), AAI (Adult Attachment Interview) and family interviews (based on the first of 15 families). Research data is presented on a young man aged 16 with a diagnosis of ADHD and his family. Individual interviews, attachment interviews, and family interviews were conducted in order to explore the link between family dynamics, ADHD and attachment strategies. In contrast to findings from existing research indicating pre-occupied patterns for young people diagnosed with ADHD, the young man displayed a complex 'disoriented' attachment pattern which primarily featured a dismissive strategy. However, this was combined with pre-occupied patterns triggered by intrusions from unresolved traumas and memories of his parents' continuing unresolved conflicts. His sense of confusion and lack of a coherent strategy appeared to be closely related to his position of being triangulated into his parents' conflicts. Trans-generational processes were also influential, in that the parents' corrective intentions at more positive parenting were impeded by their own lack of experience of positive attachments in their own childhoods. The study emphasizes the need to consider the relationship between attachment patterns and problems within wider systemic process in the family, in particular triangulation and corrective scripts.
Activity Structures and the Unfolding of Problem-Solving Actions in High-School Chemistry Classrooms
NASA Astrophysics Data System (ADS)
Criswell, Brett A.; Rushton, Greg T.
2014-02-01
In this paper, we argue for a more systematic approach for studying the relationship between classroom practices and scientific practices—an approach that will likely better support the systemic reforms being promoted in the Next Generation Science Standards in the USA and similar efforts in other countries. One component of that approach is looking at how the nature of the activity structure may influence the relative alignment between classroom and scientific practices. To that end, we build on previously published research related to the practices utilized by five high-school chemistry teachers as they enacted problem-solving activities in which students were likely to generate proposals that were not aligned with normative scientific understandings. In that prior work, our analysis had emphasized micro-level features of the talk interactions and how they related to the way students' ideas were explored; in the current paper, the analysis zooms out to consider the macro-level nature of the enactments associated with the activity structure of each lesson examined. Our data show that there were two general patterns to the activity structure across the 14 lessons scrutinized, and that each pattern had associated with it a constellation of features that impinged on the way the problem space was navigated. A key finding is that both activity structures (the expansive and the open) had features that aligned with scientific practices espoused in the Next Generation Science Standards—and both had features that were not aligned with those practices. We discuss the nature of these two structures, evidence of the relationship of each structure to key features of how the lessons unfolded, and the implications of these findings for both future research and the training of teachers.
Real-time range acquisition by adaptive structured light.
Koninckx, Thomas P; Van Gool, Luc
2006-03-01
The goal of this paper is to provide a "self-adaptive" system for real-time range acquisition. Reconstructions are based on a single frame structured light illumination. Instead of using generic, static coding that is supposed to work under all circumstances, system adaptation is proposed. This occurs on-the-fly and renders the system more robust against instant scene variability and creates suitable patterns at startup. A continuous trade-off between speed and quality is made. A weighted combination of different coding cues--based upon pattern color, geometry, and tracking--yields a robust way to solve the correspondence problem. The individual coding cues are automatically adapted within a considered family of patterns. The weights to combine them are based on the average consistency with the result within a small time-window. The integration itself is done by reformulating the problem as a graph cut. Also, the camera-projector configuration is taken into account for generating the projection patterns. The correctness of the range maps is not guaranteed, but an estimation of the uncertainty is provided for each part of the reconstruction. Our prototype is implemented using unmodified consumer hardware only and, therefore, is cheap. Frame rates vary between 10 and 25 fps, dependent on scene complexity.
The French 35-hour workweek: a wide-ranging social change.
Prunier-Poulmaire, S; Gadbois, C
2001-12-01
The reduction of the legal working week to 35 hours in France has generated wide-ranging social change. We examine the resulting changes in working-time patterns as well as their repercussions on the use of the time gained and on the quality of life and health. To compensate the reduction in the length of the working week, companies have modified the working-time patterns, by extending operation time (shiftwork, atypical schedules) and by matching the on-site workforce to production requirements (flexible working hours). They have sought to make more efficient use of working time: job intensification or job compression. The effects on the off-the-job life and health are linked to the shiftwork and atypical schedules designed to increase the company's operating time, and adjustments to the company's need for flexibilization impose working time/free time patterns that are at odds with biological rhythms and social life patterns. Changes to working-time patterns have unexpected consequences for work organization: heightened difficulties for the individual and the crew. These changes may generate a range of health problems related to overwork and stress. The way some companies have adapted may call into question the usefulness of work done by employees, thus damaging their social identity and mental well-being.
Design pattern mining using distributed learning automata and DNA sequence alignment.
Esmaeilpour, Mansour; Naderifar, Vahideh; Shukur, Zarina
2014-01-01
Over the last decade, design patterns have been used extensively to provide reusable solutions to frequently encountered problems in software engineering and object-oriented programming. A design pattern is a repeatable software design solution that provides a template for solving various instances of a general problem. This paper describes a new method for pattern mining that isolates design patterns and the relationships between them, together with a related tool, DLA-DNA, covering all implemented patterns and all projects used for evaluation. DLA-DNA is based on distributed learning automata (DLA) and deoxyribonucleic acid (DNA) sequence alignment, and it achieves better precision and recall than the other evaluated tools. The proposed method mines structural design patterns in object-oriented source code and extracts the strong and weak relationships between them, enabling analyzers and programmers to determine the dependency rate of each object, component, and other section of the code for parameter passing and modular programming. The proposed model can detect design patterns, and the strengths of their relationships, better than the other available tools, namely Pinot, PTIDEJ, and DPJF. The results demonstrate that whether the source code is written in a standard or non-standard way with respect to design patterns, the proposed method performs close to DPJF and better than Pinot and PTIDEJ. The proposed model was tested on several source codes and compared with other related models and available tools; the results show that the precision and recall of the proposed method are, on average, 20% and 9.6% higher than Pinot, 27% and 31% higher than PTIDEJ, and 3.3% and 2% higher than DPJF, respectively. The proposed method is organized in two steps: in the first step, elemental design patterns are identified; in the second step, they are composed to recognize actual design patterns.
Single-shot thermal ghost imaging using wavelength-division multiplexing
NASA Astrophysics Data System (ADS)
Deng, Chao; Suo, Jinli; Wang, Yuwang; Zhang, Zhili; Dai, Qionghai
2018-01-01
Ghost imaging (GI) is an emerging technique that reconstructs the target scene from its correlated measurements with a sequence of patterns. Restricted by the multi-shot principle, GI usually requires a long acquisition time and is limited in the observation of dynamic scenes. To handle this problem, this paper proposes a single-shot thermal ghost imaging scheme via a wavelength-division multiplexing technique. Specifically, we generate thousands of correlated patterns simultaneously by modulating a broadband light source with a wavelength-dependent diffuser. These patterns carry the scene's spatial information, and the correlated photons are then coupled into a spectrometer for the final reconstruction. This technique increases the speed of ghost imaging and promotes applications in dynamic ghost imaging, with high scalability and compatibility.
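A minimal numerical sketch of the correlation reconstruction underlying thermal ghost imaging. Here the thousands of patterns are simply drawn as random matrices rather than produced by a wavelength-dependent diffuser, and the bucket detector is simulated, so this illustrates only the reconstruction step, not the single-shot optical scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene and an ensemble of random illumination patterns (generated numerically here;
# in the paper the patterns come from a wavelength-dependent diffuser in a single shot).
n, m = 32, 4000                                   # scene is n x n, m patterns
scene = np.zeros((n, n))
scene[8:24, 12:20] = 1.0
patterns = rng.random((m, n, n))

# Bucket (single-pixel) measurements: total light transmitted by the scene per pattern.
bucket = np.tensordot(patterns, scene, axes=([1, 2], [0, 1]))

# Ghost-imaging reconstruction: correlate pattern fluctuations with bucket fluctuations.
recon = np.tensordot(bucket - bucket.mean(), patterns - patterns.mean(axis=0),
                     axes=(0, 0)) / m

print("correlation with ground truth:",
      np.corrcoef(recon.ravel(), scene.ravel())[0, 1])
```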
Using ProMED-Mail and MedWorm blogs for cross-domain pattern analysis in epidemic intelligence.
Stewart, Avaré; Denecke, Kerstin
2010-01-01
In this work we motivate the use of medical blog user-generated content for gathering facts about disease reporting events to support biosurveillance investigation. Given the characteristics of blogs, the extraction of such events is made more difficult by noise and data abundance. We address the problem of automatically inferring disease reporting event extraction patterns in this noisier setting. The sublanguage used in outbreak reports is exploited to align with the sequences of disease reporting sentences in blogs. Based on our Cross Domain Pattern Analysis Framework, experimental results show that Phase-Level sequences tend to produce more overlap across the domains than Word-Level sequences. The cross-domain alignment process is effective at filtering noisy sequences from blogs and extracting good candidate sequence patterns from an abundance of text.
Patterns of source monitoring bias in incarcerated youths with and without conduct problems.
Morosan, Larisa; Badoud, Deborah; Salaminios, George; Eliez, Stephan; Van der Linden, Martial; Heller, Patrick; Debbané, Martin
2018-01-01
Antisocial individuals present behaviours that violate the social norms and the rights of others. In the present study, we examine whether biases in monitoring the self-generated cognitive material might be linked to antisocial manifestations during adolescence. We further examine the association with psychopathic traits and conduct problems (CPs). Sixty-five incarcerated adolescents (IAs; M age = 15.85, SD = 1.30) and 88 community adolescents (CAs; M age = 15.78, SD = 1.60) participated in our study. In the IA group, 28 adolescents presented CPs (M age = 16.06, SD = 1.41) and 19 did not meet the diagnostic criteria for CPs (M age = 15.97, SD = 1.20). Source monitoring was assessed through a speech-monitoring task, using items requiring different levels of cognitive effort; recognition and source-monitoring bias scores (internalising and externalising biases) were calculated. Between-group comparisons indicate greater overall biases and different patterns of biases in the source monitoring. IA participants manifest a greater externalising bias, whereas CA participants present a greater internalising bias. In addition, IA with CPs present different patterns of item recognition. These results indicate that the two groups of adolescents present different types of source-monitoring bias for self-generated speech. In addition, the IAs with CPs present impairments in item recognition. Future studies may examine the developmental implications of self-monitoring biases in the perseverance of antisocial behaviours from adolescence to adulthood.
Simple Köhler Homogenizers for Image-forming Solar Concentrators
NASA Astrophysics Data System (ADS)
Winston, Roland; Zhang, Weiya
2011-12-01
We demonstrate that the Köhler illumination technique can be applied to the image-forming solar concentrators to solve the problem of "hot" spot and to generate the square irradiance pattern. The Köhler homogenizer can be simply a single aspheric lens optimized following a few guidelines. Two examples are given including a Fresnel lens based concentrator and a two-mirror aplanatic system.
Optimal dietary patterns designed from local foods to achieve maternal nutritional goals.
Raymond, Jofrey; Kassim, Neema; Rose, Jerman W; Agaba, Morris
2018-04-04
Achieving nutritional requirements for pregnant and lactating mothers in rural households while maintaining the intake of local and culture-specific foods can be a difficult task. Deploying a linear goal programming approach can effectively generate optimal dietary patterns that incorporate local and culturally acceptable diets. The primary objective of this study was to determine whether a realistic and affordable diet that achieves nutritional goals for rural pregnant and lactating women can be formulated from locally available foods in Tanzania. A cross-sectional study was conducted to assess dietary intakes of 150 pregnant and lactating women using a weighed dietary record (WDR), 24 h dietary recalls and a 7-day food record. A market survey was also carried out to estimate the cost per 100 g of edible portion of foods that are frequently consumed in the study population. Dietary survey and market data were then used to define linear programming (LP) model parameters for diet optimisation. All LP analyses were done using a linear programming solver to generate optimal dietary patterns. Our findings showed that optimal dietary patterns designed from locally available foods would improve dietary adequacy for 15 and 19 selected nutrients in pregnant and lactating women, respectively, but inadequacies remained for iron, zinc, folate, pantothenic acid, and vitamin E, indicating that these are problem nutrients (nutrients that did not achieve 100% of their RNIs in optimised diets) in the study population. These findings suggest that optimal use of local foods can improve dietary adequacy for rural pregnant and lactating women aged 19-50 years. However, additional cost-effective interventions are needed to ensure adequate intakes for the identified problem nutrients.
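A stripped-down illustration of diet optimisation with a linear programming solver (here scipy's linprog): it minimises cost subject to meeting a few nutrient goals, which is simpler than the study's goal-programming formulation, and the food composition values, costs, and goals are placeholders rather than the surveyed Tanzanian data.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data for 4 local foods (per 100 g): cost and three nutrients.
# These numbers are placeholders, not values from the Tanzanian survey.
foods     = ["maize flour", "beans", "leafy greens", "small fish"]
cost      = np.array([0.05, 0.12, 0.08, 0.30])           # currency units per 100 g
nutrients = np.array([[360,  340,   35,  320],            # energy (kcal)
                      [  9,   21,    3,   60],            # protein (g)
                      [  2,    5,    4,    3]])           # iron (mg)
rni       = np.array([2500, 60, 27])                       # daily goals for the three nutrients
max_serv  = 15                                             # at most 1.5 kg per day of any food

# Minimize cost subject to meeting every nutritional goal:
#   min c^T x   s.t.  N x >= rni  (written as -N x <= -rni),  0 <= x <= max_serv
res = linprog(c=cost,
              A_ub=-nutrients, b_ub=-rni,
              bounds=[(0, max_serv)] * len(foods),
              method="highs")

if res.success:
    for name, grams in zip(foods, 100 * res.x):
        print(f"{name}: {grams:.0f} g/day")
    print("daily cost:", round(res.fun, 2))
```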
A self-learning camera for the validation of highly variable and pseudorandom patterns
NASA Astrophysics Data System (ADS)
Kelley, Michael
2004-05-01
Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing, without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.
Layout decomposition of self-aligned double patterning for 2D random logic patterning
NASA Astrophysics Data System (ADS)
Ban, Yongchan; Miloslavsky, Alex; Lucas, Kevin; Choi, Soo-Han; Park, Chul-Hong; Pan, David Z.
2011-04-01
Self-aligned double patterning (SADP) has been adopted as a promising solution for sub-30 nm technology nodes due to its reduced overlay problems and better process tolerance. SADP is in production use for 1D dense patterns with good pitch control, such as NAND Flash memory applications, but it is still challenging to apply SADP to 2D random logic patterns. The favored type of SADP for complex logic interconnects is a two-mask approach using a core mask and a trim mask. In this paper, we first describe layout decomposition methods for spacer-type double patterning lithography, then report a type of SADP-compliant layout, and finally report SADP applications on a Samsung 22 nm SRAM layout. For SADP decomposition, we propose several SADP-aware layout coloring algorithms and a method of generating lithography-friendly core mask patterns. Experimental results on 22 nm node designs show that our proposed layout decomposition for SADP effectively decomposes any given layout.
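SADP core/trim decomposition involves more than plain mask coloring (spacer loops, core/mandrel generation, trim synthesis); the sketch below only illustrates the conflict-graph two-coloring idea that layout coloring algorithms for double patterning generally build on, with a toy conflict list standing in for real design-rule extraction.

```python
from collections import deque

def two_color(n_patterns, conflicts):
    """Try to 2-color a conflict graph (features closer than the minimum same-mask
    pitch share an edge). Returns a color list, or None if an odd cycle blocks coloring."""
    adj = [[] for _ in range(n_patterns)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    color = [None] * n_patterns
    for start in range(n_patterns):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None          # coloring conflict: layout not decomposable as-is
    return color

# Five features; edges mark pairs spaced below the same-mask pitch.
print(two_color(5, [(0, 1), (1, 2), (2, 3), (3, 4)]))   # [0, 1, 0, 1, 0]
print(two_color(3, [(0, 1), (1, 2), (2, 0)]))           # None (odd cycle)
```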
The Intergenerational Circumstances of Household Food Insecurity and Adversity.
Chilton, Mariana; Knowles, Molly; Bloom, Sandra L
2017-04-03
Household food insecurity is linked with exposure to violence and adversity throughout the life course, suggesting its transfer across generations. Using grounded theory, we analyzed semistructured interviews with 31 mothers reporting household food insecurity where participants described major life events and social relationships. Through the lens of multigenerational interactions, 4 themes emerged: (1) hunger and violence across the generations, (2) disclosure to family and friends, (3) depression and problems with emotional management, and (4) breaking out of intergenerational patterns. After describing these themes and how they relate to reports of food insecurity, we identify opportunities for social services and policy intervention.
Mirus, B.B.; Ebel, B.A.; Heppner, C.S.; Loague, K.
2011-01-01
Concept development simulation with distributed, physics-based models provides a quantitative approach for investigating runoff generation processes across environmental conditions. Disparities within data sets employed to design and parameterize boundary value problems used in heuristic simulation inevitably introduce various levels of bias. The objective was to evaluate the impact of boundary value problem complexity on process representation for different runoff generation mechanisms. The comprehensive physics-based hydrologic response model InHM has been employed to generate base case simulations for four well-characterized catchments. The C3 and CB catchments are located within steep, forested environments dominated by subsurface stormflow; the TW and R5 catchments are located in gently sloping rangeland environments dominated by Dunne and Horton overland flows. Observational details are well captured within all four of the base case simulations, but the characterization of soil depth, permeability, rainfall intensity, and evapotranspiration differs for each. These differences are investigated through the conversion of each base case into a reduced case scenario, all sharing the same level of complexity. Evaluation of how individual boundary value problem characteristics impact simulated runoff generation processes is facilitated by quantitative analysis of integrated and distributed responses at high spatial and temporal resolution. Generally, the base case reduction causes moderate changes in discharge and runoff patterns, with the dominant process remaining unchanged. Moderate differences between the base and reduced cases highlight the importance of detailed field observations for parameterizing and evaluating physics-based models. Overall, similarities between the base and reduced cases indicate that the simpler boundary value problems may be useful for concept development simulation to investigate fundamental controls on the spectrum of runoff generation mechanisms. Copyright 2011 by the American Geophysical Union.
Fast generation of Fresnel holograms based on multirate filtering.
Tsang, Peter; Liu, Jung-Ping; Cheung, Wai-Keung; Poon, Ting-Chung
2009-12-01
One of the major problems in computer-generated holography is the high computation cost involved for the calculation of fringe patterns. Recently, the problem has been addressed by imposing a horizontal parallax only constraint whereby the process can be simplified to the computation of one-dimensional sublines, each representing a scan plane of the object scene. Subsequently the sublines can be expanded to a two-dimensional hologram through multiplication with a reference signal. Furthermore, economical hardware is available with which sublines can be generated in a computationally free manner with high throughput of approximately 100 M pixels/second. Apart from decreasing the computation loading, the sublines can be treated as intermediate data that can be compressed by simply downsampling the number of sublines. Despite these favorable features, the method is suitable only for the generation of white light (rainbow) holograms, and the resolution of the reconstructed image is inferior to the classical Fresnel hologram. We propose to generate holograms from one-dimensional sublines so that the above-mentioned problems can be alleviated. However, such an approach also leads to a substantial increase in computation loading. To overcome this problem we encapsulated the conversion of sublines to holograms as a multirate filtering process and implemented the latter by use of a fast Fourier transform. Evaluation reveals that, for holograms of moderate size, our method is capable of operating 40,000 times faster than the calculation of Fresnel holograms based on the precomputed table lookup method. Although there is no relative vertical parallax between object points at different distance planes, a global vertical parallax is preserved for the object scene as a whole and the reconstructed image can be observed easily.
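The subline-to-hologram conversion itself is not reproducible from the abstract; the sketch below only illustrates the generic multirate-filtering building block the method is built on, namely interpolation by zero-stuffing followed by an FFT-implemented low-pass filter. The test signal and upsampling factor are invented for the demo.

```python
import numpy as np

def upsample_fft_lowpass(x, L):
    """Expand a 1-D signal by the factor L: insert L-1 zeros between samples
    (zero-stuffing) and suppress the resulting spectral images with an ideal
    low-pass filter applied in the FFT domain."""
    up = np.zeros(len(x) * L, dtype=complex)
    up[::L] = x                        # zero-stuffing
    X = np.fft.fft(up)
    keep = len(x) // 2                 # original baseband only
    mask = np.zeros(len(up))
    mask[:keep] = 1.0
    mask[-keep:] = 1.0
    return np.fft.ifft(X * mask) * L   # gain L compensates the inserted zeros

t = np.arange(64)
subline = np.cos(2 * np.pi * t / 16)
print(upsample_fft_lowpass(subline, L=4)[:8].real.round(3))
```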
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elizondo, Marcelo A.; Samaan, Nader A.; Makarov, Yuri V.
Voltage and reactive power system control is generally performed following usual patterns of loads, based on off-line studies for daily and seasonal operations. This practice is currently challenged by the inclusion of distributed renewable generation, such as solar. There has been focus on resolving this problem at the distribution level; however, the transmission and sub-transmission levels have received less attention. This paper provides a literature review of proposed methods and solution approaches to coordinate and optimize voltage control and reactive power management, with an emphasis on applications at transmission and sub-transmission level. The conclusion drawn from the survey is that additional research is needed in the areas of optimizing switch shunt actions and coordinating all available resources to deal with uncertain patterns from increasing distributed renewable generation in the operational time frame. These topics are not deeply explored in the literature.
Sum-Frequency Generation from a Thin Cylindrical Layer
NASA Astrophysics Data System (ADS)
Shamyna, A. A.; Kapshai, V. N.
2018-01-01
In the Rayleigh-Gans-Debye approximation, we have solved the problem of the sum-frequency generation by two plane elliptically polarized electromagnetic waves from the surface of a dielectric particle of a cylindrical shape that is coated by a thin layer possessing nonlinear optical properties. The formulas that describe the sum-frequency field have been presented in the tensor and vector forms for the second-order nonlinear dielectric susceptibility tensor, which was chosen in the general form, containing chiral components. Expressions describing the sum-frequency field from the cylindrical particle ends have been obtained for the case of a nonlinear layer possessing chiral properties. Three-dimensional directivity patterns of the sum-frequency radiation have been analyzed for different combinations of parameters (angles of incidence, degrees of ellipticity, orientations of polarization ellipses, cylindrical particle dimensions). The mathematical properties of the spatial distribution functions of the sum-frequency field, which characterize the symmetry of directivity patterns, have been revealed.
A Genetic Algorithm for the Generation of Packetization Masks for Robust Image Communication
Zapata-Quiñones, Katherine; Duran-Faundez, Cristian; Gutiérrez, Gilberto; Lecuire, Vincent; Arredondo-Flores, Christopher; Jara-Lipán, Hugo
2017-01-01
Image interleaving has proven to be an effective solution for providing robustness in image communication systems when resource limitations make reliable protocols unsuitable (e.g., in wireless camera sensor networks); however, the search for optimal interleaving patterns is scarcely tackled in the literature. In 2008, Rombaut et al. presented an interesting approach introducing a packetization mask generator based on Simulated Annealing (SA), including a cost function that allows assessing the suitability of a packetization pattern while avoiding extensive simulations. In this work, we present a complementary study of the non-trivial problem of generating optimal packetization patterns. We propose a genetic algorithm as an alternative to the cited work, adopting the mentioned cost function, and compare it to the SA approach and a torus automorphism interleaver. In addition, we validate the cost function and provide results aimed at assessing its influence on the quality of reconstructed images. Several scenarios based on visual sensor network applications were tested in a computer application. Results in terms of the selected cost function and the image quality metric PSNR show that our algorithm performs comparably to the other approaches. Finally, we discuss the obtained results and comment on open research challenges. PMID:28452934
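A minimal genetic algorithm over packetization masks might look like the sketch below. The grid size, packet count and, above all, the cost function are placeholders; the paper instead adopts the cost function of Rombaut et al.

```python
import random

GRID, PACKETS = 8, 4   # toy 8x8 image split into 4 packets

def cost(mask):
    """Placeholder cost: number of 4-neighbour pixel pairs assigned to the same
    packet (lower is better, so that losing one packet removes scattered rather
    than contiguous pixels)."""
    c = 0
    for y in range(GRID):
        for x in range(GRID):
            p = mask[y * GRID + x]
            if x + 1 < GRID and mask[y * GRID + x + 1] == p:
                c += 1
            if y + 1 < GRID and mask[(y + 1) * GRID + x] == p:
                c += 1
    return c

def genetic_search(pop_size=40, gens=200, pmut=0.02):
    pop = [[random.randrange(PACKETS) for _ in range(GRID * GRID)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        parents = pop[:pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GRID * GRID)  # one-point crossover
            child = [random.randrange(PACKETS) if random.random() < pmut else g
                     for g in a[:cut] + b[cut:]]    # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

print("best placeholder cost:", cost(genetic_search()))
```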
Recurrence Methods for the Identification of Morphogenetic Patterns
Facchini, Angelo; Mocenni, Chiara
2013-01-01
This paper addresses the problem of identifying the parameters involved in the formation of spatial patterns in nonlinear two-dimensional systems. To this aim, we perform numerical experiments on a prototypical model generating morphogenetic Turing patterns, changing both the spatial frequency and the shape of the patterns. The features of the patterns and their relationship with the model parameters are characterized by means of the Generalized Recurrence Quantification measures. We show that the recurrence measures Determinism and Recurrence Entropy, as well as the distribution of the line lengths, allow for a full characterization of the patterns in terms of power law decay with respect to the parameters involved in the determination of their spatial frequency and shape. A comparison with the standard two-dimensional Fourier transform is performed, and the results show a better performance of the recurrence indicators in identifying a reliable connection with the spatial frequency of the patterns. Finally, in order to evaluate the robustness of the estimation of the power law decay, extensive simulations have been performed by adding different levels of noise to the patterns. PMID:24066062
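For readers unfamiliar with recurrence quantification, the sketch below computes the recurrence rate and Determinism of a 1-D series; the Generalized RQA used in the paper extends these measures to 2-D patterns, so this only illustrates the underlying quantities.

```python
import numpy as np

def recurrence_measures(x, eps, lmin=2):
    """Recurrence rate and Determinism of a 1-D series: build the thresholded
    recurrence matrix and count the recurrence points that lie on diagonal
    lines of length >= lmin.  (The trivial main diagonal is kept here for
    brevity; RQA implementations usually exclude it.)"""
    x = np.asarray(x, dtype=float)
    R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)
    rec_points = R.sum()
    det_points = 0
    n = len(x)
    for k in range(-(n - 1), n):
        run = 0
        for v in list(np.diagonal(R, k)) + [0]:   # trailing 0 flushes the run
            if v:
                run += 1
            else:
                if run >= lmin:
                    det_points += run
                run = 0
    return R.mean(), det_points / rec_points

profile = np.sin(np.linspace(0, 8 * np.pi, 200))
print(recurrence_measures(profile, eps=0.1))
```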
Hybrid generative-discriminative approach to age-invariant face recognition
NASA Astrophysics Data System (ADS)
Sajid, Muhammad; Shafique, Tamoor
2018-03-01
Age-invariant face recognition is still a challenging research problem due to the complex aging process involving different types of facial tissues, skin, fat, muscles, and bones. Most of the related studies that have addressed the aging problem are focused on generative representation (aging simulation) or discriminative representation (feature-based approaches). Designing an appropriate hybrid approach that takes into account both the generative and discriminative representations for age-invariant face recognition remains an open problem. We perform a hybrid matching to achieve robustness to aging variations. This approach automatically segments the eyes, nose-bridge, and mouth regions, which are relatively less sensitive to aging variations compared with the rest of the facial regions that are age-sensitive. The aging variations of age-sensitive facial parts are compensated using a demographic-aware generative model based on a bridged denoising autoencoder. The age-insensitive facial parts are represented by pixel average vector-based local binary patterns. Deep convolutional neural networks are used to extract relative features of age-sensitive and age-insensitive facial parts. Finally, the feature vectors of the age-sensitive and age-insensitive facial parts are fused to achieve the recognition results. Extensive experimental results on the morphological face database II (MORPH II), the face and gesture recognition network (FG-NET), and the verification subset of the cross-age celebrity dataset (CACD-VS) demonstrate the effectiveness of the proposed method for age-invariant face recognition.
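The age-insensitive parts are encoded with local binary patterns; as a reminder of the elementary LBP coding step (the paper's pixel-average-vector-based variant adds further processing on top), a minimal sketch:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern for the interior pixels of a
    grayscale image: each neighbour that is >= the centre pixel contributes
    one bit of the 8-bit code."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),   # neighbours clockwise
               (1, 1), (1, 0), (1, -1), (0, -1)]     # from the top-left
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neigh >= centre).astype(int) << bit
    return codes

print(lbp_image(np.random.randint(0, 256, size=(6, 6))))
```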
NASA Astrophysics Data System (ADS)
Yang, Peng; Peng, Yongfei; Ye, Bin; Miao, Lixin
2017-09-01
This article explores the integrated optimization problem of location assignment and sequencing in multi-shuttle automated storage/retrieval systems under the modified 2n-command cycle pattern. The decisions of storage and retrieval (S/R) location assignment and S/R request sequencing are jointly considered. An integer quadratic programming model is formulated to describe this integrated optimization problem. The optimal travel cycles for multi-shuttle S/R machines, processing the S/R requests in the storage and retrieval request order lists, can be obtained by solving the model. The small-sized instances are solved optimally using CPLEX. For large-sized problems, two tabu search algorithms are proposed, in which the first-come-first-served and nearest-neighbour rules are used to generate initial solutions. Various numerical experiments are conducted to examine the heuristics' performance and the sensitivity of the algorithm parameters. Furthermore, the experimental results are analysed from the viewpoint of practical application, and a parameter list for applying the proposed heuristics is recommended for different real-life scenarios.
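A bare-bones version of such a tabu search, seeded with a nearest-neighbour initial sequence and using a swap neighbourhood, is sketched below; the travel-cost function is a Manhattan-distance placeholder rather than the modified 2n-command cycle cost of the paper, and the request coordinates are random.

```python
import random
from itertools import combinations

def travel_cost(seq, pos):
    """Placeholder travel time: Manhattan distance between the rack locations
    of consecutive requests."""
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in zip(seq, seq[1:]))

def nearest_neighbour(pos):
    """Start anywhere, then always visit the closest remaining request."""
    todo = set(pos)
    cur = todo.pop()
    seq = [cur]
    while todo:
        cur = min(todo, key=lambda r: abs(pos[cur][0] - pos[r][0]) +
                                      abs(pos[cur][1] - pos[r][1]))
        todo.remove(cur)
        seq.append(cur)
    return seq

def tabu_search(pos, iters=200, tenure=10):
    seq = nearest_neighbour(pos)
    best, best_cost = seq[:], travel_cost(seq, pos)
    tabu = {}
    for it in range(iters):
        candidates = []
        for i, j in combinations(range(len(seq)), 2):
            if tabu.get((i, j), -1) >= it:          # move is still tabu
                continue
            cand = seq[:]
            cand[i], cand[j] = cand[j], cand[i]
            candidates.append((travel_cost(cand, pos), (i, j), cand))
        if not candidates:
            break
        cost, move, seq = min(candidates)           # best non-tabu swap
        tabu[move] = it + tenure                    # forbid undoing it for a while
        if cost < best_cost:
            best, best_cost = seq[:], cost
    return best, best_cost

requests = {f"r{k}": (random.randint(0, 20), random.randint(0, 5)) for k in range(12)}
print(tabu_search(requests))
```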
Indexing a sequence for mapping reads with a single mismatch.
Crochemore, Maxime; Langiu, Alessio; Rahman, M Sohel
2014-05-28
Mapping reads against a genome sequence is an interesting and useful problem in computational molecular biology and bioinformatics. In this paper, we focus on the problem of indexing a sequence for mapping reads with a single mismatch. We first consider a simpler version of the problem in which the length of the pattern is given beforehand, during the data structure construction; this version is interesting in its own right in the context of next generation sequencing. We then show how to solve the more general problem. In both cases, our algorithm can construct an efficient data structure in O(n log^(1+ε) n) time and space and can answer subsequent queries in O(m log log n + K) time. Here, n is the length of the sequence, m is the length of the read, 0 < ε < 1, and K is the optimal output size.
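The index itself cannot be reproduced from the abstract, but the pigeonhole idea behind single-mismatch matching is easy to illustrate: a read that matches with at most one mismatch must match one of its two halves exactly. The sketch below uses plain string search in place of the paper's index.

```python
def map_read_one_mismatch(text, read):
    """Return all positions where `read` aligns to `text` with at most one
    mismatch.  Each exact occurrence of either half of the read yields a
    candidate alignment, which is then verified by direct comparison."""
    half = len(read) // 2
    hits = set()
    for part, offset in ((read[:half], 0), (read[half:], half)):
        start = text.find(part)
        while start != -1:
            cand = start - offset                   # implied read start
            if 0 <= cand <= len(text) - len(read):
                window = text[cand:cand + len(read)]
                if sum(a != b for a, b in zip(window, read)) <= 1:
                    hits.add(cand)
            start = text.find(part, start + 1)
    return sorted(hits)

genome = "ACGTACGTTAGGACGTACCT"
print(map_read_one_mismatch(genome, "ACGTACCT"))   # [0, 12]
```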
Prospect of EUV mask repair technology using e-beam tool
NASA Astrophysics Data System (ADS)
Kanamitsu, Shingo; Hirano, Takashi; Suga, Osamu
2010-09-01
Currently, the repair machines used for advanced photomasks utilize methods such as FIB, AFM, and EB. Each has its specific characteristics and is therefore used in the situations for which it is suited. For the EUV generation, however, pattern sizes are expected to fall below 80 nm, so higher image resolution and repair accuracy are required of these machines. Because FIB tools suffer from intrinsic damage induced by Ga ions and AFM tools face a critical tip-size issue, both are difficult to apply to the EUV generation. Consequently, we focused our research on the EB repair tool. The EB repair tool has already passed practical milestones for MoSi-based masks. We applied the same process used for MoSi masks to EUV blanks and examined the reaction. We found severe problems: the process was hard to control because of the extremely strong reaction between the etching gas and the absorber material. Although we could etch opaque defects with the conventional method and obtain straight edges as judged from top-down SEM viewing, problems such as sidewall undercut and local erosion occurred depending on the defect shape. To cope with these problems, the tool vendor developed a new process and reported it at an international conference [1]. We have evaluated this new process in detail, and in this paper we present the results of those evaluations. Several experiments on repair accuracy, process stability, and other items were performed under practically relevant conditions with defects of various sizes and shapes. A series of actual printability tests is also included. On the basis of these experiments, we consider the possibility of applying EB repair to 20nm patterns.
Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel
2011-09-01
The problem of tuning the parameters of nonlinear dynamical systems such that the attained results are considered good is a relevant one. This article describes the development of a gait optimization system that allows a fast but stable robot quadruped crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GAs). The CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPG parameters which attain good gaits in terms of speed, vibration and stability. Moreover, two constraint handling techniques based on tournament selection and a repairing mechanism are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, performed on a simulated Aibo robot, demonstrate that our approach allows low vibration with a high velocity and a wide stability margin for a quadruped slow crawl gait.
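One common way to realize such a CPG (not necessarily the exact model or parameter values of the paper) is a ring of coupled Hopf oscillators, one per leg, locked at fixed phase offsets; the GA would then tune quantities such as frequency, amplitude and coupling. A minimal sketch with illustrative parameters:

```python
import numpy as np

def cpg_crawl(T=20.0, dt=0.01, mu=1.0, alpha=5.0, omega=2 * np.pi, k=2.0):
    """Four Hopf oscillators in a ring, each pulled toward the state of the
    previous leg rotated by 90 degrees, so the legs settle into the quarter-
    period phase lags of a crawl gait.  x[i] would drive the hip set-point of
    leg i."""
    n, delta = 4, np.pi / 2
    x = np.random.uniform(-0.1, 0.1, n)
    y = np.random.uniform(-0.1, 0.1, n)
    history = []
    for _ in np.arange(0.0, T, dt):
        r2 = x**2 + y**2
        xr = np.cos(delta) * np.roll(x, 1) - np.sin(delta) * np.roll(y, 1)
        yr = np.sin(delta) * np.roll(x, 1) + np.cos(delta) * np.roll(y, 1)
        dx = alpha * (mu - r2) * x - omega * y + k * (xr - x)
        dy = alpha * (mu - r2) * y + omega * x + k * (yr - y)
        x, y = x + dt * dx, y + dt * dy   # explicit Euler integration
        history.append(x.copy())
    return np.array(history)

print(cpg_crawl()[-5:].round(2))
```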
Ginzburg-Landau equation as a heuristic model for generating rogue waves
NASA Astrophysics Data System (ADS)
Lechuga, Antonio
2016-04-01
Envelope equations have many applications in the study of physical systems. Particularly interesting is the case of surface water waves. Under steady conditions, laboratory experiments are carried out for multiple purposes, either for research or for practical problems. In both cases envelope equations are useful for understanding qualitative and quantitative results. The Ginzburg-Landau equation provides an excellent model for systems of that kind with remarkable patterns. Taking into account the above, the main aim of our work is to generate waves in a water tank with an almost symmetric spectrum, following Akhmediev (2011), and thus to produce a succession of rogue waves. The envelope of these waves exhibits patterns whose model is a type of Ginzburg-Landau equation, Danilov et al. (1988). From a heuristic point of view, the link between the experiment and the model is thereby achieved. The next step consists of changing the generating parameters of the water tank and also the coefficients of the Ginzburg-Landau equation, Lechuga (2013), in order to reach a sufficiently good agreement.
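For reference, a standard normalized form of the one-dimensional complex Ginzburg-Landau equation for the wave envelope A(x, t) is given below; the real coefficients b and c (dispersion and nonlinearity) are the quantities an experiment of this kind would need to fit.

```latex
\frac{\partial A}{\partial t}
  = A + (1 + i\,b)\,\frac{\partial^{2} A}{\partial x^{2}}
      - (1 + i\,c)\,\lvert A\rvert^{2} A .
```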
Daniela Biondi; Antonio Carlos Batista; Angeline Martini
2013-01-01
Urban growth worldwide has generated great concern in the planning of the different environments belonging to the wildland-urban interface. One of the problems that arise is the landscape treatment given to roads, which must not only comply with aesthetic and ecological principles, but also be functional, adding functions relating to forest fire prevention and control...
Simple Köhler homogenizers for image-forming solar concentrators
NASA Astrophysics Data System (ADS)
Zhang, Weiya; Winston, Roland
2010-08-01
By adding simple Köhler homogenizers in the form of aspheric lenses generated with an optimization approach, we solve the problems of non-uniform irradiance distribution and non-square irradiance pattern existing in some image-forming solar concentrators. The homogenizers do not require optical bonding to the solar cells or total internal reflection surface. Two examples are shown including a Fresnel lens based concentrator and a two-mirror aplanatic system.
The effect of butterfly-scale inspired patterning on leading-edge vortex growth
NASA Astrophysics Data System (ADS)
Wilroy, Jacob Aaron
Leading edge vortices (LEVs) are important for generating thrust and lift in flapping flight, and the surface patterning (scales) on butterfly wings is hypothesized to play a role in the formation of the LEV. To simplify this complex flow problem, an experiment was designed to focus on the alteration of 2-D vortex development with a variation in surface patterning. Specifically, the secondary vorticity generated by the LEV interacting with the patterned surface was studied, as well as the subsequent effect on the LEV's growth rate and peak circulation. For this experiment, rapid-prototyped grooves based on the scale geometry of the Monarch butterfly (Danaus plexippus) were created using additive manufacturing and were attached to a flat plate with a chordwise orientation, thus increasing the plate surface area. The vortex generated by the grooved plate was then compared to a smooth plate case in an experiment where the plate translated vertically through a 2 x 3 x 5 cubic foot tow tank. The plate was impulsively started in quiescent water, and flow fields at Rec = 1416, 2833, and 5667 were examined using Digital Particle Image Velocimetry (DPIV). The maximum vortex formation number is 2.8 and is based on the flat plate travel length and chord length. Flow fields from each case show the generation of a secondary vortex whose interaction with the shear layer and LEV caused different behaviors depending upon the surface type. The vortex development process varied with Reynolds number: for the lowest Reynolds number case no significant difference exists between surface types; however, for the other two cases the grooves affected the secondary vortex's behavior and the LEV's ability to grow at a rate similar to the smooth plate case.
Aymerich, María; Nieto, Daniel; Álvarez, Ezequiel; Flores-Arias, María T
2017-02-22
A laser-based technique for microstructuring titanium and tantalum substrates using the Talbot effect and an array of microlenses is presented. By using this hybrid technique, we are able to generate different patterns and geometries on the top surfaces of the biomaterials. The Talbot effect allows us to perform the microstructuring rapidly, solving the common problems of using microlenses for multipatterning, where the material expelled during the ablation of biomaterials damages the microlens. The Talbot effect also permits us to increase the working distance and reduce the period of the patterns. We further demonstrate that the geometries and patterns act as anchor points for cells, affecting cell adhesion to the metallic substrates and guiding how the cells spread over the material.
Aymerich, María; Nieto, Daniel; Álvarez, Ezequiel; Flores-Arias, María T.
2017-01-01
A laser-based technique for microstructuring titanium and tantalum substrates using the Talbot effect and an array of microlenses is presented. By using this hybrid technique, we are able to generate different patterns and geometries on the top surfaces of the biomaterials. The Talbot effect allows us to perform the microstructuring rapidly, solving the common problems of using microlenses for multipatterning, where the material expelled during the ablation of biomaterials damages the microlens. The Talbot effect also permits us to increase the working distance and reduce the period of the patterns. We further demonstrate that the geometries and patterns act as anchor points for cells, affecting cell adhesion to the metallic substrates and guiding how the cells spread over the material. PMID:28772574
Nice or effective? Social problem solving strategies in patients with major depressive disorder.
Thoma, Patrizia; Schmidt, Tobias; Juckel, Georg; Norra, Christine; Suchan, Boris
2015-08-30
Our study addressed distinct aspects of social problem solving in 28 hospitalized patients with Major Depressive Disorder (MDD) and 28 matched healthy controls. Three scenario-based tests assessed the ability to infer the mental states of story characters in difficult interpersonal situations, the capacity to freely generate good strategies for dealing with such situations and the ability to identify the best solutions among less optimal alternatives. Also, standard tests assessing attention, memory, executive function and trait empathy were administered. Compared to controls, MDD patients showed impaired interpretation of other peoples' sarcastic remarks but not of the mental states underlying other peoples' actions. Furthermore, MDD patients generated fewer strategies that were socially sensitive and practically effective at the same time or at least only socially sensitive. Overall, while the free generation of adequate strategies for difficult social situations was impaired, recognition of optimal solutions among alternatives was spared in MDD patients. Higher generation scores were associated with higher trait empathy and cognitive flexibility scores. We suggest that this specific pattern of impairments ought to be considered in the development of therapies addressing impaired social skills in MDD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Online Optimization Method for Operation of Generators in a Micro Grid
NASA Astrophysics Data System (ADS)
Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi
Recently, many studies and developments concerning distributed generators, such as photovoltaic generation systems, wind turbine generation systems and fuel cells, have been carried out against the background of global environmental issues and deregulation of the electricity market, and the technology of these distributed generators has progressed. In particular, the micro grid, which consists of several distributed generators, loads and a storage battery, is expected to become one of the new operation systems for distributed generation. However, since steep load fluctuations occur in a micro grid because of its smaller capacity compared with a conventional power system, high-accuracy load forecasting and a control scheme that balances supply and demand are needed. In other words, it is necessary to improve the precision of micro grid operation by observing load fluctuations and correcting the start-stop schedule and output of the generators online. But it is not easy to determine the operation schedule of each generator in a short time, because determining the start-up, shut-down and output of each generator in a micro grid is a mixed-integer programming problem. In this paper, the authors propose an online optimization method for the optimal operation schedule of generators in a micro grid. The proposed method is based on an enumeration method and particle swarm optimization (PSO). In the proposed method, after all unit commitment patterns of the generators that satisfy the minimum up-time and minimum down-time constraints are enumerated, the optimal schedule and output of the generators are determined under the other operational constraints by using PSO. A numerical simulation is carried out for a micro grid model with five generators and a photovoltaic generation system in order to examine the validity of the proposed method.
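A minimal PSO for the second stage, dispatching the outputs of an already committed set of generators so that a quadratic fuel cost plus a supply-demand penalty is minimized, could look like the sketch below; the generator limits, cost coefficients, demand and penalty weight are placeholders, and the paper's version additionally handles the other operational constraints and the time-coupled schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
P_MIN = np.array([10.0, 20.0, 15.0])                         # placeholder limits
P_MAX = np.array([60.0, 80.0, 50.0])
COST = np.array([[0.020, 2.0], [0.015, 2.5], [0.030, 1.8]])  # a*P^2 + b*P
DEMAND = 120.0

def dispatch_cost(p):
    p = np.clip(p, P_MIN, P_MAX)
    fuel = np.sum(COST[:, 0] * p**2 + COST[:, 1] * p)
    return fuel + 100.0 * abs(p.sum() - DEMAND)              # supply-demand penalty

def pso(n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(P_MIN, P_MAX, (n_particles, len(P_MIN)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([dispatch_cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, P_MIN, P_MAX)
        f = np.array([dispatch_cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, dispatch_cost(gbest)

print(pso())
```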
Social cognition and social problem solving abilities in individuals with alcohol use disorder.
Schmidt, Tobias; Roser, Patrik; Juckel, Georg; Brüne, Martin; Suchan, Boris; Thoma, Patrizia
2016-11-01
Up to now, little is known about higher order cognitive abilities like social cognition and social problem solving abilities in alcohol-dependent patients. However, impairments in these domains lead to an increased probability for relapse and are thus highly relevant in treatment contexts. This cross-sectional study assessed distinct aspects of social cognition and social problem solving in 31 hospitalized patients with alcohol use disorder (AUD) and 30 matched healthy controls (HC). Three ecologically valid scenario-based tests were used to gauge the ability to infer the mental state of story characters in complicated interpersonal situations, the capacity to select the best problem solving strategy among other less optimal alternatives, and the ability to freely generate appropriate strategies to handle difficult interpersonal conflicts. Standardized tests were used to assess executive function, attention, trait empathy, and memory, and correlations were computed between measures of executive function, attention, trait empathy, and tests of social problem solving. AUD patients generated significantly fewer socially sensitive and practically effective solutions for problematic interpersonal situations than the HC group. Furthermore, patients performed significantly worse when asked to select the best alternative among a list of presented alternatives for scenarios containing sarcastic remarks and had significantly more problems to interpret sarcastic remarks in difficult interpersonal situations. These specific patterns of impairments should be considered in treatment programs addressing impaired social skills in individuals with AUD.
The baby boomer effect: changing patterns of substance abuse among adults ages 55 and older.
Duncan, David F; Nicholson, Thomas; White, John B; Bradley, Dana Burr; Bonaguro, John
2010-07-01
Between now and 2030, the number of adults aged 65 and older in the United States will almost double, from around 37 million to more than 70 million, an increase from 12% of the U.S. population to almost 20%. It was long held that, with only a few isolated exceptions, substance abuse simply did not exist among this population. In light of the impact of the baby boom generation, this assumption may no longer be valid. The authors examined admissions of persons 55 years and older (n = 918,955) from the Treatment Episode Data Set (1998-2006). Total admissions with a primary drug problem with alcohol have remained relatively stable over this time. Admissions for problems with a primary drug other than alcohol have shown a steady and substantial increase. Clearly, data from the Treatment Episode Data Set indicate a coming wave of older addicts whose primary problem is not alcohol. The authors suspect that this wave is led primarily by the continuing emergence of the baby boomer generation.
Auxiliary drying to prevent pattern collapse in high aspect ratio nanostructures
NASA Astrophysics Data System (ADS)
Liu, Gang; Zhou, Jie; Xiong, Ying; Zhang, Xiaobo; Tian, Yangchao
2011-07-01
Many defects are generated in densely packed high aspect ratio structures during nanofabrication. Pattern collapse is one of the serious problems that may arise, mainly due to the capillary force during drying after the rinsing process. In this paper, a method of auxiliary drying is presented to prevent pattern collapse in high aspect ratio nanostructures by adding an auxiliary substrate as a reinforcing rib to restrict deformation and to balance the capillary force. The principle of the method is presented based on an analysis of pattern collapse. A finite element method is then applied to analyze the deformation of the resist beams caused by the surface tension using the ANSYS software, and the effect of the nanostructure's length-to-width ratio is simulated and analyzed. Finally, the possible range of applications of the proposed method is discussed. Our results show that the aspect ratio may be increased 2.6 times without pattern collapse; furthermore, this method can be widely used in the removal of solvents in micro- and nanofabrication.
Auxiliary drying to prevent pattern collapse in high aspect ratio nanostructures.
Liu, Gang; Zhou, Jie; Xiong, Ying; Zhang, Xiaobo; Tian, Yangchao
2011-07-29
Many defects are generated in densely packed high aspect ratio structures during nanofabrication. Pattern collapse is one of the serious problems that may arise, mainly due to the capillary force during drying after the rinsing process. In this paper, a method of auxiliary drying is presented to prevent pattern collapse in high aspect ratio nanostructures by adding an auxiliary substrate as a reinforcing rib to restrict deformation and to balance the capillary force. The principle of the method is presented based on an analysis of pattern collapse. A finite element method is then applied to analyze the deformation of the resist beams caused by the surface tension using the ANSYS software, and the effect of the nanostructure's length-to-width ratio is simulated and analyzed. Finally, the possible range of applications of the proposed method is discussed. Our results show that the aspect ratio may be increased 2.6 times without pattern collapse; furthermore, this method can be widely used in the removal of solvents in micro- and nanofabrication.
OLED emission zone measurement with high accuracy
NASA Astrophysics Data System (ADS)
Danz, N.; MacCiarnain, R.; Michaelis, D.; Wehlus, T.; Rausch, A. F.; Wächter, C. A.; Reusch, T. C. G.
2013-09-01
Highly efficient state of the art organic light-emitting diodes (OLED) comprise thin emitting layers with thicknesses in the order of 10 nm. The spatial distribution of the photon generation rate, i.e. the profile of the emission zone, inside these layers is of interest for both device efficiency analysis and characterization of charge recombination processes. It can be accessed experimentally by reverse simulation of far-field emission pattern measurements. Such a far-field pattern is the sum of individual emission patterns associated with the corresponding positions inside the active layer. Based on rigorous electromagnetic theory the relation between far-field pattern and emission zone is modeled as a linear problem. This enables a mathematical analysis to be applied to the cases of single and double emitting layers in the OLED stack as well as to pattern measurements in air or inside the substrate. From the results, guidelines for optimum emitter - cathode separation and for selecting the best experimental approach are obtained. Limits for the maximum spatial resolution can be derived.
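Modeling the measurement as a linear problem suggests a regularized least-squares inversion of the kind sketched below; the forward matrix, noise level and emitter-depth profile are toy stand-ins for the rigorously computed dipole emission patterns used in practice.

```python
import numpy as np

def recover_emission_zone(A, measured, reg=1e-3):
    """Solve measured ≈ A @ z for the emission-zone profile z, where column j
    of A is the simulated far-field pattern of an emitter at depth j.
    Tikhonov regularization stabilizes the ill-conditioned inversion."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(reg) * np.eye(n)])
    b_aug = np.concatenate([measured, np.zeros(n)])
    z, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return np.clip(z, 0.0, None)       # photon generation rate is non-negative

angles = np.linspace(0.0, np.pi / 2, 50)
A = np.array([np.cos(angles) ** (1 + 0.3 * j) for j in range(10)]).T  # toy model
true_profile = np.exp(-np.linspace(0, 3, 10))
measured = A @ true_profile + 0.01 * np.random.randn(len(angles))
print(recover_emission_zone(A, measured).round(3))
```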
The Intergenerational Circumstances of Household Food Insecurity and Adversity
Chilton, Mariana; Knowles, Molly; Bloom, Sandra L.
2017-01-01
Household food insecurity is linked with exposure to violence and adversity throughout the life course, suggesting its transfer across generations. Using grounded theory, we analyzed semistructured interviews with 31 mothers reporting household food insecurity where participants described major life events and social relationships. Through the lens of multigenerational interactions, 4 themes emerged: (1) hunger and violence across the generations, (2) disclosure to family and friends, (3) depression and problems with emotional management, and (4) breaking out of intergenerational patterns. After describing these themes and how they relate to reports of food insecurity, we identify opportunities for social services and policy intervention. PMID:28503244
Bao, Yan; von Stosch, Alexandra; Park, Mona; Pöppel, Ernst
2017-01-01
In experimental aesthetics the relationship between the arts and cognitive neuroscience has gained particular interest in recent years. But has cognitive neuroscience indeed something to offer when studying the arts? Here we present a theoretical frame within which the concept of complementarity as a generative or creative principle is proposed; neurocognitive processes are characterized by the duality of complementary activities like bottom-up and top-down control, or logistical functions like temporal control and content functions like perceptions in the neural machinery. On that basis a thought pattern is suggested for aesthetic appreciations and cognitive appraisals in general. This thought pattern is deeply rooted in the history of philosophy and art theory since antiquity; and complementarity also characterizes neural operations as basis for cognitive processes. We then discuss some challenges one is confronted with in experimental aesthetics; in our opinion, one serious problem is the lack of a taxonomy of functions in psychology and neuroscience which is generally accepted. This deficit makes it next to impossible to develop acceptable models which are similar to what has to be modeled. Another problem is the severe language bias in this field of research as knowledge gained in many languages over the ages remains inaccessible to most scientists. Thus, an inspection of research results or theoretical concepts is necessarily too narrow. In spite of these limitations we provide a selective summary of some results and viewpoints with a focus on visual art and its appreciation. It is described how questions of art and aesthetic appreciations using behavioral methods and in particular brain-imaging techniques are analyzed and evaluated focusing on such issues like the representation of artwork or affective experiences. Finally, we emphasize complementarity as a generative principle on a practical level when artists and scientists work directly together which can lead to new insights and broader perspectives on both sides. PMID:28536548
Bao, Yan; von Stosch, Alexandra; Park, Mona; Pöppel, Ernst
2017-01-01
In experimental aesthetics the relationship between the arts and cognitive neuroscience has gained particular interest in recent years. But has cognitive neuroscience indeed something to offer when studying the arts? Here we present a theoretical frame within which the concept of complementarity as a generative or creative principle is proposed; neurocognitive processes are characterized by the duality of complementary activities like bottom-up and top-down control, or logistical functions like temporal control and content functions like perceptions in the neural machinery. On that basis a thought pattern is suggested for aesthetic appreciations and cognitive appraisals in general. This thought pattern is deeply rooted in the history of philosophy and art theory since antiquity; and complementarity also characterizes neural operations as basis for cognitive processes. We then discuss some challenges one is confronted with in experimental aesthetics; in our opinion, one serious problem is the lack of a taxonomy of functions in psychology and neuroscience which is generally accepted. This deficit makes it next to impossible to develop acceptable models which are similar to what has to be modeled. Another problem is the severe language bias in this field of research as knowledge gained in many languages over the ages remains inaccessible to most scientists. Thus, an inspection of research results or theoretical concepts is necessarily too narrow. In spite of these limitations we provide a selective summary of some results and viewpoints with a focus on visual art and its appreciation. It is described how questions of art and aesthetic appreciations using behavioral methods and in particular brain-imaging techniques are analyzed and evaluated focusing on such issues like the representation of artwork or affective experiences. Finally, we emphasize complementarity as a generative principle on a practical level when artists and scientists work directly together which can lead to new insights and broader perspectives on both sides.
Optimizing human activity patterns using global sensitivity analysis.
Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M
2014-12-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
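A direct (unoptimized) implementation of the SampEn statistic used here to quantify schedule regularity is sketched below; m = 2 and r = 0.2 times the standard deviation are common default choices, not values taken from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D series: -ln(A/B), where B counts pairs of
    length-m templates within tolerance r*std (Chebyshev distance) and A does
    the same for length m+1; self-matches are excluded.  Lower values mean a
    more regular series."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        total = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            total += np.sum(dist <= tol) - 1      # exclude the self-match
        return total

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

regular = np.tile([1.0, 2.0, 3.0, 4.0], 50)
irregular = np.random.randn(200)
print(sample_entropy(regular), sample_entropy(irregular))
```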
Optimizing human activity patterns using global sensitivity analysis
Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.
2014-01-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080
Optimizing human activity patterns using global sensitivity analysis
Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...
2013-12-10
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
Opportunities for Fluid Dynamics Research in the Forensic Discipline of Bloodstain Pattern Analysis
NASA Astrophysics Data System (ADS)
Attinger, Daniel; Moore, Craig; Donaldson, Adam; Jafari, Arian; Stone, Howard
2013-11-01
This review [Forensic Science International, vol. 231, pp. 375-396, 2013] highlights research opportunities for fluid dynamics (FD) studies related to the forensic discipline of bloodstain pattern analysis (BPA). The need for better integrating FD and BPA is mentioned in a 2009 report by the US National Research Council, entitled ``Strengthening Forensic Science in the United States: A Path Forward''. BPA aims for practical answers to specific questions of the kind: ``How did a bloodletting incident happen?'' FD, on the other hand, aims to quantitatively describe the transport of fluids and the related causes, with general equations. BPA typically solves the indirect problem of inspecting stains in a crime scene to infer the most probable bloodletting incident that produced these patterns. FD typically defines the initial and boundary conditions of a fluid system and from there describe how the system evolves in time and space, most often in a deterministic manner. We review four topics in BPA with strong connections to FD: the generation of drops, their flight, their impact and the formation of stains. Future research on these topics would deliver new quantitative tools and methods for BPA, and present new multiphase flow problems for FD.
Design Pattern Mining Using Distributed Learning Automata and DNA Sequence Alignment
Esmaeilpour, Mansour; Naderifar, Vahideh; Shukur, Zarina
2014-01-01
Context: Over the last decade, design patterns have been used extensively to generate reusable solutions to frequently encountered problems in software engineering and object-oriented programming. A design pattern is a repeatable software design solution that provides a template for solving various instances of a general problem. Objective: This paper describes a new method for pattern mining that isolates design patterns and the relationships between them, and a related tool, DLA-DNA, covering all implemented patterns and all projects used for evaluation. Based on distributed learning automata (DLA) and deoxyribonucleic acid (DNA) sequence alignment, DLA-DNA achieves acceptable precision and recall compared with the other evaluated tools. Method: The proposed method mines structural design patterns in object-oriented source code and extracts the strong and weak relationships between them, enabling analyzers and programmers to determine the dependency rate of each object, component, and other section of the code for parameter passing and modular programming. The proposed model can detect design patterns, and the strengths of their relationships, better than the other available tools, namely Pinot, PTIDEJ and DPJF. Results: The results demonstrate that, whether the source code is built in a standard or non-standard way with respect to the design patterns, the proposed method performs close to DPJF and better than Pinot and PTIDEJ. The proposed model was tested on several source codes and compared with related models and available tools; the results show that the precision and recall of the proposed method are, on average, 20% and 9.6% higher than Pinot, 27% and 31% higher than PTIDEJ, and 3.3% and 2% higher than DPJF, respectively. Conclusion: The proposed method is organized in two steps: in the first step, elemental design patterns are identified, while in the second step they are composed to recognize actual design patterns. PMID:25243670
NASA Astrophysics Data System (ADS)
Liu, Xiyao; Lou, Jieting; Wang, Yifan; Du, Jingyu; Zou, Beiji; Chen, Yan
2018-03-01
Authentication and copyright identification are two critical security issues for medical images. Although zero-watermarking schemes can provide durable, reliable and distortion-free protection for medical images, the existing zero-watermarking schemes for medical images still face two problems. On one hand, they rarely consider distinguishability for medical images, which is critical because different medical images are sometimes similar to each other. On the other hand, their robustness against geometric attacks, such as cropping, rotation and flipping, is insufficient. In this study, a novel discriminative and robust zero-watermarking (DRZW) scheme is proposed to address these two problems. In DRZW, content-based features of medical images are first extracted based on the completed local binary pattern (CLBP) operator to ensure distinguishability and robustness, especially against geometric attacks. Then, master shares and ownership shares are generated from the content-based features and the watermark according to (2,2) visual cryptography. Finally, the ownership shares are stored for authentication and copyright identification. For queried medical images, their content-based features are extracted and master shares are generated. Their watermarks for authentication and copyright identification are recovered by stacking the generated master shares and the stored ownership shares. 200 different medical images of 5 types were collected as the testing data, and our experimental results demonstrate that DRZW ensures both the accuracy and reliability of authentication and copyright identification. When fixing the false positive rate to 1.00%, the average false negative rate using DRZW is only 1.75% under 20 common attacks with different parameters.
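The sharing step can be pictured with the XOR-based stand-in below: a master share derived from the content-based feature bits combines with a stored ownership share to reveal the watermark. Actual (2,2) visual cryptography, as used in DRZW, operates on expanded pixel blocks, so this is only an analogy for how zero-watermarking avoids modifying the host image.

```python
import numpy as np

def make_shares(feature_bits, watermark_bits):
    """Master share = feature bits; ownership share = master XOR watermark.
    Nothing is embedded in the medical image itself."""
    master = feature_bits.astype(np.uint8)
    ownership = np.bitwise_xor(master, watermark_bits.astype(np.uint8))
    return master, ownership

def recover_watermark(queried_feature_bits, ownership):
    """Features re-extracted from the queried image are combined (stacked)
    with the stored ownership share to recover the watermark."""
    return np.bitwise_xor(queried_feature_bits.astype(np.uint8), ownership)

rng = np.random.default_rng(1)
features = rng.integers(0, 2, 64)          # e.g. thresholded CLBP statistics
watermark = rng.integers(0, 2, 64)
master, ownership = make_shares(features, watermark)
print(np.array_equal(recover_watermark(features, ownership), watermark))  # True
```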
ERIC Educational Resources Information Center
Kiliç, Çigdem
2017-01-01
In the current study, the pattern conversion ability of 25 pre-service mathematics teachers (producing figural patterns following number patterns) was investigated. During the study, participants were asked to generate figural patterns based on those number patterns. The results of the study indicate that many participants could generate different…
Predicate Oriented Pattern Analysis for Biomedical Knowledge Discovery
Shen, Feichen; Liu, Hongfang; Sohn, Sunghwan; Larson, David W.; Lee, Yugyung
2017-01-01
In the current biomedical data movement, numerous efforts have been made to convert and normalize a large amount of traditional structured and unstructured data (e.g., EHRs, reports) into semi-structured data (e.g., RDF, OWL). With the increasing amount of semi-structured data coming into the biomedical community, data integration and knowledge discovery from heterogeneous domains have become an important research problem. At the application level, detection of related concepts among medical ontologies is an important goal of life science research. It is even more crucial to figure out how different concepts are related within a single ontology or across multiple ontologies by analysing the predicates in different knowledge bases. However, in today's world of information explosion, it is extremely difficult for biomedical researchers to find existing or potential predicates with which to link concepts across domains without any support from schema pattern analysis. Therefore, there is a need for a mechanism that performs predicate-oriented pattern analysis to partition heterogeneous ontologies into smaller, closely related topics and generates queries to discover cross-domain knowledge from each topic. In this paper, we present such a model: it analyses predicate-oriented patterns based on their close relationships and generates a similarity matrix. Based on this similarity matrix, we apply an innovative unsupervised learning algorithm to partition large data sets into smaller and closer topics and generate meaningful queries to fully discover knowledge over a set of interlinked data sources. We have implemented a prototype system named BmQGen and evaluate the proposed model with a colorectal surgical cohort from the Mayo Clinic. PMID:28983419
Li, Jun; Tibshirani, Robert
2015-01-01
We discuss the identification of features that are associated with an outcome in RNA-Sequencing (RNA-Seq) and other sequencing-based comparative genomic experiments. RNA-Seq data takes the form of counts, so models based on the normal distribution are generally unsuitable. The problem is especially challenging because different sequencing experiments may generate quite different total numbers of reads, or ‘sequencing depths’. Existing methods for this problem are based on Poisson or negative binomial models: they are useful but can be heavily influenced by ‘outliers’ in the data. We introduce a simple, nonparametric method with resampling to account for the different sequencing depths. The new method is more robust than parametric methods. It can be applied to data with quantitative, survival, two-class or multiple-class outcomes. We compare our proposed method to Poisson and negative binomial-based methods in simulated and real data sets, and find that our method discovers more consistent patterns than competing methods. PMID:22127579
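One generic way to account for unequal sequencing depths by resampling is to binomially thin every sample's counts down to the smallest library size before applying a rank-based test; the sketch below shows only that generic idea and is not claimed to be the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def resample_to_common_depth(counts):
    """Binomially down-sample each sample (column) of a genes-by-samples count
    matrix to the smallest total depth, making samples comparable for a
    nonparametric test."""
    counts = np.asarray(counts)
    depths = counts.sum(axis=0)
    target = depths.min()
    thinned = np.empty_like(counts)
    for j in range(counts.shape[1]):
        thinned[:, j] = rng.binomial(counts[:, j], target / depths[j])
    return thinned

demo = np.array([[100, 300], [50, 160], [10, 40]])   # toy genes x samples
print(resample_to_common_depth(demo))
```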
Does Risk Aversion Affect Transmission and Generation Planning? A Western North America Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munoz, Francisco; van der Weijde, Adriaan Hendrik; Hobbs, Benjamin F.
Here, we investigate the effects of risk aversion on optimal transmission and generation expansion planning in a competitive and complete market. To do so, we formulate a stochastic model that minimizes a weighted average of expected transmission and generation costs and their conditional value at risk (CVaR). We also show that the solution of this optimization problem is equivalent to the solution of a perfectly competitive risk-averse Stackelberg equilibrium, in which a risk-averse transmission planner maximizes welfare after which risk-averse generators maximize profits. Furthermore, this model is then applied to a 240-bus representation of the Western Electricity Coordinating Council, in which we examine the impact of risk aversion on levels and spatial patterns of generation and transmission investment. Although the impact of risk aversion remains small at an aggregate level, state-level impacts on generation and transmission investment can be significant, which emphasizes the importance of explicit consideration of risk aversion in planning models.
Does Risk Aversion Affect Transmission and Generation Planning? A Western North America Case Study
Munoz, Francisco; van der Weijde, Adriaan Hendrik; Hobbs, Benjamin F.; ...
2017-04-07
Here, we investigate the effects of risk aversion on optimal transmission and generation expansion planning in a competitive and complete market. To do so, we formulate a stochastic model that minimizes a weighted average of expected transmission and generation costs and their conditional value at risk (CVaR). We also show that the solution of this optimization problem is equivalent to the solution of a perfectly competitive risk-averse Stackelberg equilibrium, in which a risk-averse transmission planner maximizes welfare after which risk-averse generators maximize profits. Furthermore, this model is then applied to a 240-bus representation of the Western Electricity Coordinating Council, in which we examine the impact of risk aversion on levels and spatial patterns of generation and transmission investment. Although the impact of risk aversion remains small at an aggregate level, state-level impacts on generation and transmission investment can be significant, which emphasizes the importance of explicit consideration of risk aversion in planning models.
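The CVaR term in such planning models is usually written in the Rockafellar-Uryasev form, which keeps the stochastic program tractable; for a random cost C and confidence level α,

```latex
\mathrm{CVaR}_{\alpha}(C)
  = \min_{\eta \in \mathbb{R}}
    \left\{ \eta + \frac{1}{1-\alpha}\,
            \mathbb{E}\!\left[(C - \eta)^{+}\right] \right\}.
```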
An ultra-sparse code underlies the generation of neural sequences in a songbird
NASA Astrophysics Data System (ADS)
Hahnloser, Richard H. R.; Kozhevnikov, Alexay A.; Fee, Michale S.
2002-09-01
Sequences of motor activity are encoded in many vertebrate brains by complex spatio-temporal patterns of neural activity; however, the neural circuit mechanisms underlying the generation of these pre-motor patterns are poorly understood. In songbirds, one prominent site of pre-motor activity is the forebrain robust nucleus of the archistriatum (RA), which generates stereotyped sequences of spike bursts during song and recapitulates these sequences during sleep. We show that the stereotyped sequences in RA are driven from nucleus HVC (high vocal centre), the principal pre-motor input to RA. Recordings of identified HVC neurons in sleeping and singing birds show that individual HVC neurons projecting onto RA neurons produce bursts sparsely, at a single, precise time during the RA sequence. These HVC neurons burst sequentially with respect to one another. We suggest that at each time in the RA sequence, the ensemble of active RA neurons is driven by a subpopulation of RA-projecting HVC neurons that is active only at that time. As a population, these HVC neurons may form an explicit representation of time in the sequence. Such a sparse representation, a temporal analogue of the `grandmother cell' concept for object recognition, eliminates the problem of temporal interference during sequence generation and learning attributed to more distributed representations.
Where Lies the Harm in Lottery Gambling? A Portrait of Gambling Practices and Associated Problems.
Costes, Jean-Michel; Kairouz, Sylvia; Monson, Eva; Eroukmanoff, Vincent
2018-03-13
Lotteries are one of the most prevalent forms of gambling and generate substantial state revenues. They are also argued to be one of the least harmful forms of gambling. This paper is one of the first to examine exclusive lottery gamblers and compares their gambling patterns and problems as well other associated risky behaviours to those who are not exclusive lottery gamblers. Data were derived from two large surveys conducted with representative adult samples in France (n = 15,635) and Québec (n = 23,896). Participants were separated into two groups: exclusive lottery gamblers (ELGs) and non-exclusive lottery gamblers. Using multivariate analysis, study results reveal that ELGs, who represent two thirds of gamblers, generally exhibit less intensive gambling patterns and are less likely to report other risky behaviours. However, harms associated with moderate risk and problem gambling are found to be concentrated in specific subpopulations for both groups, primarily males, older individuals, and those who report lower income and education level. Given widespread participation in lotteries and concentration of harm within specific subgroups, these findings point to the need for prevention efforts despite the lower levels of harm associated with lottery gambling.
Simulation of Forward and Inverse X-ray Scattering From Shocked Materials
NASA Astrophysics Data System (ADS)
Barber, John; Marksteiner, Quinn; Barnes, Cris
2012-02-01
The next generation of high-intensity, coherent light sources should generate sufficient brilliance to perform in-situ coherent x-ray diffraction imaging (CXDI) of shocked materials. In this work, we present beginning-to-end simulations of this process. This includes the calculation of the partially-coherent intensity profiles of self-amplified stimulated emission (SASE) x-ray free electron lasers (XFELs), as well as the use of simulated, shocked molecular-dynamics-based samples to predict the evolution of the resulting diffraction patterns. In addition, we will explore the corresponding inverse problem by performing iterative phase retrieval to generate reconstructed images of the simulated sample. The development of these methods in the context of materials under extreme conditions should provide crucial insights into the design and capabilities of shocked in-situ imaging experiments.
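Iterative phase retrieval of the kind mentioned above is often introduced through the error-reduction loop sketched below (practical CXDI reconstructions add hybrid input-output steps, shrink-wrap supports and partial-coherence corrections); the object, support and noise-free magnitudes here are synthetic.

```python
import numpy as np

def error_reduction(measured_mag, support, iters=200):
    """Alternate between enforcing the measured Fourier magnitudes and the
    real-space constraints (real, non-negative, zero outside the support)."""
    obj = np.random.rand(*measured_mag.shape) * support
    for _ in range(iters):
        F = np.fft.fft2(obj)
        F = measured_mag * np.exp(1j * np.angle(F))  # keep measured magnitudes
        obj = np.real(np.fft.ifft2(F))
        obj = np.clip(obj, 0.0, None) * support
    return obj

truth = np.zeros((64, 64)); truth[28:36, 28:36] = 1.0   # synthetic object
support = np.zeros_like(truth); support[20:44, 20:44] = 1.0
measured = np.abs(np.fft.fft2(truth))
recon = error_reduction(measured, support)
# data fidelity: mismatch between reconstructed and measured magnitudes
print(float(np.abs(np.abs(np.fft.fft2(recon)) - measured).mean()))
```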
Introduction to the Focus Issue: Chemo-Hydrodynamic Patterns and Instabilities
NASA Astrophysics Data System (ADS)
De Wit, A.; Eckert, K.; Kalliadasis, S.
2012-09-01
Pattern forming instabilities are often encountered in a wide variety of natural phenomena and technological applications, from self-organization in biological and chemical systems to oceanic or atmospheric circulation and heat and mass transport processes in engineering systems. Spatio-temporal structures are ubiquitous in hydrodynamics where numerous different convective instabilities generate pattern formation and complex spatiotemporal dynamics, which have been much studied both theoretically and experimentally. In parallel, reaction-diffusion processes provide another large family of pattern forming instabilities and spatio-temporal structures which have been analyzed for several decades. At the intersection of these two fields, "chemo-hydrodynamic patterns and instabilities" resulting from the coupling of hydrodynamic and reaction-diffusion processes have been less studied. The exploration of the new instability and symmetry-breaking scenarios emerging from the interplay between chemical reactions, diffusion and convective motions is a burgeoning field in which numerous exciting problems have emerged during the last few years. These problems range from fingering instabilities of chemical fronts and reactive fluid-fluid interfaces to the dynamics of reaction-diffusion systems in the presence of chaotic mixing. The questions to be addressed are at the interface of hydrodynamics, chemistry, engineering or environmental sciences to name a few and, as a consequence, they have started to draw the attention of several communities including both the nonlinear chemical dynamics and hydrodynamics communities. The collection of papers gathered in this Focus Issue sheds new light on a wide range of phenomena in the general area of chemo-hydrodynamic patterns and instabilities. It also serves as an overview of the current research and state-of-the-art in the field.
Non-contact finger vein acquisition system using NIR laser
NASA Astrophysics Data System (ADS)
Kim, Jiman; Kong, Hyoun-Joong; Park, Sangyun; Noh, SeungWoo; Lee, Seung-Rae; Kim, Taejeong; Kim, Hee Chan
2009-02-01
Authentication using finger vein patterns has substantial advantages over other biometrics: because human vein patterns are hidden beneath the skin and tissue, the vein structure is hard to forge. However, conventional systems using an NIR LED array have two drawbacks. First, direct contact with the LED array raises sanitary problems. Second, because of the discreteness of the LEDs, the illumination is non-uniform. We propose a non-contact finger vein acquisition system using an NIR laser and a laser line generator lens, which spreads the focused laser light into an evenly distributed line. The line laser is aimed along the finger longitudinally, and an NIR camera is used for image acquisition. We collected 200 index finger vein images from 20 candidates, and the same finger vein pattern extraction algorithm was used to evaluate the two sets of images. Images acquired with the proposed non-contact system do not show the non-uniform illumination seen with the conventional system, and the matching results are comparable. The developed non-contact finger vein acquisition system can prevent potential cross-contamination of skin diseases, produces uniformly illuminated images, and, with the benefit of being non-contact, shows almost equivalent performance to the conventional system.
Everyday problem solving across the adult life span: solution diversity and efficacy.
Mienaltowski, Andrew
2011-10-01
Everyday problem solving involves examining the solutions that individuals generate when faced with problems that take place in their everyday experiences. Problems can range from medication adherence and meal preparation to disagreeing with a physician over a recommended medical procedure or compromising with extended family members over where to host Thanksgiving dinner. Across the life span, research has demonstrated divergent patterns of change in performance based on the type of everyday problems used as well as based on the way that problem-solving efficacy is operationally defined. Advancing age is associated with worsening performance when tasks involve single-solution or fluency-based definitions of effectiveness. However, when efficacy is defined in terms of the diversity of strategies used, as well as by the social and emotional impact of solution choice on the individual, performance is remarkably stable and sometimes even improves in the latter half of life. This article discusses how both of these approaches to everyday problem solving inform research on the influence that aging has on everyday functioning. © 2011 New York Academy of Sciences.
Investigating Anomalies in the Output Generated by the Weather Research and Forecasting (WRF) Model
NASA Astrophysics Data System (ADS)
Decicco, Nicholas; Trout, Joseph; Manson, J. Russell; Rios, Manny; King, David
2015-04-01
The Weather Research and Forecasting (WRF) model is an advanced mesoscale numerical weather prediction (NWP) model comprised of two numerical cores, the Numerical Mesoscale Modeling (NMM) core and the Advanced Research WRF (ARW) core. An investigation was done to determine the source of erroneous output generated by the NMM core. Of particular concern were the appearance of zero values at regularly spaced grid cells in the output fields and the NMM core's evident (mis)use of static geographic information at a resolution lower than that of the nesting level for which the core performs its computation. A brief discussion of the high-level modular architecture of the model is presented, as well as the methods utilized to identify the cause of these problems. Presented here are the initial results from a research grant, ``A Pilot Project to Investigate Wake Vortex Patterns and Weather Patterns at the Atlantic City Airport by the Richard Stockton College of NJ and the FAA''.
Self-organized adaptation of a simple neural circuit enables complex robot behaviour
NASA Astrophysics Data System (ADS)
Steingrube, Silke; Timme, Marc; Wörgötter, Florentin; Manoonpong, Poramate
2010-03-01
Controlling sensori-motor systems in higher animals or complex robots is a challenging combinatorial problem, because many sensory signals need to be simultaneously coordinated into a broad behavioural spectrum. To rapidly interact with the environment, this control needs to be fast and adaptive. Present robotic solutions operate with limited autonomy and are mostly restricted to few behavioural patterns. Here we introduce chaos control as a new strategy to generate complex behaviour of an autonomous robot. In the presented system, 18 sensors drive 18 motors by means of a simple neural control circuit, thereby generating 11 basic behavioural patterns (for example, orienting, taxis, self-protection and various gaits) and their combinations. The control signal quickly and reversibly adapts to new situations and also enables learning and synaptic long-term storage of behaviourally useful motor responses. Thus, such neural control provides a powerful yet simple way to self-organize versatile behaviours in autonomous agents with many degrees of freedom.
Option generation in the treatment of unstable patients: An experienced-novice comparison study.
Whyte, James; Pickett-Hauber, Roxanne; Whyte, Maria D
2016-09-01
There is a dearth of studies that quantitatively measure nurses' appreciation of stimuli and the subsequent generation of options in practice environments. The purpose of this paper was to examine nurses' ability to solve problems while quantifying the stimuli upon which they focus during patient care activities. The study used a quantitative descriptive method that gathered performance data from a simulated task environment using multi-angle video and audio. The videos were coded, and transcripts of all the actions that occurred in the scenario and of the participants' verbal reports were compiled. The results revealed a pattern of superiority in the experienced exemplar group. Novice actions were characterized by difficulty in following common protocols, inconsistencies in their evaluative approaches, and a pattern of omissions of key actions. The study provides support for deliberate-practice-based programs designed to facilitate higher-level performance in novices. © 2016 John Wiley & Sons Australia, Ltd.
An open-access CMIP5 pattern library for temperature and precipitation: description and methodology
NASA Astrophysics Data System (ADS)
Lynch, Cary; Hartin, Corinne; Bond-Lamberty, Ben; Kravitz, Ben
2017-05-01
Pattern scaling is used to efficiently emulate general circulation models and explore uncertainty in climate projections under multiple forcing scenarios. Pattern scaling methods assume that local climate changes scale with a global mean temperature increase, allowing for spatial patterns to be generated for multiple models for any future emission scenario. For uncertainty quantification and probabilistic statistical analysis, a library of patterns with descriptive statistics for each file would be beneficial, but such a library does not presently exist. Of the possible techniques used to generate patterns, the two most prominent are the delta and least squares regression methods. We explore the differences and statistical significance between patterns generated by each method and assess performance of the generated patterns across methods and scenarios. Differences in patterns across seasons between methods and epochs were largest in high latitudes (60-90° N/S). Bias and mean errors between modeled and pattern-predicted output from the linear regression method were smaller than patterns generated by the delta method. Across scenarios, differences in the linear regression method patterns were more statistically significant, especially at high latitudes. We found that pattern generation methodologies were able to approximate the forced signal of change to within ≤ 0.5 °C, but the choice of pattern generation methodology for pattern scaling purposes should be informed by user goals and criteria. This paper describes our library of least squares regression patterns from all CMIP5 models for temperature and precipitation on an annual and sub-annual basis, along with the code used to generate these patterns. The dataset and netCDF data generation code are available at doi:10.5281/zenodo.495632.
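As a rough sketch of the two pattern-generation techniques compared here, the snippet below computes a per-grid-cell least squares regression pattern (local change per degree of global mean warming) and a simple delta (epoch-difference) pattern. The synthetic field, grid size and epoch windows are assumptions for illustration, not CMIP5 data or the library's exact processing.

```python
import numpy as np

def regression_pattern(local_anomaly, global_mean_t):
    """Per-grid-cell least squares slope of local change vs. global mean warming.

    local_anomaly : array of shape (time, lat, lon)
    global_mean_t : array of shape (time,)
    Returns the pattern (slope map) in units of local change per degree of
    global mean warming.
    """
    t = global_mean_t - global_mean_t.mean()
    y = local_anomaly - local_anomaly.mean(axis=0)
    # slope = sum(t * y) / sum(t^2), computed for every grid cell at once
    return np.tensordot(t, y, axes=(0, 0)) / np.sum(t ** 2)

def delta_pattern(local_anomaly, global_mean_t,
                  base=slice(0, 20), future=slice(-20, None)):
    """'Delta' pattern: epoch-difference map normalised by global mean warming."""
    dT = global_mean_t[future].mean() - global_mean_t[base].mean()
    return (local_anomaly[future].mean(axis=0)
            - local_anomaly[base].mean(axis=0)) / dT

# Synthetic example: 100 years on a 10 x 20 grid with a prescribed true pattern.
rng = np.random.default_rng(1)
years = 100
true = 1.5 * rng.random((10, 20))
gmt = np.linspace(0.0, 3.0, years) + 0.1 * rng.standard_normal(years)
field = gmt[:, None, None] * true + 0.2 * rng.standard_normal((years, 10, 20))
print(np.abs(regression_pattern(field, gmt) - true).max())
```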
Calculation of periodic flows in a continuously stratified fluid
NASA Astrophysics Data System (ADS)
Vasiliev, A.
2012-04-01
An analytic theory of the disturbances generated by an oscillating compact source in a viscous, continuously stratified fluid was constructed. An exact solution of the internal wave generation problem was obtained, taking diffusivity effects into account. The analysis is based on the fundamental equations of incompressible flow. The linearized problem of periodic flows in a continuously stratified fluid, generated by an oscillating part of an inclined plane, was solved by methods of singular perturbation theory. A rectangle or disc placed on a sloping plane and oscillating linearly in an arbitrary direction was selected as the source of disturbances. The solutions include components that are regularly perturbed in the dissipative parameters, describing internal waves, and a family of singularly perturbed functions. One of the singular components has an analogue in a homogeneous fluid, namely the periodic Stokes flow; its thickness is defined by a universal microscale depending on the kinematic viscosity coefficient and the buoyancy frequency, with a factor depending on the wave slope. The other singularly perturbed functions are specific to stratified flows; their thicknesses are defined by the diffusion coefficient, the kinematic viscosity and an additional factor depending on the geometry of the problem. Fields of fluid density, velocity, vorticity, pressure, energy density and flux, as well as the forces acting on the source, were calculated for different types of sources. It is shown that the most effective source of waves is the bi-piston. The complete 3D problem reduces, in various limiting cases, to the 2D problem for a source in a stratified or homogeneous fluid and to the Stokes problem for an oscillating infinite plane. The case of the "critical" angle, in which the slope of the emitting surface equals the slope of the wave cone, requires separate investigation; in this case the number of singular components is preserved. Patterns of velocity and density fields were constructed and analyzed by methods of computational mathematics. The singular components of the solution affect the flow pattern of the inhomogeneous stratified fluid not only near the source of the waves but also at large distances. Analytical calculations of the structure of the wave beams agree with laboratory experiments; some deviations at large distances from the source arise from the contribution of the background wave field associated with seiches in the laboratory tank. In a number of experiments, vortices with closed contours were observed at some distance from the disc. The work was supported by the Ministry of Education and Science of the Russian Federation (Goscontract No. 16.518.11.7059); experiments were performed on the USU "HPC IPMec RAS" facility.
Using economy of means to evolve transition rules within 2D cellular automata.
Ripps, David L
2010-01-01
Running a cellular automaton (CA) on a rectangular lattice is a time-honored method for studying artificial life on a digital computer. Commonly, the researcher wishes to investigate some specific or general mode of behavior, say, the ability of a coherent pattern of points to glide within the lattice, or to generate copies of itself. This technique has a problem: how to design the transition table, the set of distinct rules that specify the next content of a cell from its current content and that of its near neighbors. Often the table is painstakingly designed manually, rule by rule. The problem is exacerbated by the potentially vast number of individual rules that need to be specified to cover all combinations of center and neighbors when there are several symbols in the alphabet of the CA. In this article a method is presented to have the set of rules evolve automatically while running the CA. The transition table is initially empty, with rules being added as the need arises. A novel principle drives the evolution: maximum economy of means, that is, maximizing the reuse of rules introduced on previous cycles. This method may not be a panacea applicable to all CA studies. Nevertheless, it is sufficiently potent to evolve sets of rules and associated patterns of points that glide (periodically regenerate themselves at another location) and to generate gliding "children" that then "mate" by collision.
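A loose sketch of the lazily grown transition table idea follows. The reuse heuristic here (fall back to the most common output symbol among existing rules) is our own simplification standing in for the article's economy-of-means criterion, and the Moore neighbourhood, alphabet and lattice size are illustrative.

```python
import random

def step(grid, rules, alphabet, rng):
    """One synchronous CA update; rules are created lazily for unseen
    (center, neighborhood) keys. This toy version reuses the most common
    output symbol among existing rules when a new rule is needed, falling
    back to a random choice for the very first rule."""
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            nbrs = tuple(grid[(i + di) % h][(j + dj) % w]
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)
                         if (di, dj) != (0, 0))
            key = (grid[i][j], nbrs)
            if key not in rules:
                if rules:
                    outputs = list(rules.values())
                    rules[key] = max(set(outputs), key=outputs.count)
                else:
                    rules[key] = rng.choice(alphabet)
            new[i][j] = rules[key]
    return new

rng = random.Random(0)
alphabet = [0, 1, 2]
grid = [[rng.choice(alphabet) for _ in range(16)] for _ in range(16)]
rules = {}
for _ in range(10):
    grid = step(grid, rules, alphabet, rng)
print("rules introduced:", len(rules))
```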
From Modelling to Execution of Enterprise Integration Scenarios: The GENIUS Tool
NASA Astrophysics Data System (ADS)
Scheibler, Thorsten; Leymann, Frank
One of the predominant problems IT companies are facing today is Enterprise Application Integration (EAI). Most of the infrastructures built to tackle integration issues are proprietary because no standards exist for how to model, develop, and actually execute integration scenarios. EAI patterns are gaining importance for non-technical business users as a means to ease and harmonize the development of EAI scenarios. These patterns describe recurring EAI challenges and propose possible solutions in an abstract way; one can therefore use them to describe enterprise architectures in a technology-neutral manner. However, patterns are documentation only, used by developers and systems architects to decide how to implement an integration scenario manually. Thus, patterns are not intended to stand for artefacts that will immediately be executed. This paper presents a tool supporting a method by which EAI patterns can be used to generate executable artefacts for various target platforms automatically, using a model-driven development approach, hence turning patterns into something executable. To this end, we introduce a continuous tool chain beginning at the design phase and ending in the execution of an integration solution in a completely automatic manner. For evaluation purposes we introduce a scenario demonstrating how the tool is utilized for modelling and actually executing an integration scenario.
Extended generalized recurrence plot quantification of complex circular patterns
NASA Astrophysics Data System (ADS)
Riedl, Maik; Marwan, Norbert; Kurths, Jürgen
2017-03-01
The generalized recurrence plot is a modern tool for the quantification of complex spatial patterns. Its applications span the analysis of trabecular bone structures, Turing patterns, turbulent spatial plankton patterns, and fractals. Determinism is a central measure in this framework, quantifying the level of regularity of spatial structures. We show by basic examples of fully regular patterns of different symmetries that this measure underestimates the orderliness of circular patterns resulting from rotational symmetries. We overcome this crucial problem by checking additional structural elements of the generalized recurrence plot, which is demonstrated with the examples. Furthermore, we show the potential of the extended determinism measure by applying it to more irregular circular patterns which are generated by the complex Ginzburg-Landau equation and which can often be observed in real spatially extended dynamical systems. In this way, we are able to reconstruct the main separations of the system's parameter space by analyzing single snapshots of the real part only, in contrast to the use of the original quantity. This ability of the proposed method also promises an improved description of other systems with complicated spatio-temporal dynamics typically occurring in fluid dynamics, climatology, biology, ecology, social sciences, etc.
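For intuition, the snippet below shows the one-dimensional analogue of the quantities involved: a thresholded recurrence matrix and the determinism measure (the fraction of recurrence points lying on diagonal lines of at least a minimum length). The generalized, spatial version used in the paper and the proposed extension for rotational symmetries go beyond this sketch; the threshold and line-length values are illustrative.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Thresholded pairwise-distance (recurrence) matrix of a 1-D series."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def determinism(R, lmin=2):
    """Fraction of recurrence points lying on diagonal lines of length >= lmin,
    excluding the main diagonal (line of identity)."""
    n = R.shape[0]
    lengths = []
    for k in range(-(n - 1), n):
        if k == 0:
            continue
        run = 0
        for v in np.diagonal(R, k):
            if v:
                run += 1
            elif run:
                lengths.append(run)
                run = 0
        if run:
            lengths.append(run)
    total = sum(lengths)
    return sum(l for l in lengths if l >= lmin) / total if total else 0.0

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 500)
periodic, noise = np.sin(t), rng.standard_normal(500)
print(determinism(recurrence_matrix(periodic, 0.1)),   # close to 1: regular
      determinism(recurrence_matrix(noise, 0.1)))      # much lower: irregular
```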
An open-access CMIP5 pattern library for temperature and precipitation: Description and methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lynch, Cary D.; Hartin, Corinne A.; Bond-Lamberty, Benjamin
Pattern scaling is used to efficiently emulate general circulation models and explore uncertainty in climate projections under multiple forcing scenarios. Pattern scaling methods assume that local climate changes scale with a global mean temperature increase, allowing for spatial patterns to be generated for multiple models for any future emission scenario. For uncertainty quantification and probabilistic statistical analysis, a library of patterns with descriptive statistics for each file would be beneficial, but such a library does not presently exist. Of the possible techniques used to generate patterns, the two most prominent are the delta and least squares regression methods. We explore the differences and statistical significance between patterns generated by each method and assess performance of the generated patterns across methods and scenarios. Differences in patterns across seasons between methods and epochs were largest in high latitudes (60-90° N/S). Bias and mean errors between modeled and pattern-predicted output from the linear regression method were smaller than patterns generated by the delta method. Across scenarios, differences in the linear regression method patterns were more statistically significant, especially at high latitudes. We found that pattern generation methodologies were able to approximate the forced signal of change to within ≤ 0.5 °C, but the choice of pattern generation methodology for pattern scaling purposes should be informed by user goals and criteria. As a result, this paper describes our library of least squares regression patterns from all CMIP5 models for temperature and precipitation on an annual and sub-annual basis, along with the code used to generate these patterns.
NASA Astrophysics Data System (ADS)
Tíjaro Rojas, Omar J.; Torres Moreno, Yezid; Rhodes, William T.
2017-06-01
Different theories, including Kolmogorov's, have been used to explain and model physical phenomena such as vertical atmospheric turbulence. For horizontal paths, many questions remain, owing to weather-related problems and the consequences they generate. To emulate some environmental conditions, we built an Optical Turbulence Generator (OTG) in which spatial, humidity and temperature measurements were captured simultaneously by means of optical synchronization. The instrumentation was built from microcontrollers using digital modules such as analog-to-digital converters (ADCs) and communication protocols such as SPI. To measure the optical signal, we used a photomultiplier tube (PMT) that captured the intensity of fringes shifted at a known frequency. The outcomes show temporal shift and phase drift of dependent samples (in the time domain) that correspond to the frozen turbulence described by Taylor's theory. The parameters studied in the temporal patterns were C2n, scintillation and the inner scale, and their relationship with the associated physical variables was analyzed. These patterns were obtained from a Young interferometer at laboratory-room scale. In the future, we hope these studies will allow us to implement an experiment to characterize atmospheric turbulence over a long horizontal path in the equatorial weather zone.
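As a small illustration of the scintillation parameter mentioned above, the snippet below computes the scintillation index from an intensity record and inverts the weak-fluctuation, plane-wave Rytov relation to obtain an order-of-magnitude C2n. The log-normal intensity record, wavelength and path length are placeholders, not data from the OTG.

```python
import numpy as np

def scintillation_index(intensity):
    """Normalised intensity variance: sigma_I^2 = <I^2> / <I>^2 - 1."""
    I = np.asarray(intensity, dtype=float)
    return np.mean(I ** 2) / np.mean(I) ** 2 - 1.0

def cn2_from_rytov(sigma2, wavelength, path_length):
    """Invert the weak-fluctuation, plane-wave Rytov relation
    sigma_R^2 = 1.23 * Cn2 * k**(7/6) * L**(11/6)."""
    k = 2 * np.pi / wavelength
    return sigma2 / (1.23 * k ** (7 / 6) * path_length ** (11 / 6))

# Placeholder PMT record: weak-turbulence intensity is often modelled as log-normal.
rng = np.random.default_rng(2)
intensity = np.exp(rng.normal(0.0, 0.15, 10_000))
s2 = scintillation_index(intensity)
print(s2, cn2_from_rytov(s2, wavelength=633e-9, path_length=100.0))
```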
Peck, Stephen C.; Vida, Mina; Eccles, Jacquelynne S.
2015-01-01
Aims: Use pattern-centered methods to examine how adolescents' alcohol use and sports activities are related both to childhood sport and problem behavior and to heavy drinking in early adulthood. Design: The data used in this study come from four waves of the Michigan Study of Adolescent Life Transitions (MSALT) that began in 1983, when participants were approximately age 12, and continued into early adulthood, when participants were approximately age 28. Participants: Sixty per cent of the approximately 1000 MSALT youth living in south-eastern Michigan were females and 97% were European American. Approximately 28% of one or both parents held at least a college degree, and 45% held a high school diploma or lower. Findings: Pattern-centered analyses revealed that the relation between adolescent sport activity and age 28 heavy alcohol use obtained primarily for sport participants who were also using more than the average amount of alcohol and other drugs at age 18. Similarly, children who were characterized by relatively high levels of sport participation, aggression and other problem behavior at age 12 were more likely than expected by chance to become sport participants who used more than the average amount of alcohol and other drugs at age 18. Conclusions: The results indicate that childhood problem behavior and adolescent sport participation can, but do not necessarily, presage heavy drinking in adulthood and that pattern-centered analytical techniques are useful for revealing such theoretically generated predictions. PMID:18426541
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina P.; Costa, Lino
2012-09-01
In this paper, a study based on sensitivity analysis is performed for a gait multi-objective optimization system that combines bio-inspired Central Pattern Generators (CPGs) and a multi-objective evolutionary algorithm based on NSGA-II. In this system, CPGs are modeled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. In order to optimize the walking gait, a multi-objective problem with three conflicting objectives is formulated: maximization of the velocity, the wide stability margin and the behavioral diversity. The experimental results highlight the effectiveness of this multi-objective approach and the importance of the objectives in finding different walking gait solutions for the quadruped robot.
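The multi-objective selection step can be sketched independently of the CPG dynamics: given three objective values for a set of candidate gaits, keep the non-dominated (Pareto-optimal) ones. The toy objective functions and random parameter sampling below are placeholders; the paper couples real CPG simulations with NSGA-II rather than this random search.

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated rows, assuming every objective is maximised."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # Rows that are at least as good everywhere and strictly better somewhere.
        dominates_i = (np.all(objectives >= objectives[i], axis=1)
                       & np.any(objectives > objectives[i], axis=1))
        if dominates_i.any():
            keep[i] = False
    return np.where(keep)[0]

# Toy stand-ins for the three gait objectives evaluated on random CPG parameters.
rng = np.random.default_rng(3)
params = rng.uniform(0, 1, size=(200, 4))      # e.g. amplitudes, frequency, phase lags
velocity = params[:, 0] * (1 - params[:, 1])   # hypothetical simulator outputs
stability = 1 - np.abs(params[:, 2] - 0.5)
diversity = params[:, 3]
objs = np.column_stack([velocity, stability, diversity])
front = pareto_front(objs)
print(f"{front.size} non-dominated gait candidates out of {len(objs)}")
```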
Crooks, Noelle M.; Alibali, Martha W.
2013-01-01
This study investigated whether activating elements of prior knowledge can influence how problem solvers encode and solve simple mathematical equivalence problems (e.g., 3 + 4 + 5 = 3 + __). Past work has shown that such problems are difficult for elementary school students (McNeil and Alibali, 2000). One possible reason is that children's experiences in math classes may encourage them to think about equations in ways that are ultimately detrimental. Specifically, children learn a set of patterns that are potentially problematic (McNeil and Alibali, 2005a): the perceptual pattern that all equations follow an “operations = answer” format, the conceptual pattern that the equal sign means “calculate the total”, and the procedural pattern that the correct way to solve an equation is to perform all of the given operations on all of the given numbers. Upon viewing an equivalence problem, knowledge of these patterns may be reactivated, leading to incorrect problem solving. We hypothesized that these patterns may negatively affect problem solving by influencing what people encode about a problem. To test this hypothesis in children would require strengthening their misconceptions, and this could be detrimental to their mathematical development. Therefore, we tested this hypothesis in undergraduate participants. Participants completed either control tasks or tasks that activated their knowledge of the three patterns, and were then asked to reconstruct and solve a set of equivalence problems. Participants in the knowledge activation condition encoded the problems less well than control participants. They also made more errors in solving the problems, and their errors resembled the errors children make when solving equivalence problems. Moreover, encoding performance mediated the effect of knowledge activation on equivalence problem solving. Thus, one way in which experience may affect equivalence problem solving is by influencing what students encode about the equations. PMID:24324454
Path planning and energy management of solar-powered unmanned ground vehicles
NASA Astrophysics Data System (ADS)
Kaplan, Adam
Many of the applications pertinent to unmanned vehicles, such as environmental research and analysis, communications, and information-surveillance and reconnaissance, benefit from prolonged vehicle operation time. Conventional efforts to increase the operational time of electric-powered unmanned vehicles have traditionally focused on the design of energy-efficient components and the identification of energy efficient search patterns, while little attention has been paid to the vehicle's mission-level path plan and power management. This thesis explores the formulation and generation of integrated motion-plans and power-schedules for solar-panel equipped mobile robots operating under strict energy constraints, which cannot be effectively addressed through conventional motion planning algorithms. Transit problems are considered to design time-optimal paths using both Balkcom-Mason and Pseudo-Dubins curves. Additionally, a more complicated problem to generate mission plans for vehicles which must persistently travel between certain locations, similar to the traveling salesperson problem (TSP), is presented. A comparison between one of the common motion-planning algorithms and experimental results of the prescribed algorithms, made possible by use of a test environment and mobile robot designed and developed specifically for this research, are presented and discussed.
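The persistent-visiting aspect described above resembles a travelling salesperson problem; as a minimal illustration, the snippet below builds a greedy nearest-neighbour tour over a set of waypoints. The waypoint coordinates are invented and the cost is purely geometric, ignoring the solar-power scheduling and the Balkcom-Mason/Dubins path constraints that the thesis integrates.

```python
import numpy as np

def nearest_neighbor_tour(points, start=0):
    """Greedy TSP-style tour over 2-D waypoints; returns visit order and length."""
    pts = np.asarray(points, dtype=float)
    unvisited = set(range(len(pts)))
    order = [start]
    unvisited.remove(start)
    while unvisited:
        last = pts[order[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(pts[j] - last))
        order.append(nxt)
        unvisited.remove(nxt)
    length = sum(np.linalg.norm(pts[order[i + 1]] - pts[order[i]])
                 for i in range(len(order) - 1))
    length += np.linalg.norm(pts[order[0]] - pts[order[-1]])  # close the loop
    return order, length

waypoints = np.random.default_rng(4).uniform(0, 100, size=(8, 2))
print(nearest_neighbor_tour(waypoints))
```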
A comparative analysis of numerical approaches to the mechanics of elastic sheets
NASA Astrophysics Data System (ADS)
Taylor, Michael; Davidovitch, Benny; Qiu, Zhanlong; Bertoldi, Katia
2015-06-01
Numerically simulating deformations in thin elastic sheets is a challenging problem in computational mechanics due to destabilizing compressive stresses that result in wrinkling. Determining the location, structure, and evolution of wrinkles in these problems has important implications in design and is an area of increasing interest in the fields of physics and engineering. In this work, several numerical approaches previously proposed to model equilibrium deformations in thin elastic sheets are compared. These include standard finite element-based static post-buckling approaches as well as a recently proposed method based on dynamic relaxation, which are applied to the problem of an annular sheet with opposed tractions where wrinkling is a key feature. Numerical solutions are compared to analytic predictions of the ground state, enabling a quantitative evaluation of the predictive power of the various methods. Results indicate that static finite element approaches produce local minima that are highly sensitive to initial imperfections, relying on a priori knowledge of the equilibrium wrinkling pattern to generate optimal results. In contrast, dynamic relaxation is much less sensitive to initial imperfections and can generate low-energy solutions for a wide variety of loading conditions without requiring knowledge of the equilibrium solution beforehand.
Depth map generation using a single image sensor with phase masks.
Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki
2016-06-13
Conventional stereo matching systems generate a depth map using two or more digital imaging sensors. Such systems are difficult to use in small cameras because of their high cost and bulky size. In order to solve this problem, this paper presents a stereo matching system using a single image sensor with phase masks for phase-difference auto-focusing. A novel pattern of phase mask array is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw-format sensor output. The proposed method consists of four steps to compute the depth map: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using hierarchical block matching for disparity measurement, and (iv) image matting to fill holes and generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors.
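Step (iii) can be illustrated with a plain (non-hierarchical) sum-of-absolute-differences block matcher. The synthetic image pair, window size and disparity range below are assumptions for illustration, not the paper's hierarchical algorithm or its sensor data.

```python
import numpy as np

def block_matching_disparity(left, right, block=7, max_disp=16):
    """Brute-force SAD block matching: for each pixel in the left image, find the
    horizontal shift in the right image that minimises the sum of absolute
    differences over a small window."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic pair: the right image is the left image shifted by 4 pixels.
rng = np.random.default_rng(5)
left = rng.uniform(0, 255, size=(64, 64))
right = np.roll(left, -4, axis=1)
print(np.median(block_matching_disparity(left, right)[10:-10, 24:-10]))  # ~4
```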
Patterns of Home and School Behavior Problems in Rural and Urban Settings
Hope, Timothy L; Bierman, Karen L
2009-01-01
This study examined the cross-situational patterns of behavior problems shown by children in rural and urban communities at school entry. Behavior problems exhibited in home settings were not expected to vary significantly across urban and rural settings. In contrast, it was anticipated that child behavior at school would be heavily influenced by the increased exposure to aggressive models and deviant peer support experienced by children in urban as compared to rural schools, leading to higher rates of school conduct problems for children in urban settings. Statistical comparisons of the patterns of behavior problems shown by representative samples of 89 rural and 221 urban children provided support for these hypotheses, as significant rural-urban differences emerged in school and not in home settings. Cross-situational patterns of behavior problems also varied across setting, with home-only patterns of problems characterizing more children at the rural site and school-only patterns of behavior problems characterizing more children at the urban sites. In addition, whereas externalizing behavior was the primary school problem exhibited by urban children, rural children displayed significantly higher rates of internalizing problems at school. The implications of these results are discussed for developmental models of behavior problems and for preventive interventions. PMID:19834584
Kim, Hyo-Jun; Shin, Min-Ho; Lee, Jae-Yong; Kim, Ji-Hoon; Kim, Young-Joo
2017-05-15
An optically efficient liquid-crystal display (LCD) structure using a patterned quantum dot (QD) film and a short-pass filter (SPF) was proposed and fabricated. The patterned QD film contributed to the generation of 95% in the area ratio (or 90% in the coverage ratio) of the Rec. 2020 color gamut. This was achieved by avoiding the problem of interaction between white backlight and broad transmittance spectra of color filters (CFs) as seen in a conventional LCD with a mixed QD film as a reference. The patterned QD film can maintain the narrow bandwidth of the green and the red QD colors before passing through the CFs. Additionally, the optical intensities of the red, green, and blue spectra were enhanced to 1.63, 1.72, and 2.16 times the reference LCD values, respectively. This was a result of separated emission of the red and green patterned QD film and reflection of the red and green light to the forward direction by the SPF.
Model-based multiple patterning layout decomposition
NASA Astrophysics Data System (ADS)
Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.
2015-10-01
As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in keeping pace with the 10 nm technology node and beyond. With feature sizes continuing to shrink, it has become impossible to print dense layouts within a single exposure. As a result, MPL such as double patterning lithography (DPL) and triple patterning lithography (TPL) has been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint and simply decomposes polygons within that distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics; for example, it lacks information such as the optical source characteristics and the interactions between polygons beyond the minimum distance. To remedy this deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013.1 However, that algorithm1 relies on simplified assumptions about the optical simulation model, and its use on real layouts is therefore limited. Recently AMSL2 also proposed a model-based approach to layout decomposition that iteratively simulates the layout, which requires excessive computational resources, may lead to sub-optimal solutions, and potentially generates too many stitches. In this paper, we propose a model-based MPL layout decomposition method using a pre-simulated library of frequent layout patterns. Instead of using the graph G of the standard graph-coloring formulation, we build an expanded graph H in which each vertex represents a group of adjacent features together with a coloring solution. By utilizing the library and running sophisticated graph algorithms on H, our approach obtains optimal decomposition results efficiently. Our model-based solution achieves a practical mask design that significantly improves the lithography quality on the wafer compared to rule-based decomposition.
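The rule-based baseline that the paper improves upon reduces to k-colouring a conflict graph, sketched below with invented feature coordinates and an arbitrary dmin; the model-based library lookup on the expanded graph H is not reproduced here.

```python
import itertools
import numpy as np

def conflict_graph(centers, dmin):
    """Adjacency sets with edges between features closer than dmin."""
    n = len(centers)
    adj = {i: set() for i in range(n)}
    for i, j in itertools.combinations(range(n), 2):
        if np.linalg.norm(np.subtract(centers[i], centers[j])) < dmin:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def k_color(adj, k):
    """Backtracking k-colouring; returns a colour per vertex or None if infeasible."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    colors = {}

    def assign(idx):
        if idx == len(order):
            return True
        v = order[idx]
        for c in range(k):
            if all(colors.get(u) != c for u in adj[v]):
                colors[v] = c
                if assign(idx + 1):
                    return True
                del colors[v]
        return False

    return colors if assign(0) else None

centers = [(0, 0), (40, 0), (80, 0), (20, 30), (60, 30)]  # hypothetical centroids, nm
masks = k_color(conflict_graph(centers, dmin=50), k=3)    # TPL: three masks
print(masks)
```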
Seasonality in seeking mental health information on Google.
Ayers, John W; Althouse, Benjamin M; Allem, Jon-Patrick; Rosenquist, J Niels; Ford, Daniel E
2013-05-01
Population mental health surveillance is an important challenge limited by resource constraints, long time lags in data collection, and stigma. One promising approach to bridge similar gaps elsewhere has been the use of passively generated digital data. This article assesses the viability of aggregate Internet search queries for real-time monitoring of several mental health problems, specifically in regard to seasonal patterns of seeking out mental health information. All Google mental health queries were monitored in the U.S. and Australia from 2006 to 2010. Additionally, queries were subdivided among those including the terms ADHD (attention deficit-hyperactivity disorder); anxiety; bipolar; depression; anorexia or bulimia (eating disorders); OCD (obsessive-compulsive disorder); schizophrenia; and suicide. A wavelet phase analysis was used to isolate seasonal components in the trends, and based on this model, the mean search volume in winter was compared with that in summer, as performed in 2012. All mental health queries followed seasonal patterns with winter peaks and summer troughs amounting to a 14% (95% CI=11%, 16%) difference in volume for the U.S. and 11% (95% CI=7%, 15%) for Australia. These patterns also were evident for all specific subcategories of illness or problem. For instance, seasonal differences ranged from 7% (95% CI=5%, 10%) for anxiety (followed by OCD, bipolar, depression, suicide, ADHD, schizophrenia) to 37% (95% CI=31%, 44%) for eating disorder queries in the U.S. Several nonclinical motivators for query seasonality (such as media trends or academic interest) were explored and rejected. Information seeking on Google across all major mental illnesses and/or problems followed seasonal patterns similar to those found for seasonal affective disorder. These are the first data published on patterns of seasonality in information seeking encompassing all the major mental illnesses, notable also because they likely would have gone undetected using traditional surveillance. Copyright © 2013. Published by Elsevier Inc.
A general introduction to aeroacoustics and atmospheric sound
NASA Technical Reports Server (NTRS)
Lighthill, James
1992-01-01
A single unifying principle (based upon the nonlinear 'momentum-flux' effects produced when different components of a motion transport different components of its momentum) is used to give a broad scientific background to several aspects of the interaction between airflows and atmospheric sound. First, it treats the generation of sound by airflows of many different types. These include, for example, jet-like flows involving convected turbulent motions (with the resulting aeroacoustic radiation sensitively dependent on the Mach number of convection) and they include, as an extreme case, the supersonic 'boom' (shock waves generated by a supersonically convected flow pattern). Next, an analysis is given of sound propagation through nonuniformly moving airflows, and the exchange is quantified of energy between flow and sound; while, finally, problems are examined of how sound waves 'on their own' may generate the airflows known as acoustic streaming.
Automated speech understanding: the next generation
NASA Astrophysics Data System (ADS)
Picone, J.; Ebel, W. J.; Deshmukh, N.
1995-04-01
Modern speech understanding systems merge interdisciplinary technologies from Signal Processing, Pattern Recognition, Natural Language, and Linguistics into a unified statistical framework. These systems, which have applications in a wide range of signal processing problems, represent a revolution in Digital Signal Processing (DSP). Once a field dominated by vector-oriented processors and linear algebra-based mathematics, the current generation of DSP-based systems rely on sophisticated statistical models implemented using a complex software paradigm. Such systems are now capable of understanding continuous speech input for vocabularies of several thousand words in operational environments. The current generation of deployed systems, based on small vocabularies of isolated words, will soon be replaced by a new technology offering natural language access to vast information resources such as the Internet, and provide completely automated voice interfaces for mundane tasks such as travel planning and directory assistance.
Nonlinearity response correction in phase-shifting deflectometry
NASA Astrophysics Data System (ADS)
Nguyen, Manh The; Kang, Pilseong; Ghim, Young-Sik; Rhee, Hyug-Gyo
2018-04-01
Owing to the nonlinearity response of digital devices such as screens and cameras in phase-shifting deflectometry, non-sinusoidal phase-shifted fringe patterns are generated and additional measurement errors are introduced. In this paper, a new deflectometry technique is described for overcoming these problems using a pre-distorted pattern combined with an advanced iterative algorithm. The experiment results show that this method can reconstruct the 3D surface map of a sample without fringe print-through caused by the nonlinearity response of digital devices. The proposed technique is verified by measuring the surface height variations in a deformable mirror and comparing them with the measurement result obtained using a coordinate measuring machine. The difference between the two measurement results is estimated to be less than 13 µm.
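For context, a minimal sketch of the four-step phase-shifting computation and of the pre-distortion idea: fringes passed through a gamma-type nonlinearity produce a phase error, while pre-distorting the displayed pattern by the inverse response removes it. The gamma model and value are assumptions for illustration, not the paper's calibrated device response or its iterative correction algorithm.

```python
import numpy as np

def four_step_phase(frames):
    """Standard four-step phase-shifting algorithm (shifts 0, pi/2, pi, 3pi/2)."""
    i1, i2, i3, i4 = frames
    return np.arctan2(i4 - i2, i1 - i3)

def max_phase_error(estimate, truth):
    """Largest wrapped phase deviation between the estimate and the ground truth."""
    return np.max(np.abs(np.angle(np.exp(1j * (estimate - truth)))))

x = np.linspace(0, 4 * np.pi, 2000)               # ground-truth phase ramp
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
gamma = 2.2                                        # assumed display/camera nonlinearity
ideal = [0.5 + 0.5 * np.cos(x + s) for s in shifts]

plain = [img ** gamma for img in ideal]                          # fringe print-through
predistorted = [(img ** (1 / gamma)) ** gamma for img in ideal]  # device response cancelled

print(max_phase_error(four_step_phase(plain), x),         # noticeable ripple error
      max_phase_error(four_step_phase(predistorted), x))  # essentially zero
```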
Quadruped robots' modular trajectories: Stability issues
NASA Astrophysics Data System (ADS)
Pinto, Carla M. A.
2012-09-01
Pinto, Santos, Rocha and Matos [13, 12] study a CPG model for the generation of modular trajectories of quadruped robots. They consider that each movement is composed of two types of primitives: rhythmic and discrete. The rhythmic primitive models the periodic patterns and the discrete primitive is inserted as a perturbation of those patterns. In this paper we begin to tackle numerically the problem of the stability of that mathematical model. We observe that if the discrete part is inserted in all limbs, with equal values, and as an offset of the rhythmic part, the obtained gait is stable and has the same spatial and spatio-temporal symmetry groups as the purely rhythmic gait, differing only on the value of the offset.
Economic consequences of high throughput maskless lithography
NASA Astrophysics Data System (ADS)
Hartley, John G.; Govindaraju, Lakshmi
2005-11-01
Many people in the semiconductor industry bemoan the high costs of masks and view mask cost as one of the significant barriers to bringing new chip designs to market. All that is needed is a viable maskless technology and the problem will go away. Numerous sites around the world are working on maskless lithography, but inevitably the question asked is "Wouldn't a one wafer per hour maskless tool make a really good mask writer?" Of course, the answer is yes; the hesitation you hear in the answer isn't based on technology concerns, it's financial. The industry needs maskless lithography because mask costs are too high. Mask costs are too high because mask pattern generators (PG's) are slow and expensive. If mask PG's become much faster, mask costs go down, the maskless market goes away and the PG supplier is faced with an even smaller tool demand from the mask shops. Technical success becomes financial suicide - or does it? In this paper we will present the results of a model that examines some of the consequences of introducing high throughput maskless pattern generation. Specific features in the model include tool throughput for masks and wafers, market segmentation by node for masks and wafers, and mask cost as an entry barrier to new chip designs. How does the availability of low cost masks and maskless tools affect the industry's tool makeup, and what is the ultimate potential market for high throughput maskless pattern generators?
NASA Astrophysics Data System (ADS)
Pezzulo, Giovanni; Donnarumma, Francesco; Iodice, Pierpaolo; Prevete, Roberto; Dindo, Haris
2015-03-01
Controlling the body - given its huge number of degrees of freedom - poses severe computational challenges. Mounting evidence suggests that the brain alleviates this problem by exploiting "synergies", or patterns of muscle activities (and/or movement dynamics and kinematics) that can be combined to control action, rather than controlling individual muscles or joints [1-10].
Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Engelmann, Christian
Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexities. As a result the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that addresses concrete problems in the design of resilient systems. The complete catalog of resilience design patterns provides designers with reusable design elements. We also define a framework that enhances a designer's understanding of the important constraints and opportunities for the design patterns to be implemented and deployed at various layers of the system stack. This design framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also supports optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.
A novel method for repeatedly generating speckle patterns used in digital image correlation
NASA Astrophysics Data System (ADS)
Zhang, Juan; Sweedy, Ahmed; Gitzhofer, François; Baroud, Gamal
2018-01-01
Speckle patterns play a key role in Digital Image Correlation (DIC) measurement, and generating an optimal speckle pattern has been a goal for decades. The usual method of generating a speckle pattern is to manually spray paint onto the specimen. However, this makes it difficult to reproduce the optimal pattern for maintaining identical testing conditions and achieving consistent DIC results. This study proposed and evaluated a novel method using an atomization system to repeatedly generate speckle patterns. To verify the repeatability of the speckle patterns generated by this system, simulation and experimental studies were systematically performed. The results from both studies showed that the speckle patterns and, accordingly, the DIC measurements become highly accurate and repeatable using the proposed atomization system.
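A purely numerical stand-in for repeatable speckle generation is sketched below: rendering Gaussian dots at seeded random positions so that the same seed reproduces the identical pattern. This illustrates the repeatability idea only; it is not a model of the authors' atomization system, and the dot count, dot size and image size are arbitrary.

```python
import numpy as np

def speckle_pattern(shape=(256, 256), n_dots=1200, radius=2.5, seed=0):
    """Render a synthetic DIC speckle image as a sum of Gaussian dots at seeded
    random positions; reusing the same seed reproduces the identical pattern."""
    rng = np.random.default_rng(seed)
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    img = np.zeros(shape)
    xs = rng.uniform(0, w, n_dots)
    ys = rng.uniform(0, h, n_dots)
    for x0, y0 in zip(xs, ys):
        img += np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * radius ** 2))
    return np.clip(img / img.max(), 0.0, 1.0)

pattern_a = speckle_pattern(seed=42)
pattern_b = speckle_pattern(seed=42)      # identical, i.e. "repeatable"
print(np.array_equal(pattern_a, pattern_b))
```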
Multisource Transfer Learning With Convolutional Neural Networks for Lung Pattern Analysis.
Christodoulidis, Stergios; Anthimopoulos, Marios; Ebner, Lukas; Christe, Andreas; Mougiakakou, Stavroula
2017-01-01
Early diagnosis of interstitial lung diseases is crucial for their treatment, but even experienced physicians find it difficult, as their clinical manifestations are similar. In order to assist with the diagnosis, computer-aided diagnosis systems have been developed. These commonly rely on a fixed scale classifier that scans CT images, recognizes textural lung patterns, and generates a map of pathologies. In a previous study, we proposed a method for classifying lung tissue patterns using a deep convolutional neural network (CNN), with an architecture designed for the specific problem. In this study, we present an improved method for training the proposed network by transferring knowledge from the similar domain of general texture classification. Six publicly available texture databases are used to pretrain networks with the proposed architecture, which are then fine-tuned on the lung tissue data. The resulting CNNs are combined in an ensemble and their fused knowledge is compressed back to a network with the original architecture. The proposed approach resulted in an absolute increase of about 2% in the performance of the proposed CNN. The results demonstrate the potential of transfer learning in the field of medical image analysis, indicate the textural nature of the problem and show that the method used for training a network can be as important as designing its architecture.
NASA Astrophysics Data System (ADS)
Xue, Xiaoxiao; Xuan, Yi; Bao, Chengying; Li, Shangyuan; Zheng, Xiaoping; Zhou, Bingkun; Qi, Minghao; Weiner, Andrew M.
2018-06-01
Microwave phased array antennas (PAAs) are very attractive for defense applications and high-speed wireless communications because of their ability to perform fast beam scanning and complex beam pattern control. However, traditional PAAs based on phase shifters suffer from the beam-squint problem and have limited bandwidths. True-time-delay (TTD) beamforming based on low-loss photonic delay lines can solve this problem, but it is still quite challenging to build large-scale photonic TTD beamformers due to their high hardware complexity. In this paper, we demonstrate a photonic TTD beamforming network based on a miniature microresonator frequency comb (microcomb) source and dispersive time delay. A method incorporating optical phase modulation and programmable spectral shaping is proposed for positive and negative apodization weighting to achieve arbitrary microwave beam pattern control. The experimentally demonstrated TTD beamforming network can support a PAA with 21 elements. The microwave frequency range is 8-20 GHz, and the beam scanning range is ±60.2°. Detailed measurements of the microwave amplitudes and phases are performed. The beamforming performances of Gaussian beams, rectangular beams and beam notch steering are evaluated through simulations by assuming a uniform radiating antenna array. The scheme can potentially support larger PAAs with hundreds of elements by increasing the number of comb lines with broadband microcomb generation.
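The squint-free property of true time delay can be checked with a short array-factor calculation: with per-element delays n*d*sin(theta0)/c, the beam peak stays at the steering angle across the band. The element spacing and steering angle below are illustrative assumptions; only the element count (21) and the 8-20 GHz range are taken from the abstract.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def ttd_array_factor(n_elements, spacing, steer_deg, freqs_hz, angles_deg):
    """Array factor of a uniform linear array steered with true time delays.
    The per-element delay n*d*sin(theta0)/c is frequency independent, so the
    main-beam direction does not squint across the band."""
    theta0 = np.radians(steer_deg)
    angles = np.radians(angles_deg)
    n = np.arange(n_elements)
    delays = n * spacing * np.sin(theta0) / C                   # seconds
    af = []
    for f in freqs_hz:
        k = 2 * np.pi * f / C
        phase = (k * spacing * np.sin(angles)[:, None] * n[None, :]
                 - 2 * np.pi * f * delays[None, :])
        af.append(np.abs(np.exp(1j * phase).sum(axis=1)) / n_elements)
    return np.array(af)

angles = np.linspace(-90, 90, 721)
af = ttd_array_factor(n_elements=21, spacing=0.0075, steer_deg=30,
                      freqs_hz=[8e9, 14e9, 20e9], angles_deg=angles)
print([round(angles[row.argmax()], 1) for row in af])   # peak stays near 30 deg
```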
NASA Astrophysics Data System (ADS)
Ángel López Comino, José; Stich, Daniel; Ferreira, Ana M. G.; Morales Soto, José
2015-04-01
The inversion of seismic data for extended fault slip distributions provides us with detailed models of earthquake sources. The validity of the solutions depends on the fit between observed and synthetic seismograms generated with the source model. However, there may exist more than one model that fits the data in a similar way, leading to a multiplicity of solutions. This underdetermined problem has been analyzed and studied by several authors, who agree that inverting for a single best model may become overly dependent on the details of the procedure. We have addressed this resolution problem by using a global search that scans the solution domain using random slipmaps, applying a Popperian inversion strategy that involves the generation of a representative set of slip distributions. The proposed technique solves the forward problem for a large set of models, calculating their corresponding synthetic seismograms. We then perform extended fault inversion through falsification, that is, we falsify inappropriate trial models that do not reproduce the data within a reasonable level of mismodelling. The remainder of surviving trial models forms our set of coequal solutions. Thereby the ambiguities that might exist can be detected by inspecting the solutions, allowing for an efficient assessment of the resolution. The solution set may contain only members with similar slip distributions, or else uncover some fundamental ambiguity like, for example, different patterns of main slip patches or different patterns of rupture propagation. For a feasibility study, the proposed resolution test has been evaluated using teleseismic body wave recordings from the September 5th 2012 Nicoya, Costa Rica earthquake. Note that the inversion strategy can be applied to any type of seismic, geodetic or tsunami data for which we can handle the forward problem. A 2D von Karman distribution is used to describe the spectrum of heterogeneity in slipmaps, and we generate possible models by spectral synthesis with random phase, keeping the rake angle, rupture velocity and slip velocity function fixed. The 2012 Nicoya earthquake turns out to be relatively well constrained from 50 teleseismic waveforms. The solution set contains 252 out of 10,000 trial models with a normalized L1-fit within 5 percent of the global minimum. The set includes only similar solutions (a single centred slip patch) with minor differences. Uncertainties are related to the details of the slip maximum, including the amount of peak slip (2 m to 3.5 m), as well as the characteristics of peripheral slip below 1 m. Synthetic tests suggest that slip patterns like Nicoya may be a fortunate case, while it may be more difficult to unambiguously reconstruct more distributed slip from teleseismic data.
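The random-slipmap generation step can be sketched as spectral synthesis with a von Karman-like power spectrum and random phase, as below. The grid size, correlation length and Hurst exponent are illustrative, and the falsification of trial models against observed waveforms, which is the core of the method, is not shown.

```python
import numpy as np

def random_slipmap(nx=64, nz=32, dx=1.0, corr_len=10.0, hurst=0.75, seed=0):
    """Spectral synthesis of a random slip distribution with a von Karman-like
    power spectrum P(k) ~ (1 + (k*a)**2)**-(H + 1) and uniformly random phase."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
    kz = np.fft.fftfreq(nz, d=dx) * 2 * np.pi
    k = np.sqrt(kx[None, :] ** 2 + kz[:, None] ** 2)
    amp = (1.0 + (k * corr_len) ** 2) ** (-(hurst + 1) / 2)   # sqrt of the power spectrum
    phase = np.exp(2j * np.pi * rng.random((nz, nx)))
    field = np.real(np.fft.ifft2(amp * phase))
    slip = field - field.min()          # enforce non-negative slip
    return slip / slip.mean()           # unit mean; rescale to a target moment later

trial = random_slipmap(seed=7)
print(trial.shape, round(float(trial.max()), 2))
```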
Early stage hot spot analysis through standard cell base random pattern generation
NASA Astrophysics Data System (ADS)
Jeon, Joong-Won; Song, Jaewan; Kim, Jeong-Lim; Park, Seongyul; Yang, Seung-Hune; Lee, Sooryong; Kang, Hokyu; Madkour, Kareem; ElManhawy, Wael; Lee, SeungJo; Kwan, Joe
2017-04-01
Due to the limited availability of DRC-clean patterns during process and RET recipe development, OPC recipes are not tested with high pattern coverage. A wide variety of patterns can help OPC engineers detect patterns sensitive to lithographic effects, so random pattern generation is needed to secure a robust OPC recipe. However, simple random patterns that do not consider the real product layout style cannot cover production-level patterning hotspots, and using them for OPC optimization is not effective; it is therefore important to generate random patterns similar to real product patterns. This paper presents a strategy for generating random patterns based on design architecture information and preventing hotspots in the early process development stage through a tool called Layout Schema Generator (LSG). Using LSG, we generate standard-cell-based random patterns reflecting the real design cell structure: fin pitch, gate pitch and cell height. The output standard cells from LSG are fed to an analysis methodology that assesses their hotspot severity by assigning a score according to their optical image parameters (NILS, MEEF, %PV band), so that potential hotspots can be identified by ranking. This flow is demonstrated on Samsung 7 nm technology, optimizing the OPC recipe and avoiding problematic patterns early enough in process development.
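The severity-ranking idea can be sketched as a simple composite score over the optical image parameters named above; the normalization, weights and toy values below are illustrative assumptions rather than the scoring recipe actually used in the paper.

```python
import numpy as np

def severity_scores(nils, meef, pv_band, weights=(1.0, 1.0, 1.0)):
    """Combine normalized optical-image metrics into one severity score.
    Low NILS, high MEEF and a large %PV band all push a pattern toward 'hotspot'."""
    def normalize(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    w_nils, w_meef, w_pv = weights
    return (w_nils * (1.0 - normalize(nils))   # lower NILS  -> worse
            + w_meef * normalize(meef)         # higher MEEF -> worse
            + w_pv * normalize(pv_band))       # larger PV band -> worse

# toy metrics for five generated patterns (hypothetical values)
nils = [2.1, 1.4, 1.9, 1.1, 2.4]
meef = [3.0, 5.5, 3.8, 6.2, 2.7]
pv   = [4.0, 9.5, 6.0, 11.0, 3.5]
score = severity_scores(nils, meef, pv)
print("pattern ranking, worst first:", np.argsort(score)[::-1])
```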
Bolton, Matthew L.; Bass, Ellen J.; Siminiceanu, Radu I.
2012-01-01
Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel’s zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the state space and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development. PMID:23105914
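A minimal sketch of the four zero-order phenotypes applied to a normative action sequence is shown below; the radiotherapy-flavoured action names are illustrative only, and this does not reproduce the paper's formal task-model transformation or model-checking integration.

```python
import random

def apply_phenotype(actions, kind, rng, alphabet=None):
    """Apply one zero-order erroneous-action phenotype to a normative sequence."""
    a = list(actions)
    i = rng.randrange(len(a))
    if kind == "omission":                 # skip an action
        del a[i]
    elif kind == "repetition":             # perform an action twice
        a.insert(i, a[i])
    elif kind == "jump":                   # jump ahead, skipping intermediate steps
        j = rng.randrange(i, len(a))
        del a[i:j]
    elif kind == "intrusion":              # insert an action foreign to the task
        a.insert(i, rng.choice(alphabet or ["unrelated_action"]))
    return a

rng = random.Random(1)
normative = ["select_patient", "enter_dose", "confirm_dose", "start_beam"]  # hypothetical
for kind in ("omission", "repetition", "jump", "intrusion"):
    print(kind, "->", apply_phenotype(normative, kind, rng))
```

Applying such transformations repeatedly, as the abstract notes, composes higher-order phenotypes from sequences of zero-order ones.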
The brain-sex theory of occupational choice: a counterexample.
Esgate, Anthony; Flynn, Maria
2005-02-01
The brain-sex theory of occupational choice suggests that males and females in male-typical careers show a male pattern of cognitive ability in terms of better spatial than verbal performance on cognitive tests with the reverse pattern for females and males in female-typical careers. These differences are thought to result from patterns of cerebral functional lateralisation. This study sought such occupationally related effects using synonym generation (verbal ability) and mental rotation (spatial ability) tasks used previously. It also used entrants to these careers as participants to examine whether patterns of cognitive abilities might predate explicit training and practice. Using a population of entrants to sex-differentiated university courses, a moderate occupational effect on the synonym generation task was found, along with a weak (p < .10) sex effect on the mental rotation task. Highest performance on the mental rotation task was by female students in fashion design, a female-dominated occupation which makes substantial visuospatial demands and attracts many students with literacy problems such as dyslexia. This group then appears to be a counterexample to the brain-sex theory. However, methodological issues surrounding previous studies are highlighted: the simple synonym task appears to show limited discrimination of the sexes, leading to questions concerning the legitimacy of inferences about lateralisation based on scores from that test. Moreover, the human figure-based mental rotation task appears to tap the wrong aspect of visuospatial skill, likely to be needed for male-typical courses such as engineering. Since the fashion-design career is also one that attracts disproportionately many male students whose sexual orientation is homosexual, data were examined for evidence of female-typical patterns of cognitive performance among that subgroup. This was not found. This study therefore provides no evidence for the claim that female-pattern cerebral functional lateralisation is likely in gay males.
Stochastic Generation of Monthly Rainfall Data
NASA Astrophysics Data System (ADS)
Srikanthan, R.
2009-03-01
Monthly rainfall data is generally needed in the simulation of water resources systems, and in the estimation of water yield from large catchments. Monthly streamflow data generation models are usually applied to generate monthly rainfall data, but this presents problems for most regions, which have a significant number of months with no rainfall. In an earlier study, Srikanthan et al. (J. Hydrol. Eng., ASCE 11(3) (2006) 222-229) recommended the modified method of fragments to disaggregate the annual rainfall data generated by a first-order autoregressive model. The main drawback of this approach is the occurrence of similar patterns when only a short length of historic data is available. Porter and Pink (Hydrol. Water Res. Symp. (1991) 187-191) used synthetic fragments from a Thomas-Fiering monthly model to overcome this drawback. As an alternative, a new two-part monthly model is nested in an annual model to generate monthly rainfall data that preserve both the monthly and annual characteristics. This nested model was applied to generate rainfall data from seven rainfall stations located in eastern and southern parts of Australia, and the results showed that the model performed satisfactorily.
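A minimal sketch of the basic method-of-fragments disaggregation (annual totals split into months using the monthly proportions of the historic year with the nearest annual total) is given below; the toy data and nearest-year selection rule are simplifying assumptions and do not reproduce the nested two-part model proposed in the paper.

```python
import numpy as np

def method_of_fragments(generated_annual, historic_monthly):
    """Disaggregate generated annual rainfall totals into monthly values using
    the monthly fragments of the historic year with the closest annual total."""
    historic_monthly = np.asarray(historic_monthly, dtype=float)   # (years, 12)
    historic_annual = historic_monthly.sum(axis=1)
    fragments = historic_monthly / historic_annual[:, None]        # rows sum to 1
    out = []
    for total in generated_annual:
        k = np.argmin(np.abs(historic_annual - total))             # nearest historic year
        out.append(total * fragments[k])
    return np.array(out)

rng = np.random.default_rng(0)
historic = rng.gamma(shape=2.0, scale=40.0, size=(30, 12))   # toy 30-year monthly record
annual = rng.normal(historic.sum(axis=1).mean(), 150.0, size=5)
monthly = method_of_fragments(annual, historic)
print(monthly.shape, np.allclose(monthly.sum(axis=1), annual))
```

The drawback noted above is visible here: with a short historic record, the same fragment rows are reused repeatedly, so the generated monthly patterns repeat.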
Second-harmonic generation from a thin spherical layer and no-generation conditions
NASA Astrophysics Data System (ADS)
Kapshai, V. N.; Shamyna, A. A.
2017-09-01
In the Rayleigh-Gans-Debye approximation, we solve the problem of second-harmonic generation by an elliptically polarized electromagnetic wave incident on the surface of a spherical particle that is coated by an optically nonlinear layer and is placed in a dielectric. The formulas obtained characterize the spatial distribution of the electric field of the second harmonic in the far-field zone. The most general form of the second-order dielectric susceptibility tensor is considered, which contains four independent components, with three of them being nonchiral and one, chiral. Consistencies and inconsistencies between the obtained solution and formulas from the works of other authors are identified. We analyze the directivity patterns that characterize the spatial distribution of the generated radiation for the nonchiral layer and their dependences on the anisotropy and ellipticity coefficients of the incident wave. It is found that, with increasing radius of the nonlinear layer, the generated radiation becomes more directional. Combinations of parameters for which no radiation is generated are revealed. Based on this, we propose methods for experimental determination of the anisotropy coefficients.
Hamilton, Craig S; Kruse, Regina; Sansoni, Linda; Barkhofen, Sonja; Silberhorn, Christine; Jex, Igor
2017-10-27
Boson sampling has emerged as a tool to explore the advantages of quantum over classical computers as it does not require universal control over the quantum system, which favors current photonic experimental platforms. Here, we introduce Gaussian Boson sampling, a classically hard-to-solve problem that uses squeezed states as a nonclassical resource. We relate the probability to measure specific photon patterns from a general Gaussian state in the Fock basis to a matrix function called the Hafnian, which answers the last remaining question of sampling from Gaussian states. Based on this result, we design Gaussian Boson sampling, a #P hard problem, using squeezed states. This demonstrates that Boson sampling from Gaussian states is possible, with significant advantages in the photon generation probability, compared to existing protocols.
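To make the central quantity concrete, the sketch below gives a brute-force reference implementation of the hafnian as a sum over perfect matchings; it is exponential in the matrix size and is not the loop hafnian or any of the faster algorithms used in practice.

```python
import numpy as np

def hafnian(A):
    """Hafnian of an even-dimensional symmetric matrix: the sum over all
    perfect matchings of the product of matched entries. Exponential time,
    intended only for very small matrices."""
    A = np.asarray(A)
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0

    def haf(rem):
        if not rem:
            return 1.0
        i = rem[0]
        total = 0.0
        for pos in range(1, len(rem)):       # pair i with every remaining index
            j = rem[pos]
            rest = rem[1:pos] + rem[pos + 1:]
            total += A[i, j] * haf(rest)
        return total

    return haf(list(range(n)))

A = np.ones((4, 4))
print(hafnian(A))   # 3 perfect matchings of K4 -> 3.0
```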
Time studies in A&E departments--a useful tool for management.
Aharonson-Daniel, L; Fung, H; Hedley, A J
1996-01-01
A time and motion study was conducted in an accident and emergency (A&E) department in a Hong Kong Government hospital in order to suggest solutions for severe queuing problems found in A&E. The study provided useful information about the patterns of arrival and service; the throughput; and the factors that influence the length of the queue at the A&E department. Plans for building a computerized simulation model were dropped as new intelligence generated by the study enabled problem solving using simple statistical analysis and common sense. Demonstrates some potential benefits for management in applying operations research methods in busy clinical working environments. The implementation of the recommendations made by this study successfully eliminated queues in A&E.
Community detection in networks: A user guide
NASA Astrophysics Data System (ADS)
Fortunato, Santo; Hric, Darko
2016-11-01
Community detection in networks is one of the most popular topics of modern network science. Communities, or clusters, are usually groups of vertices having higher probability of being connected to each other than to members of other groups, though other patterns are possible. Identifying communities is an ill-defined problem. There are no universal protocols on the fundamental ingredients, like the definition of community itself, nor on other crucial issues, like the validation of algorithms and the comparison of their performances. This has generated a number of confusions and misconceptions, which undermine the progress in the field. We offer a guided tour through the main aspects of the problem. We also point out strengths and weaknesses of popular methods, and give directions to their use.
Cilia in vertebrate left–right patterning
Dasgupta, Agnik
2016-01-01
Understanding how left–right (LR) asymmetry is generated in vertebrate embryos is an important problem in developmental biology. In humans, a failure to align the left and right sides of cardiovascular and/or gastrointestinal systems often results in birth defects. Evidence from patients and animal models has implicated cilia in the process of left–right patterning. Here, we review the proposed functions for cilia in establishing LR asymmetry, which include creating transient leftward fluid flows in an embryonic ‘left–right organizer’. These flows direct asymmetric activation of a conserved Nodal (TGFβ) signalling pathway that guides asymmetric morphogenesis of developing organs. We discuss the leading hypotheses for how cilia-generated asymmetric fluid flows are translated into asymmetric molecular signals. We also discuss emerging mechanisms that control the subcellular positioning of cilia and the cellular architecture of the left–right organizer, both of which are critical for effective cilia function during left–right patterning. Finally, using mosaic cell-labelling and time-lapse imaging in the zebrafish embryo, we provide new evidence that precursor cells maintain their relative positions as they give rise to the ciliated left–right organizer. This suggests the possibility that these cells acquire left–right positional information prior to the appearance of cilia. This article is part of the themed issue ‘Provocative questions in left–right asymmetry’. PMID:27821522
Chwała, Wiesław; Koziana, Agnieszka; Kasperczyk, Tadeusz; Płaszewski, Maciej
2014-01-01
Background. The question of how to correct and rehabilitate scoliosis remains one of the most difficult problems of orthopaedics. Controversies continue to arise regarding various types of both symmetric and asymmetric scoliosis-specific therapeutic exercises. Objective. The aim of the present paper was to conduct an electromyographic assessment of functional symmetry of paraspinal muscles during symmetric and asymmetric exercises in adolescents with idiopathic scoliosis. Materials and Methods. The study was conducted in a group of 82 girls, mean age 12.4 ± 2.3 years with single- or double-major-idiopathic scoliosis, Cobb angle 24 ± 9.4°. The functional biopotentials during isometric work of paraspinal muscles in “at rest” position and during two symmetric and four asymmetric exercises were measured with the use of the Muscle Tester ME 6000 electromyograph. Results. In general, asymmetric exercises were characterised by larger differences in bioelectrical activity of paraspinal muscles, in comparison with symmetric exercises, both in the groups of patients with single-curve and double-curve scoliosis. Conclusion. During symmetric and asymmetric exercises, muscle tension patterns differed significantly in both groups, in comparison with the examination at rest, in most cases generating positive corrective patterns. Asymmetric exercises generated divergent muscle tension patterns on the convex and concave sides of the deformity. PMID:25258713
Assembler: Efficient Discovery of Spatial Co-evolving Patterns in Massive Geo-sensory Data.
Zhang, Chao; Zheng, Yu; Ma, Xiuli; Han, Jiawei
2015-08-01
Recent years have witnessed the wide proliferation of geo-sensory applications wherein a bundle of sensors are deployed at different locations to cooperatively monitor the target condition. Given massive geo-sensory data, we study the problem of mining spatial co-evolving patterns (SCPs), i.e., groups of sensors that are spatially correlated and co-evolve frequently in their readings. SCP mining is of great importance to various real-world applications, yet it is challenging because (1) the truly interesting evolutions are often flooded by numerous trivial fluctuations in the geo-sensory time series; and (2) the pattern search space is extremely large due to the spatiotemporal combinatorial nature of SCP. In this paper, we propose a two-stage method called Assembler. In the first stage, Assembler filters trivial fluctuations using wavelet transform and detects frequent evolutions for individual sensors via a segment-and-group approach. In the second stage, Assembler generates SCPs by assembling the frequent evolutions of individual sensors. Leveraging the spatial constraint, it conceptually organizes all the SCPs into a novel structure called the SCP search tree, which facilitates the effective pruning of the search space to generate SCPs efficiently. Our experiments on both real and synthetic data sets show that Assembler is effective, efficient, and scalable.
Determining the number of fingers in the lifting Hele-Shaw problem
NASA Astrophysics Data System (ADS)
Miranda, Jose; Dias, Eduardo
2013-11-01
The lifting Hele-Shaw cell flow is a variation of the celebrated radial viscous fingering problem for which the upper cell plate is lifted uniformly at a specified rate. This procedure causes the formation of intricate interfacial patterns. Most theoretical studies determine the total number of emerging fingers by maximizing the linear growth rate, but this generates discrepancies between theory and experiments. In this work, we tackle the number of fingers selection problem in the lifting Hele-Shaw cell by employing the recently proposed maximum-amplitude criterion. Our linear stability analysis accounts for the action of capillary, viscous normal stresses, and wetting effects, as well as the cell confinement. The comparison of our results with very precise laboratory measurements for the total number of fingers shows a significantly improved agreement between theoretical predictions and experimental data. We thank CNPq (Brazilian Sponsor) for financial support.
Buy three but get only two: the smallest effect in a 2 × 2 ANOVA is always uninterpretable.
Garcia-Marques, Leonel; Garcia-Marques, Teresa; Brauer, Markus
2014-12-01
Loftus (Memory & Cognition 6:312-319, 1978) distinguished between interpretable and uninterpretable interactions. Uninterpretable interactions are ambiguous, because they may be due to two additive main effects (no interaction) and a nonlinear relationship between the (latent) outcome variable and its indicator. Interpretable interactions can only be due to the presence of a true interactive effect in the outcome variable, regardless of the relationship that it establishes with its indicator. In the present article, we first show that the same problem can arise when an unmeasured mediator has a nonlinear effect on the measured outcome variable. Then we integrate Loftus's arguments with a seemingly contradictory approach to interactions suggested by Rosnow and Rosenthal (Psychological Bulletin 105:143-146, 1989). We show that entire data patterns, not just interaction effects alone, produce interpretable or noninterpretable interactions. Next, we show that the same problem of interpretability can apply to main effects. Lastly, we give concrete advice on what researchers can do to generate data patterns that provide unambiguous evidence for hypothesized interactions.
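The core argument can be illustrated with a small simulation, sketched below under assumed parameters: two purely additive effects on a latent outcome, observed through a nonlinear but monotone indicator, yield a clearly nonzero interaction contrast in the 2 × 2 cell means.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                                  # observations per cell

def latent(a, b):
    """Latent outcome with purely additive effects of factors A and B."""
    return 1.0 * a + 1.0 * b + rng.normal(0.0, 1.0, size=n)

indicator = np.exp                          # nonlinear, monotone measurement mapping

cells = {}
for a in (0, 1):
    for b in (0, 1):
        cells[(a, b)] = indicator(latent(a, b)).mean()

interaction = (cells[(1, 1)] - cells[(1, 0)]) - (cells[(0, 1)] - cells[(0, 0)])
print("cell means:", {k: round(v, 2) for k, v in cells.items()})
print("observed interaction contrast:", round(interaction, 2))   # clearly nonzero
```

The observed interaction here is an artifact of the measurement scale, which is exactly the ambiguity the article attributes to uninterpretable interactions.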
Decimated Input Ensembles for Improved Generalization
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Oza, Nikunj C.; Norvig, Peter (Technical Monitor)
1999-01-01
Recently, many researchers have demonstrated that using classifier ensembles (e.g., averaging the outputs of multiple classifiers before reaching a classification decision) leads to improved performance for many difficult generalization problems. However, in many domains there are serious impediments to such "turnkey" classification accuracy improvements. Most notable among these is the deleterious effect of highly correlated classifiers on the ensemble performance. One particular solution to this problem is generating "new" training sets by sampling the original one. However, with a finite number of patterns, this causes a reduction in the training patterns each classifier sees, often resulting in considerably worsened generalization performance (particularly for high dimensional data domains) for each individual classifier. Generally, this drop in the accuracy of the individual classifier performance more than offsets any potential gains due to combining, unless diversity among classifiers is actively promoted. In this work, we introduce a method that: (1) reduces the correlation among the classifiers; (2) reduces the dimensionality of the data, thus lessening the impact of the 'curse of dimensionality'; and (3) improves the classification performance of the ensemble.
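A minimal sketch of the general idea (each ensemble member trained on a different, reduced set of input features so that every member still sees all training patterns) is shown below; the classifier, synthetic data set and random feature-subset rule are assumptions and not the paper's exact input-decimation procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=40, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

n_members, n_keep = 7, 15            # ensemble size and features per member (assumed)
members = []
for _ in range(n_members):
    feats = rng.choice(X.shape[1], size=n_keep, replace=False)   # decimated inputs
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, feats], y_tr)
    members.append((feats, clf))

# average the members' class-probability outputs before deciding
proba = np.mean([clf.predict_proba(X_te[:, f])[:, 1] for f, clf in members], axis=0)
acc_ensemble = np.mean((proba > 0.5) == y_te)
acc_single = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
print(f"single classifier: {acc_single:.3f}  decimated ensemble: {acc_ensemble:.3f}")
```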
Detection Copy Number Variants from NGS with Sparse and Smooth Constraints.
Zhang, Yue; Cheung, Yiu-Ming; Xu, Bo; Su, Weifeng
2017-01-01
It is known that copy number variations (CNVs) are associated with complex diseases and particular tumor types, thus reliable identification of CNVs is of great potential value. Recent advances in next generation sequencing (NGS) data analysis have helped manifest the richness of CNV information. However, the performances of these methods are not consistent. Reliably finding CNVs in NGS data in an efficient way remains a challenging topic, worthy of further investigation. Accordingly, we tackle the problem by formulating CNV identification as a quadratic optimization problem involving two constraints. By imposing the constraints of sparsity and smoothness, the reconstructed read depth signal from NGS is anticipated to fit the CNV patterns more accurately. An efficient numerical solution tailored from the alternating direction minimization (ADM) framework is elaborated. We demonstrate the advantages of the proposed method, namely ADM-CNV, by comparing it with six popular CNV detection methods using synthetic, simulated, and empirical sequencing data. It is shown that the proposed approach can successfully reconstruct CNV patterns from raw data, and achieve superior or comparable performance in detection of CNVs compared to the existing counterparts.
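A simplified sketch of the sparse-and-smooth reconstruction idea is given below; it uses proximal gradient (ISTA) on a surrogate objective with a quadratic smoothness penalty and an L1 sparsity penalty, which is not the paper's ADM solver, and all parameters and data are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_smooth_fit(y, lam_sparse=0.5, lam_smooth=2.0, n_iter=500):
    """Reconstruct a piecewise signal x from noisy read-depth deviations y by
    minimizing 0.5*||y - x||^2 + lam_smooth*||Dx||_2^2 + lam_sparse*||x||_1
    with proximal gradient (ISTA), where D is the first-difference operator."""
    y = np.asarray(y, dtype=float)
    x = np.zeros_like(y)
    # Lipschitz constant of the smooth part: 1 + 2*lam_smooth*||D^T D|| <= 1 + 8*lam_smooth
    step = 1.0 / (1.0 + 8.0 * lam_smooth)
    for _ in range(n_iter):
        d = np.diff(x)
        grad = x - y
        grad[:-1] += 2.0 * lam_smooth * (-d)     # gradient of the smoothness term
        grad[1:] += 2.0 * lam_smooth * d
        x = soft_threshold(x - step * grad, step * lam_sparse)
    return x

# toy read-depth deviation: one duplicated segment buried in noise
rng = np.random.default_rng(3)
truth = np.zeros(300)
truth[120:180] = 1.0
y = truth + rng.normal(0, 0.4, size=300)
x = sparse_smooth_fit(y)
print("segment mean:", round(x[120:180].mean(), 2), "background mean:", round(x[:100].mean(), 2))
```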
Stripes and belly-spots – a review of pigment cell morphogenesis in vertebrates
Kelsh, Robert N.; Harris, Melissa L.; Colanesi, Sarah; Erickson, Carol A.
2009-01-01
Pigment patterns in the integument have long attracted attention from both scientists and non-scientists alike since their natural attractiveness combines with their excellence as models for the general problem of pattern formation. Pigment cells are formed from the neural crest and must migrate to reach their final locations. In this review, we focus on our current understanding of mechanisms underlying the control of pigment cell migration and patterning in diverse vertebrates. The model systems discussed here – chick, mouse, and zebrafish – each provide unique insights into the major morphogenetic events driving pigment pattern formation. In birds and mammals, melanoblasts must be specified before they can migrate on the dorsolateral pathway. Transmembrane receptors involved in guiding them onto this route include EphB2 and Ednrb2 in chick, and Kit in mouse. Terminal migration depends, in part, upon extracellular matrix reorganization by ADAMTS20. Invasion of the ectoderm, especially into the feather germ and hair follicles, requires specific signals that are beginning to be characterized. We summarize our current understanding of the mechanisms regulating melanoblast number and organization in the epidermis. We note the apparent differences in pigment pattern formation in poikilothermic vertebrates when compared with birds and mammals. With more pigment cell types, migration pathways are more complex and largely unexplored; nevertheless, a role for Kit signaling in melanophore migration is clear and indicates that at least some patterning mechanisms may be highly conserved. We summarize the multiple factors thought to contribute to zebrafish embryonic pigment pattern formation, highlighting a recent study identifying Sdf1a as one factor crucial for regulation of melanophore positioning. Finally, we discuss the mechanisms generating a second, metamorphic pigment pattern in adult fish, emphasizing recent studies strengthening the evidence that undifferentiated progenitor cells play a major role in generating adult pigment cells. PMID:18977309
Behavioral pattern identification for structural health monitoring in complex systems
NASA Astrophysics Data System (ADS)
Gupta, Shalabh
Estimation of structural damage and quantification of structural integrity are critical for safe and reliable operation of human-engineered complex systems, such as electromechanical, thermofluid, and petrochemical systems. Damage due to fatigue crack is one of the most commonly encountered sources of structural degradation in mechanical systems. Early detection of fatigue damage is essential because the resulting structural degradation could potentially cause catastrophic failures, leading to loss of expensive equipment and human life. Therefore, for reliable operation and enhanced availability, it is necessary to develop capabilities for prognosis and estimation of impending failures, such as the onset of wide-spread fatigue crack damage in mechanical structures. This dissertation presents information-based online sensing of fatigue damage using the analytical tools of symbolic time series analysis (STSA). Anomaly detection using STSA is a pattern recognition method that has been recently developed based upon a fixed-structure, fixed-order Markov chain. The analysis procedure is built upon the principles of Symbolic Dynamics, Information Theory and Statistical Pattern Recognition. The dissertation demonstrates real-time fatigue damage monitoring based on time series data of ultrasonic signals. Statistical pattern changes are measured using STSA to monitor the evolution of fatigue damage. Real-time anomaly detection is presented as a solution to the forward (analysis) problem and the inverse (synthesis) problem. (1) The forward problem: the primary objective of the forward problem is identification of the statistical changes in the time series data of ultrasonic signals due to gradual evolution of fatigue damage. (2) The inverse problem: the objective of the inverse problem is to infer the anomalies from the observed time series data in real time based on the statistical information generated during the forward problem. A computer-controlled special-purpose fatigue test apparatus, equipped with multiple sensing devices (e.g., ultrasonics and optical microscope) for damage analysis, has been used to experimentally validate the STSA method for early detection of anomalous behavior. The sensor information is integrated with a software module consisting of the STSA algorithm for real-time monitoring of fatigue damage. Experiments have been conducted under different loading conditions on specimens constructed from the ductile aluminium alloy 7075-T6. The dissertation has also investigated the application of the STSA method for early detection of anomalies in other engineering disciplines. Two primary applications include combustion instability in a generic thermal pulse combustor model and whirling phenomenon in a typical misaligned shaft.
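The core STSA idea can be sketched as follows: partition the signal range into a small symbol alphabet from nominal-condition data, estimate the symbol-probability vector for each data window, and take the distance from the nominal vector as the anomaly measure. The quantile partitioning, Euclidean distance and simulated "damage" below are simplifying assumptions; the dissertation works with a fixed-order Markov chain over the symbol sequence rather than this plain histogram.

```python
import numpy as np

def symbolize(signal, n_symbols=8, edges=None):
    """Map a real-valued signal to symbols using fixed partition edges
    (computed from the nominal signal's quantiles on first use)."""
    if edges is None:
        edges = np.quantile(signal, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(signal, edges), edges

def state_probabilities(symbols, n_symbols=8):
    counts = np.bincount(symbols, minlength=n_symbols).astype(float)
    return counts / counts.sum()

rng = np.random.default_rng(0)
t = np.linspace(0, 40 * np.pi, 4000)
nominal = np.sin(t) + 0.1 * rng.normal(size=t.size)
sym_nom, edges = symbolize(nominal)
p_nom = state_probabilities(sym_nom)

# simulated "damaged" signals: amplitude loss plus extra noise as damage grows
for damage in (0.0, 0.2, 0.5):
    sig = (1 - damage) * np.sin(t) + (0.1 + damage) * rng.normal(size=t.size)
    p = state_probabilities(symbolize(sig, edges=edges)[0])
    anomaly = np.linalg.norm(p - p_nom)      # deviation from the nominal pattern vector
    print(f"damage={damage:.1f}  anomaly measure={anomaly:.3f}")
```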
NASA Astrophysics Data System (ADS)
Son, Yurak; Kamano, Takuya; Yasuno, Takashi; Suzuki, Takayuki; Harada, Hironobu
This paper describes the generation of adaptive gait patterns for a quadruped robot under various environments using new Central Pattern Generators (CPGs) that include motor dynamic models. The CPGs act as flexible oscillators for the joints and generate the desired joint angles. The CPGs are mutually coupled, and the sets of coupling parameters are adjusted by a genetic algorithm so that the quadruped robot can realize stable and adequate gait patterns. As a result, suitable CPG networks are obtained not only for a straight walking gait pattern but also for rotation gait patterns. Experimental results demonstrate that the proposed CPG networks effectively and automatically adjust the adaptive gait patterns for the tested quadruped robot under various environments. Furthermore, target tracking control based on image processing is achieved by combining the generated gait patterns.
Nuel, Gregory; Regad, Leslie; Martin, Juliette; Camproux, Anne-Claude
2010-01-26
In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations both for low and high complexity patterns in the framework of homogeneous or heterogeneous Markov models. The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models. It also permits to avoid any product of convolution of the pattern distribution in individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in the complexity by taking advantage of previous computations to obtain moment generating functions efficiently. In the particular case of low or moderate complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy-example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists in concatenating the sequences in the data set into a single sequence. Our algorithms prove to be effective and able to handle real data sets with multiple sequences, as well as biological patterns of interest, even when the latter display a high complexity (PROSITE signatures for example). In addition, these exact algorithms allow us to avoid the edge effect observed under the single sequence approximation, which leads to erroneous results, especially when the marginal distribution of the model displays a slow convergence toward the stationary distribution. We end up with a discussion on our method and on its potential improvements.
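The quantity of interest (the count of overlapping occurrences of a motif accumulated over a set of independent sequences, rather than over their concatenation) can be sketched as below; this only illustrates the counting problem and the edge effect, not the optimal Markov chain embedding or automaton-based algorithms of the paper.

```python
import re

def count_overlapping(pattern, sequence):
    """Count overlapping occurrences of a regex motif in one sequence."""
    return len(re.findall(f"(?=({pattern}))", sequence))

def counts_in_set(pattern, sequences):
    """Per-sequence and total counts over a set of independent sequences."""
    per_seq = [count_overlapping(pattern, s) for s in sequences]
    return per_seq, sum(per_seq)

seqs = ["ATGCGATATA", "TATATATA", "GGGCCC"]        # toy sequence set
per_seq, total = counts_in_set("TATA", seqs)
print(per_seq, total)

# concatenating the set into one sequence can create spurious matches that
# straddle the junctions, which is the edge effect discussed above
print(count_overlapping("TATA", "".join(seqs)))
```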
Image recombination transform algorithm for superresolution structured illumination microscopy
Zhou, Xing; Lei, Ming; Dan, Dan; Yao, Baoli; Yang, Yanlong; Qian, Jia; Chen, Guangde; Bianco, Piero R.
2016-01-01
Structured illumination microscopy (SIM) is an attractive choice for fast superresolution imaging. Generating structured illumination patterns by interference of laser beams is broadly employed to obtain a high modulation depth, but the polarizations of the laser beams must be elaborately controlled to guarantee high-contrast interference intensity, which requires a more complex polarization control configuration. The emerging pattern projection strategy is much more compact, but the modulation depth of the patterns is deteriorated by the optical transfer function of the optical system, especially at high spatial frequencies near the diffraction limit. Therefore, the traditional superresolution reconstruction algorithm for interference-based SIM suffers from many artifacts when applied to projection-based SIM with its low modulation depth. Here, we propose an alternative reconstruction algorithm based on an image recombination transform, which addresses this problem even at a weak modulation depth. We demonstrated the effectiveness of this algorithm in multicolor superresolution imaging of bovine pulmonary arterial endothelial cells in our projection-based SIM system, which uses a computer-controlled digital micromirror device for fast fringe generation and multicolor light-emitting diodes for illumination. The system, incorporated with the proposed algorithm, allows for fluorescence imaging at low excitation intensity, even below 1 W/cm2, which is beneficial for long-term, in vivo superresolved imaging of live cells and tissues. PMID:27653935
Automated Environment Generation for Software Model Checking
NASA Technical Reports Server (NTRS)
Tkachuk, Oksana; Dwyer, Matthew B.; Pasareanu, Corina S.
2003-01-01
A key problem in model checking open systems is environment modeling (i.e., representing the behavior of the execution context of the system under analysis). Software systems are fundamentally open since their behavior is dependent on patterns of invocation of system components and values defined outside the system but referenced within the system. Whether reasoning about the behavior of whole programs or about program components, an abstract model of the environment can be essential in enabling sufficiently precise yet tractable verification. In this paper, we describe an approach to generating environments of Java program fragments. This approach integrates formally specified assumptions about environment behavior with sound abstractions of environment implementations to form a model of the environment. The approach is implemented in the Bandera Environment Generator (BEG) which we describe along with our experience using BEG to reason about properties of several non-trivial concurrent Java programs.
Role of Prefrontal and Parietal Cortices in Associative Learning
Anderson, John R.; Byrne, Dana; Fincham, Jon M.; Gunn, Pat
2017-01-01
Two studies were performed that compared a “Paired” condition in which participants studied paired associates with a “Generated” condition in which participants completed word fragments to produce paired associates. In both tasks, participants were responsible for memory of the material either studied or generated. The experiments revealed significant differences between the responses of a predefined prefrontal region and a predefined parietal region. The parietal region responded more in the Generated condition than the Paired condition, whereas there was no difference in the prefrontal region. On the other hand, the prefrontal region responded to the delay between study and test in both the Paired and Generated conditions, whereas the parietal region only responded to delay in the Generated condition. This pattern of results is consistent with the hypothesis that the parietal region is responsive to changes in problem representation and the prefrontal region to retrieval operations. An information-processing model embodying these assumptions was fit to the blood oxygen level–dependent responses in these regions. PMID:17675369
Inferring a District-Based Hierarchical Structure of Social Contacts from Census Data
Yu, Zhiwen; Liu, Jiming; Zhu, Xianjun
2015-01-01
Researchers have recently paid attention to social contact patterns among individuals due to their useful applications in such areas as epidemic evaluation and control, public health decisions, chronic disease research and social network research. Although some studies have estimated social contact patterns from social networks and surveys, few have considered how to infer the hierarchical structure of social contacts directly from census data. In this paper, we focus on inferring an individual’s social contact patterns from detailed census data, and generate various types of social contact patterns such as hierarchical-district-structure-based, cross-district and age-district-based patterns. We evaluate newly generated contact patterns derived from detailed 2011 Hong Kong census data by incorporating them into a model and simulation of the 2009 Hong Kong H1N1 epidemic. We then compare the newly generated social contact patterns with the mixing patterns that are often used in the literature, and draw the following conclusions. First, the generation of social contact patterns based on a hierarchical district structure allows for simulations at different district levels. Second, the newly generated social contact patterns reflect individuals' social contacts. Third, the newly generated social contact patterns improve the accuracy of the SEIR-based epidemic model. PMID:25679787
Tsai, Richard Tzong-Han; Sung, Cheng-Lung; Dai, Hong-Jie; Hung, Hsieh-Chuan; Sung, Ting-Yi; Hsu, Wen-Lian
2006-12-18
Biomedical named entity recognition (Bio-NER) is a challenging problem because, in general, biomedical named entities of the same category (e.g., proteins and genes) do not follow one standard nomenclature. They have many irregularities and sometimes appear in ambiguous contexts. In recent years, machine-learning (ML) approaches have become increasingly common and now represent the cutting edge of Bio-NER technology. This paper addresses three problems faced by ML-based Bio-NER systems. First, most ML approaches usually employ singleton features that comprise one linguistic property (e.g., the current word is capitalized) and at least one class tag (e.g., B-protein, the beginning of a protein name). However, such features may be insufficient in cases where multiple properties must be considered. Adding conjunction features that contain multiple properties can be beneficial, but it would be infeasible to include all conjunction features in an NER model since memory resources are limited and some features are ineffective. To resolve the problem, we use a sequential forward search algorithm to select an effective set of features. Second, variations in the numerical parts of biomedical terms (e.g., "2" in the biomedical term IL2) cause data sparseness and generate many redundant features. In this case, we apply numerical normalization, which solves the problem by replacing all numerals in a term with one representative numeral to help classify named entities. Third, the assignment of NE tags does not depend solely on the target word's closest neighbors, but may depend on words outside the context window (e.g., a context window of five consists of the current word plus two preceding and two subsequent words). We use global patterns generated by the Smith-Waterman local alignment algorithm to identify such structures and modify the results of our ML-based tagger. This is called pattern-based post-processing. To develop our ML-based Bio-NER system, we employ conditional random fields, which have performed effectively in several well-known tasks, as our underlying ML model. Adding selected conjunction features, applying numerical normalization, and employing pattern-based post-processing improve the F-scores by 1.67%, 1.04%, and 0.57%, respectively. The combined increase of 3.28% yields a total score of 72.98%, which is better than the baseline system that only uses singleton features. We demonstrate the benefits of using the sequential forward search algorithm to select effective conjunction feature groups. In addition, we show that numerical normalization can effectively reduce the number of redundant and unseen features. Furthermore, the Smith-Waterman local alignment algorithm can help ML-based Bio-NER deal with difficult cases that need longer context windows.
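The numerical-normalization step described above is easy to sketch: every run of digits in a token is replaced by one representative numeral so that variants such as IL2 and IL21 share a surface form. Collapsing a whole digit run to a single representative digit is a choice made for this sketch, not necessarily the paper's exact rule.

```python
import re

def normalize_numerals(token, representative="1"):
    """Replace every run of digits in a token with a single representative
    numeral, so IL2, IL21 and IL-33 all map to the same feature string."""
    return re.sub(r"\d+", representative, token)

for t in ["IL2", "IL21", "IL-33", "p53", "CD4+"]:
    print(t, "->", normalize_numerals(t))
```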
van Strien, Maarten J; Slager, Cornelis T J; de Vries, Bauke; Grêt-Regamey, Adrienne
2016-06-01
Many studies have assessed the effect of landscape patterns on spatial ecological processes by simulating these processes in computer-generated landscapes with varying composition and configuration. To generate such landscapes, various neutral landscape models have been developed. However, the limited set of landscape-level pattern variables included in these models is often inadequate to generate landscapes that reflect real landscapes. In order to achieve more flexibility and variability in the generated landscape patterns, a more complete set of class- and patch-level pattern variables should be implemented in these models. These enhancements have been implemented in Landscape Generator (LG), a software tool that uses optimization algorithms to generate landscapes that match user-defined target values. LG was originally developed for small-scale participatory spatial planning; we enhanced its usability and demonstrate how it can be used for larger-scale ecological studies. First, we used LG to recreate landscape patterns from a real landscape (i.e., a mountainous region in Switzerland). Second, we generated landscape series with incrementally changing pattern variables, which could be used in ecological simulation studies. We found that LG was able to recreate landscape patterns that approximate those of real landscapes. Furthermore, we successfully generated landscape series that would not have been possible with traditional neutral landscape models. LG is a promising novel approach for generating neutral landscapes and enables testing of new hypotheses regarding the influence of landscape patterns on ecological processes. LG is freely available online.
Incremental terrain processing for large digital elevation models
NASA Astrophysics Data System (ADS)
Ye, Z.
2012-12-01
Efficient analyses of large digital elevation models (DEM) require generation of additional DEM artifacts such as flow direction, flow accumulation and other DEM derivatives. When the DEMs to analyze have a large number of grid cells (usually > 1,000,000,000), the generation of these DEM derivatives is either impractical (it takes too long) or impossible (software is incapable of processing such a large number of cells). Different strategies and algorithms can be put in place to alleviate this situation. This paper describes an approach where the overall DEM is partitioned into smaller processing units that can be efficiently processed. The processed DEM derivatives for each partition can then be either mosaicked back into a single large entity or managed at the partition level. For dendritic terrain morphologies, the way in which partitions are to be derived and the order in which they are to be processed depend on the river and catchment patterns. These patterns are not available until the flow pattern of the whole region is created, which in turn cannot be established upfront due to the size issues. This paper describes a procedure that solves this problem: (1) Resample the original large DEM grid so that the total number of cells is reduced to a level for which the drainage pattern can be established. (2) Run standard terrain preprocessing operations on the resampled DEM to generate the river and catchment system. (3) Define the processing units and their processing order based on the river and catchment system created in step (2). (4) Based on the processing order, apply the analysis, i.e., the flow accumulation operation, to each of the processing units at the full-resolution DEM. (5) As each processing unit is processed in the order defined in step (3), compare the resulting drainage pattern with the drainage pattern established at the coarser scale and adjust the drainage boundaries and rivers if necessary.
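Step (1) can be sketched as a simple block-mean coarsening of the elevation grid; the block size and toy array below are illustrative, and a production workflow would use the GIS resampling and terrain preprocessing tools rather than this routine.

```python
import numpy as np

def block_mean_resample(dem, factor):
    """Coarsen a DEM by averaging non-overlapping factor x factor blocks.
    Edges that do not fill a whole block are trimmed for simplicity."""
    rows, cols = dem.shape
    rows -= rows % factor
    cols -= cols % factor
    trimmed = dem[:rows, :cols]
    return trimmed.reshape(rows // factor, factor,
                           cols // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
dem = rng.random((4000, 6000), dtype=np.float32)   # stand-in for a large elevation grid
coarse = block_mean_resample(dem, factor=100)
print(dem.shape, "->", coarse.shape)               # (4000, 6000) -> (40, 60)
```

The drainage pattern derived on the coarse grid then defines the partitions and their processing order for the full-resolution passes in steps (3)-(5).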
Flow rate and trajectory of water spray produced by an aircraft tire
NASA Technical Reports Server (NTRS)
Daugherty, Robert H.; Stubbs, Sandy M.
1986-01-01
One of the risks associated with wet runway aircraft operation is the ingestion of water spray produced by an aircraft's tires into its engines. This problem can be especially dangerous at or near rotation speed on the takeoff roll. An experimental investigation was conducted in the NASA Langley Research Center Hydrodynamics Research Facility to measure the flow rate and trajectory of water spray produced by an aircraft nose tire operating on a flooded runway. The effects of various parameters on the spray patterns including distance aft of nosewheel, speed, load, and water depth were evaluated. Variations in the spray pattern caused by the airflow about primary structure such as the fuselage and wing are discussed. A discussion of events in and near the tire footprint concerning spray generation is included.
NASA Astrophysics Data System (ADS)
Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.
2004-07-01
The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions that meet given constraints for the tumour as well as any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. In our experimental observations, including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property. The smoothness properties are compared with those from other optimization algorithms, including simulated annealing and the gradient descent method. The simultaneous property of these algorithms is ideally suited to parallel computing technologies.
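A minimal sketch of Cimmino's simultaneous projection method for a system of linear inequalities Ax <= b (each dose constraint contributing one half-space) is shown below; the toy beamlet problem, uniform weights and non-negativity clamp are illustrative assumptions, and the nonlinear DVH terms discussed above are omitted.

```python
import numpy as np

def cimmino(A, b, x0=None, weights=None, relaxation=1.0, n_iter=2000):
    """Simultaneous projection onto the half-spaces {x : a_i . x <= b_i}.
    Each iterate moves by a weighted average of its projections onto the
    currently violated half-spaces."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    m, n = A.shape
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights, float)
    x = np.zeros(n) if x0 is None else np.asarray(x0, float)
    row_norm2 = np.einsum("ij,ij->i", A, A)
    for _ in range(n_iter):
        violation = np.maximum(A @ x - b, 0.0)      # only violated constraints project
        step = (w * violation / row_norm2) @ A       # weighted sum of projection moves
        x = x - relaxation * step
        x = np.maximum(x, 0.0)                       # beamlet intensities stay non-negative
    return x

# toy problem: 3 beamlets, upper bounds on two "organ" doses, lower bound on "tumour" dose
A = np.array([[1.0, 0.5, 0.2],       #  organ 1 dose <= 1.0
              [0.3, 1.0, 0.4],       #  organ 2 dose <= 1.2
              [-0.8, -0.9, -1.0]])   # -tumour dose  <= -2.0  (i.e. tumour dose >= 2.0)
b = np.array([1.0, 1.2, -2.0])
x = cimmino(A, b)
print("intensities:", np.round(x, 3), "constraint residuals:", np.round(A @ x - b, 3))
```

Because every constraint is handled in the same pass, the per-constraint projections can be computed independently, which is the "simultaneous property" that suits parallel hardware.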
Voronoi cell patterns: Theoretical model and applications
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2011-11-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study the island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
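The empirical quantity the model describes in 1D can be generated directly, as sketched below: Voronoi cells of uniformly random points on an interval, with cell boundaries at the midpoints between neighbouring points. This reproduces the cell-size statistics to be modelled, not the fragmentation model itself.

```python
import numpy as np

def voronoi_cell_sizes_1d(n_points=100_000, length=1.0, seed=0):
    """Sizes of 1D Voronoi cells for uniformly random points on [0, length]:
    each cell is bounded by the midpoints to the neighbouring points."""
    rng = np.random.default_rng(seed)
    pts = np.sort(rng.uniform(0.0, length, n_points))
    mids = (pts[1:] + pts[:-1]) / 2.0
    bounds = np.concatenate(([0.0], mids, [length]))
    return np.diff(bounds)

sizes = voronoi_cell_sizes_1d()
norm = sizes / sizes.mean()
print("cells:", sizes.size,
      "std/mean:", round(float(norm.std()), 3),
      "fraction below half the mean size:", round(float((norm < 0.5).mean()), 3))
```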
NASA Astrophysics Data System (ADS)
Zhou, Yuping; Zhang, Qi
2018-04-01
In the information environment, digital and information processing of Li brocade patterns is an important means of revealing the Li ethnic style and inheriting the national culture. In this paper, Adobe Illustrator CS3 and the Java language are used to apply "variation" processing to Li brocade patterns and to generate "Li brocade pattern mutant genes". The generation of pattern mutant genes includes color mutation, shape mutation, addition and deletion transforms, and twist transforms, among others. The research shows that Li brocade pattern mutant genes can be generated using Adobe Illustrator CS3 and image-processing tools written in the Java language.
An investigative framework to facilitate epidemiological thinking during herd problem-solving.
More, Simon J; Doherty, Michael L; O'Grady, Luke
2017-01-01
Veterinary clinicians and students commonly use diagnostic approaches appropriate for individual cases when conducting herd problem-solving. However, these approaches can be problematic, in part because they make limited use of epidemiological principles and methods, which have clear application during the investigation of herd problems. In this paper, we provide an overview of diagnostic approaches that are used when investigating individual animal cases, and the challenges faced when these approaches are directly translated from the individual to the herd. Further, we propose an investigative framework to facilitate epidemiological thinking during herd problem-solving. A number of different approaches are used when making a diagnosis on an individual animal, including pattern recognition, hypothetico-deductive reasoning, and the key abnormality method. Methods commonly applied to individuals are often adapted for herd problem-solving: 'comparison with best practice' being a herd-level adaptation of pattern recognition, and 'differential diagnoses' a herd-level adaptation of hypothetico-deductive reasoning. These approaches can be effective; however, challenges can arise. Herds are complex: a collection of individual cows, but also additional layers relating to environment, management, feeding, etc. It is unrealistic to expect seamless translation of diagnostic approaches from the individual to the herd. Comparison with best practice is time-consuming and prioritisation of actions can be problematic, whereas differential diagnoses can lead to 'pathogen hunting', particularly in complex cases. Epidemiology is the science of understanding disease in populations. The focus is on the population, underpinned by principles and utilising methods that seek to allow us to generate solid conclusions from apparently uncontrolled situations. In this paper, we argue for the inclusion of epidemiological principles and methods as an additional tool for herd problem-solving, and outline an investigative framework, with examples, to effectively incorporate these principles and methods with other diagnostic approaches during herd problem-solving. Relevant measures of performance are identified, and measures of case frequencies are calculated and compared across time, in space and among animal groupings, to identify patterns, clues and plausible hypotheses, consistent with potential biological processes. With this knowledge, the subsequent investigation (relevant on-farm activities, diagnostic testing and other examinations) can be focused, and actions prioritised (specifically, those actions that are likely to make the greatest difference in addressing the problem if enacted). In our experience, this investigative framework is an effective teaching tool, facilitating epidemiological thinking among students during herd problem-solving. It is a generic and robust process, suited to many herd-based problems.
Root Cause Failure Analysis of Stator Winding Insulation failure on 6.2 MW hydropower generator
NASA Astrophysics Data System (ADS)
Adhi Nugroho, Agus; Widihastuti, Ida; Ary, As
2017-04-01
An insulation failure of the generator winding at the Wonogiri hydropower plant caused stator damage when a phase was short-circuited to ground. The fault forced the generator out of operation. The Wonogiri hydropower plant is one of the hydroelectric plants run by PT. Indonesia Power UBP Mrica, with a capacity of 2 × 6.2 MW. To prevent the damage from recurring on the hydropower generators, an analysis is carried out using Root Cause Failure Analysis (RCFA), a systematic approach to identify the main or basic root cause of a problem or an unwanted condition. Several aspects are considered, such as the loading pattern and operations, protection systems, generator insulation resistance, vibration, the cleanliness of the air and the ambient air. Insulation damage caused by gradual, inhomogeneous cooling at the winding surface may lead to partial discharge. Inhomogeneous cooling may arise when the cooling lattice is hampered by dust and oil deposits. To avoid the repetitive defects and unwanted conditions described above, it is necessary to perform a major maintenance overhaul every 5000-6000 hours of operation.
The effects of monitoring environment on problem-solving performance.
Laird, Brian K; Bailey, Charles D; Hester, Kim
2018-01-01
While effective and efficient solving of everyday problems is important in business domains, little is known about the effects of workplace monitoring on problem-solving performance. In a laboratory experiment, we explored the monitoring environment's effects on an individual's propensity to (1) establish pattern solutions to problems, (2) recognize when pattern solutions are no longer efficient, and (3) solve complex problems. Under three work monitoring regimes-no monitoring, human monitoring, and electronic monitoring-114 participants solved puzzles for monetary rewards. Based on research related to worker autonomy and theory of social facilitation, we hypothesized that monitored (versus non-monitored) participants would (1) have more difficulty finding a pattern solution, (2) more often fail to recognize when the pattern solution is no longer efficient, and (3) solve fewer complex problems. Our results support the first two hypotheses, but in complex problem solving, an interaction was found between self-assessed ability and the monitoring environment.
Ludwig, Kai; Speiser, Bernd
2004-01-01
We describe a modeling software component, Ecco, implemented in the C++ programming language. It assists in the formulation of physicochemical systems including, in particular, electrochemical processes within general geometries. Ecco's kinetic part translates any user-defined reaction mechanism into an object-oriented representation and generates the corresponding mathematical model equations. The input language, its grammar, the object-oriented design of Ecco based on design patterns, and its integration into the open source software project EChem++ are discussed. Application strategies are given.
Novel Driving Control of Power Assisted Wheelchair Based on Minimum Jerk Trajectory
NASA Astrophysics Data System (ADS)
Seki, Hirokazu; Sugimoto, Takeaki; Tadakuma, Susumu
This paper describes a novel trajectory control scheme for power assisted wheelchairs. Human input torque patterns in power assisted wheelchairs are inherently intermittent; therefore, suitable trajectories must also be generated after the human decreases his/her input torque. This paper addresses this significant problem using the minimum jerk model, which minimizes the rate of change of acceleration. The proposed control system based on the minimum jerk trajectory is expected to improve ride quality, stability and safety. Experiments show the effectiveness of the proposed method.
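For reference, the minimum jerk model for a rest-to-rest movement yields the classic quintic profile. The sketch below is illustrative only (it is not the authors' control scheme, and the distance and duration are made up); it generates such a reference trajectory:

import numpy as np

def minimum_jerk(x0, xf, T, n=101):
    """Rest-to-rest minimum jerk trajectory over duration T (classic quintic profile)."""
    t = np.linspace(0.0, T, n)
    s = t / T
    pos = x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    vel = (xf - x0) / T * (30 * s**2 - 60 * s**3 + 30 * s**4)
    acc = (xf - x0) / T**2 * (60 * s - 180 * s**2 + 120 * s**3)
    return t, pos, vel, acc

# Example: a smooth reference profile after the human torque input ends,
# e.g. coasting an additional 2 m over 3 s (numbers are illustrative only).
t, pos, vel, acc = minimum_jerk(0.0, 2.0, 3.0)
print("peak velocity of the smooth profile:", vel.max())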
Application of ERTS-1 imagery in the Vermont-New York dispute over pollution of Lake Champlain
NASA Technical Reports Server (NTRS)
Lind, A. O. (Principal Investigator)
1973-01-01
The author has identified the following significant results. ERTS-1 imagery and a composite map derived from ERTS-1 imagery were presented as evidence in a U.S. Supreme Court case involving the pollution of an interstate water body (Lake Champlain). A pollution problem generated by a large paper mill forms the basis of the suit (Vermont vs. International Paper Co. and State of New York) and ERTS-1 imagery shows the effluent pattern on the lake surface as extending into Vermont during three different times.
Recognition vs Reverse Engineering in Boolean Concepts Learning
ERIC Educational Resources Information Center
Shafat, Gabriel; Levin, Ilya
2012-01-01
This paper deals with two types of logical problems--recognition problems and reverse engineering problems, and with the interrelations between these types of problems. The recognition problems are modeled in the form of a visual representation of various objects in a common pattern, with a composition of represented objects in the pattern.…
Infrared stereo calibration for unmanned ground vehicle navigation
NASA Astrophysics Data System (ADS)
Harguess, Josh; Strange, Shawn
2014-06-01
The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
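As a rough illustration of the standard stereo-calibration baseline such work builds on, a chessboard-based OpenCV calibration can be sketched as below; the board pattern, square size and the assumption that corners are detectable in the IR images are ours, not the paper's:

import cv2
import numpy as np

# Minimal OpenCV stereo-calibration sketch, not the authors' exact pipeline.
# Assumes paired left/right grayscale IR images of a chessboard whose corners have
# enough thermal contrast to be detected; pattern size and square size are assumptions.
def calibrate_ir_stereo(left_imgs, right_imgs, pattern=(9, 6), square=0.025):
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, left_pts, right_pts = [], [], []
    for left, right in zip(left_imgs, right_imgs):
        ok_l, corners_l = cv2.findChessboardCorners(left, pattern)
        ok_r, corners_r = cv2.findChessboardCorners(right, pattern)
        if ok_l and ok_r:                      # keep only frames where both cameras see the board
            obj_pts.append(objp)
            left_pts.append(corners_l)
            right_pts.append(corners_r)
    size = left_imgs[0].shape[::-1]
    _, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    rms, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, D1, K2, D2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)         # rms is the reprojection error used for evaluation
    return rms, K1, D1, K2, D2, R, T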
Patterns and predictors of growth in divorced fathers' health status and substance use.
DeGarmo, David S; Reid, John B; Leve, Leslie D; Chamberlain, Patricia; Knutson, John F
2010-03-01
Health status and substance use trajectories are described over 18 months for a county sample of 230 divorced fathers of young children aged 4 to 11. One third of the sample was clinically depressed. Health problems, drinking, and hard drug use were stable over time for the sample, whereas depression, smoking, and marijuana use exhibited overall mean reductions. Variance components revealed significant individual differences in average levels and trajectories for health and substance use outcomes. Controlling for fathers' antisociality, negative life events, and social support, fathering identity predicted reductions in health-related problems and marijuana use. Father involvement reduced drinking and marijuana use. Antisociality was the strongest risk factor for health and substance use outcomes. Implications for application of a generative fathering perspective in practice and preventive interventions are discussed.
Leveraging Pattern Semantics for Extracting Entities in Enterprises
Tao, Fangbo; Zhao, Bo; Fuxman, Ariel; Li, Yang; Han, Jiawei
2015-01-01
Entity Extraction is a process of identifying meaningful entities from text documents. In enterprises, extracting entities improves enterprise efficiency by facilitating numerous applications, including search, recommendation, etc. However, the problem is particularly challenging on enterprise domains due to several reasons. First, the lack of redundancy of enterprise entities makes previous web-based systems like NELL and OpenIE not effective, since using only high-precision/low-recall patterns like those systems would miss the majority of sparse enterprise entities, while using more low-precision patterns in sparse setting also introduces noise drastically. Second, semantic drift is common in enterprises (“Blue” refers to “Windows Blue”), such that public signals from the web cannot be directly applied on entities. Moreover, many internal entities never appear on the web. Sparse internal signals are the only source for discovering them. To address these challenges, we propose an end-to-end framework for extracting entities in enterprises, taking the input of enterprise corpus and limited seeds to generate a high-quality entity collection as output. We introduce the novel concept of Semantic Pattern Graph to leverage public signals to understand the underlying semantics of lexical patterns, reinforce pattern evaluation using mined semantics, and yield more accurate and complete entities. Experiments on Microsoft enterprise data show the effectiveness of our approach. PMID:26705540
Generation 1.5 Written Error Patterns: A Comparative Study
ERIC Educational Resources Information Center
Doolan, Stephen M.; Miller, Donald
2012-01-01
In an attempt to contribute to existing research on Generation 1.5 students, the current study uses quantitative and qualitative methods to compare error patterns in a corpus of Generation 1.5, L1, and L2 community college student writing. This error analysis provides one important way to determine if error patterns in Generation 1.5 student…
SPIRAL PATTERNS IN PLANETESIMAL CIRCUMBINARY DISKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demidova, Tatiana V.; Shevchenko, Ivan I., E-mail: iis@gao.spb.ru
Planet formation scenarios and the observed planetary dynamics in binaries pose a number of theoretical challenges, especially concerning circumbinary planetary systems. We explore the dynamical stirring of a planetesimal circumbinary disk in the epoch when the gas component disappears. For this purpose, following theoretical approaches by Heppenheimer and Moriwaki and Nakagawa, we develop a secular theory of the dynamics of planetesimals in circumbinary disks. If a binary is eccentric and its components have unequal masses, a spiral density wave is generated, engulfing the disk on a secular timescale, which may exceed 10^7 yr, depending on the problem parameters. The spiral pattern is transient; thus, its observed presence may betray a system’s young age. We explore the pattern both analytically and in numerical experiments. The derived analytical spiral is a modified lituus; it matches the numerical density wave in the gas-free case perfectly. Using the smoothed particle hydrodynamics scheme, we explore the effect of residual gas on the wave propagation.
Rossi-Pool, Román; Salinas, Emilio; Zainos, Antonio; Alvarez, Manuel; Vergara, José; Parga, Néstor; Romo, Ranulfo
2016-01-01
The problem of neural coding in perceptual decision making revolves around two fundamental questions: (i) How are the neural representations of sensory stimuli related to perception, and (ii) what attributes of these neural responses are relevant for downstream networks, and how do they influence decision making? We studied these two questions by recording neurons in primary somatosensory (S1) and dorsal premotor (DPC) cortex while trained monkeys reported whether the temporal pattern structure of two sequential vibrotactile stimuli (of equal mean frequency) was the same or different. We found that S1 neurons coded the temporal patterns in a literal way and only during the stimulation periods and did not reflect the monkeys’ decisions. In contrast, DPC neurons coded the stimulus patterns as broader categories and signaled them during the working memory, comparison, and decision periods. These results show that the initial sensory representation is transformed into an intermediate, more abstract categorical code that combines past and present information to ultimately generate a perceptually informed choice. PMID:27872293
Efficient discovery of risk patterns in medical data.
Li, Jiuyong; Fu, Ada Wai-chee; Fahey, Paul
2009-01-01
This paper studies a problem of efficiently discovering risk patterns in medical data. Risk patterns are defined by a statistical metric, relative risk, which has been widely used in epidemiological research. To avoid fruitless search in the complete exploration of risk patterns, we define the optimal risk pattern set to exclude superfluous patterns, i.e. complicated patterns with lower relative risk than their corresponding simpler form patterns. We prove that mining optimal risk pattern sets conforms to an anti-monotone property that supports an efficient mining algorithm. We propose an efficient algorithm for mining optimal risk pattern sets based on this property. We also propose a hierarchical structure to present discovered patterns for easy perusal by domain experts. The proposed approach is compared with two well-known rule discovery methods, decision tree and association rule mining approaches, on benchmark data sets and applied to a real world application. The proposed method discovers more and better quality risk patterns than a decision tree approach. The decision tree method is not designed for such applications and is inadequate for pattern exploring. The proposed method does not discover a large number of uninteresting superfluous patterns as an association mining approach does. The proposed method is more efficient than an association rule mining method. A real world case study shows that the method reveals some interesting risk patterns to medical practitioners. The proposed method is an efficient approach to explore risk patterns. It quickly identifies cohorts of patients that are vulnerable to a risk outcome from a large data set. The proposed method is useful for exploratory study on large medical data to generate and refine hypotheses. The method is also useful for designing medical surveillance systems.
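A minimal sketch of the underlying idea, relative-risk scoring of candidate patterns with simple support-based pruning, is given below; it is not the paper's optimal risk pattern set algorithm, and the record fields, thresholds and toy data are assumptions:

from itertools import combinations

# Illustrative relative-risk pattern scoring with support-based pruning; not the
# paper's algorithm. Records are dicts of attribute values plus a 0/1 outcome.
def relative_risk(records, pattern, outcome="abnormal"):
    exposed = [r for r in records if all(r.get(a) == v for a, v in pattern)]
    unexposed = [r for r in records if r not in exposed]
    if not exposed or not unexposed:
        return 0.0
    p1 = sum(r[outcome] for r in exposed) / len(exposed)
    p0 = sum(r[outcome] for r in unexposed) / len(unexposed)
    return float("inf") if p0 == 0 else p1 / p0

def mine_risk_patterns(records, items, min_support=5, min_rr=2.0, max_len=2):
    found = []
    for k in range(1, max_len + 1):
        for pattern in combinations(items, k):
            support = sum(all(r.get(a) == v for a, v in pattern) for r in records)
            if support < min_support:      # anti-monotone: supersets cannot recover lost support
                continue
            rr = relative_risk(records, pattern)
            if rr >= min_rr:
                found.append((pattern, support, rr))
    return sorted(found, key=lambda x: -x[2])

records = [{"smoker": s, "age_band": a, "abnormal": o}
           for s, a, o in [("yes", "60+", 1), ("yes", "60+", 1), ("no", "60+", 1),
                           ("no", "<60", 0), ("yes", "<60", 0), ("no", "<60", 0)] * 3]
items = [("smoker", "yes"), ("smoker", "no"), ("age_band", "60+"), ("age_band", "<60")]
print(mine_risk_patterns(records, items))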
Mala, Sankeerti; Rathod, Vanita; Pundir, Siddharth; Dixit, Sudhanshu
2017-01-01
The unique pattern and structural diversity of fingerprints, lip prints, and palatal rugae, and their occurrence in different patterns among individuals, raise the question of whether they are completely unique even within a family hierarchy. Do the patterns show any repetition across generations? Or is this mere chaos theory? The present study aims to assess the self-repetition of fingerprint, lip print, and palatal rugae patterns among three generations of ten different families. The study was conducted at Rungta College of Dental Science and Research, Bhilai, India. Only participants born in Chhattisgarh were included in the study. Thirty participants from three consecutive generations of ten different families were briefed about the purpose of the study, and their fingerprints, lip prints, and palatal rugae impressions were recorded and analyzed for pattern self-repetition. Multiple comparisons among the generations and a one-way analysis of variance test were performed using SPSS 20 trial version. Among the patterns of primary palatal rugae, 10% showed repetition in all three generations. Thirty percent showed repetition of the thumb fingerprint pattern in all three generations. For the pattern of lip prints in the middle third of the lower lip, 20% showed repetition in alternate generations. The evaluations of fingerprints, lip prints, and palatal rugae showed fractal dimensions, with variations in dimension according to the complexity of each structure. Even though a minute self-repetition in the patterns of lip, thumb, and palate among the three consecutive generations of a family was observed, considering the sample size these results need to be confirmed in a larger sample, either to establish the role of chaos theory in forensic science or to identify a particular pattern of an individual in his family hierarchy.
Miyawaki, Christina E
2016-03-01
This study is a cross-sectional investigation of caregiving practice patterns among Asian, Hispanic and non-Hispanic White American family caregivers of older adults across three immigrant generations. The 2009 California Health Interview Survey (CHIS) dataset was used, and 591 Asian, 989 Hispanic and 6537 non-Hispanic White American caregivers of older adults were selected. First, descriptive analyses of caregivers' characteristics, caregiving situations and practice patterns were examined by racial/ethnic groups and immigrant generations. Practice patterns measured were respite care use, hours and length of caregiving. Three hypotheses on caregiving patterns based on assimilation theory were tested and analyzed using logistic regression and generalized linear models by racial/ethnic groups and generations. Caregiving patterns of non-Hispanic White caregivers supported all three hypotheses regarding respite care use, caregiving hours and caregiving duration, showing less caregiving involvement in later generations. However, Asian and Hispanic counterparts showed mixed results. Third generation Asian and Hispanic caregivers used respite care the least and spent the most caregiving hours per week and had the longest caregiving duration compared to earlier generations. These caregiving patterns revealed underlying cultural values related to filial responsibility, even among later generations of caregivers of color. Findings suggest the importance of considering the cultural values of each racial/ethnic group regardless of generation when working with racially and ethnically diverse populations of family caregivers of older adults.
ERIC Educational Resources Information Center
Kapur, Manu
2018-01-01
The goal of this paper is to isolate the preparatory effects of problem-generation from solution generation in problem-posing contexts, and their underlying mechanisms on learning from instruction. Using a randomized-controlled design, students were assigned to one of two conditions: (a) problem-posing with solution generation, where they…
Resonant Mode-hopping Micromixing
Jang, Ling-Sheng; Chao, Shih-Hui; Holl, Mark R.; Meldrum, Deirdre R.
2009-01-01
A common micromixer design strategy is to generate interleaved flow topologies to enhance diffusion. However, problems with these designs include complicated structures and dead volumes within the flow fields. We present an active micromixer using a resonating piezoceramic/silicon composite diaphragm to generate acoustic streaming flow topologies. Circulation patterns are observed experimentally and correlate to the resonant mode shapes of the diaphragm. The dead volumes in the flow field are eliminated by rapidly switching from one discrete resonant mode to another (i.e., resonant mode-hop). Mixer performance is characterized by mixing buffer with a fluorescence tracer containing fluorescein. Movies of the mixing process are analyzed by converting fluorescent images to two-dimensional fluorescein concentration distributions. The results demonstrate that mode-hopping operation rapidly homogenized chamber contents, circumventing diffusion-isolated zones. PMID:19551159
Amornchaicharoensuk, Yupaporn
2016-09-01
Medical records of children less than 15 years of age admitted to hospital for urinary tract infection (UTI) from January 2010 to December 2014 were reviewed. Among 100 children (59% males and 41% females) with upper UTI, the most common pathogen (88%) was Escherichia coli, of which 69% were non-extended-spectrum beta-lactamase (non-ESBL) producers and 19% were ESBL producers. Resistance to ampicillin and trimethoprim/sulfamethoxazole was 90% and 60%, respectively. All ESBL-producing E. coli were resistant to ampicillin and third generation cephalosporins (cefotaxime and ceftriaxone), while 87% and 1.5% of non-ESBL-producing E. coli were resistant to ampicillin and the two third generation cephalosporins, respectively. These data highlight the high prevalence of ESBL-producing E. coli in pediatric UTI and the potential problem in treating such infections.
Scalable approximate policies for Markov decision process models of hospital elective admissions.
Zhu, George; Lizotte, Dan; Hoey, Jesse
2014-05-01
The objective is to demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real-time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that, given an initial start state, generate an action on demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate solutions that are near-optimal in about 100 s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
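The on-demand, sample-based planning idea can be illustrated with a minimal Monte Carlo rollout planner over a toy generative admissions model; the capacities, arrival rates and reward weights below are invented for illustration and are not the paper's model or planner:

import random

# Toy generative model: one resource, state = occupied beds, action = electives accepted today.
CAPACITY, ACTIONS, HORIZON = 20, range(0, 5), 14

def step(beds, admits, rng):
    discharges = sum(rng.random() < 0.2 for _ in range(beds))   # each bed empties w.p. 0.2
    emergencies = rng.randint(0, 2)                             # random emergency arrivals
    demand = beds - discharges + admits + emergencies
    overflow = max(0, demand - CAPACITY)
    reward = admits - 5 * overflow                              # reward served electives, punish overflow
    return min(demand, CAPACITY), reward

def rollout_value(beds, first_action, rng, depth=HORIZON):
    total, action = 0.0, first_action
    for _ in range(depth):
        beds, r = step(beds, action, rng)
        total += r
        action = rng.choice(ACTIONS)        # random default policy after the first step
    return total

def plan(beds, n_rollouts=200, seed=0):
    rng = random.Random(seed)
    scores = {a: sum(rollout_value(beds, a, rng) for _ in range(n_rollouts)) / n_rollouts
              for a in ACTIONS}
    return max(scores, key=scores.get)      # action chosen on demand for the current start state

print("admit", plan(beds=15), "electives today")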
Choosing experiments to accelerate collective discovery
Rzhetsky, Andrey; Foster, Jacob G.; Foster, Ian T.
2015-01-01
A scientist’s choice of research problem affects his or her personal career trajectory. Scientists’ combined choices affect the direction and efficiency of scientific discovery as a whole. In this paper, we infer preferences that shape problem selection from patterns of published findings and then quantify their efficiency. We represent research problems as links between scientific entities in a knowledge network. We then build a generative model of discovery informed by qualitative research on scientific problem selection. We map salient features from this literature to key network properties: an entity’s importance corresponds to its degree centrality, and a problem’s difficulty corresponds to the network distance it spans. Drawing on millions of papers and patents published over 30 years, we use this model to infer the typical research strategy used to explore chemical relationships in biomedicine. This strategy generates conservative research choices focused on building up knowledge around important molecules. These choices become more conservative over time. The observed strategy is efficient for initial exploration of the network and supports scientific careers that require steady output, but is inefficient for science as a whole. Through supercomputer experiments on a sample of the network, we study thousands of alternatives and identify strategies much more efficient at exploring mature knowledge networks. We find that increased risk-taking and the publication of experimental failures would substantially improve the speed of discovery. We consider institutional shifts in grant making, evaluation, and publication that would help realize these efficiencies. PMID:26554009
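The paper's mapping of importance to degree centrality and difficulty to network distance can be sketched as a simple edge-scoring rule; the toy graph, entity names and weighting below are assumptions, not the published model:

import networkx as nx

# Score candidate research "problems" (links between entities) so that links between
# already-important, nearby entities score highest, i.e. the conservative strategy.
G = nx.Graph([("aspirin", "COX-1"), ("COX-1", "inflammation"),
              ("inflammation", "TNF-alpha"), ("TNF-alpha", "etanercept"),
              ("aspirin", "platelets")])
centrality = nx.degree_centrality(G)

def conservative_score(u, v, alpha=1.0, beta=1.0):
    try:
        distance = nx.shortest_path_length(G, u, v)   # "difficulty" of the candidate problem
    except nx.NetworkXNoPath:
        distance = float("inf")
    return alpha * (centrality[u] + centrality[v]) - beta * distance

candidates = [("aspirin", "inflammation"), ("platelets", "etanercept")]
for u, v in sorted(candidates, key=lambda e: -conservative_score(*e)):
    print(u, "--", v, round(conservative_score(u, v), 2))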
Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro
2013-01-01
Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature. PMID:23724108
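A much-simplified roads-and-fields generator in the spirit of G-RaFFe, but not the published model (grid size, road counts and field sizes are arbitrary assumptions), can be sketched as follows:

import numpy as np

# Carve straight roads across a forested grid, then place rectangular fields that
# touch a road until a target fraction of habitat has been converted.
def generate_landscape(size=100, n_roads=4, max_field=15, target_converted=0.4, seed=1):
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=np.int8)          # 0 = forest, 1 = road, 2 = field
    for _ in range(n_roads):                              # horizontal or vertical roads
        if rng.random() < 0.5:
            grid[rng.integers(size), :] = 1
        else:
            grid[:, rng.integers(size)] = 1
    attempts = 0
    while (grid > 0).mean() < target_converted and attempts < 20000:
        attempts += 1
        r, c = rng.integers(size), rng.integers(size)
        if grid[r, c] != 1:                               # fields must be seeded on a road (accessibility)
            continue
        h, w = rng.integers(2, max_field, size=2)
        patch = grid[r:r + h, c:c + w]
        patch[patch == 0] = 2                             # convert only remaining forest cells
    return grid

grid = generate_landscape()
print("converted fraction:", round(float((grid > 0).mean()), 2))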
Maskless micro-ion-beam reduction lithography system
Leung, Ka-Ngo; Barletta, William A.; Patterson, David O.; Gough, Richard A.
2005-05-03
A maskless micro-ion-beam reduction lithography system is a system for projecting patterns onto a resist layer on a wafer with feature size down to below 100 nm. The MMRL system operates without a stencil mask. The patterns are generated by switching beamlets on and off from a two electrode blanking system or pattern generator. The pattern generator controllably extracts the beamlet pattern from an ion source and is followed by a beam reduction and acceleration column.
Sparse Substring Pattern Set Discovery Using Linear Programming Boosting
NASA Astrophysics Data System (ADS)
Kashihara, Kazuaki; Hatano, Kohei; Bannai, Hideo; Takeda, Masayuki
In this paper, we consider finding a small set of substring patterns which classifies the given documents well. We formulate the problem as a 1-norm soft margin optimization problem in which each dimension corresponds to a substring pattern. We then solve this problem using LPBoost and an optimal substring discovery algorithm. Since the problem is a linear program, the resulting solution is likely to be sparse, which is useful for feature selection. We evaluate the proposed method on real data such as movie reviews.
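As a rough stand-in for the LPBoost formulation, the sketch below enumerates candidate substrings and lets an L1-penalised linear classifier select a sparse subset of them; the toy documents, labels and length limits are assumptions:

from itertools import chain
import numpy as np
from sklearn.linear_model import LogisticRegression

# L1-penalised substring selection as a stand-in for LPBoost + optimal substring discovery.
docs = ["great movie, loved it", "loved the acting", "terrible movie", "boring and terrible"]
labels = np.array([1, 1, 0, 0])

def substrings(text, min_len=3, max_len=7):
    return {text[i:i + n] for n in range(min_len, max_len + 1)
            for i in range(len(text) - n + 1)}

vocab = sorted(set(chain.from_iterable(substrings(d) for d in docs)))
X = np.array([[1.0 if p in d else 0.0 for p in vocab] for d in docs])   # substring-presence features

clf = LogisticRegression(penalty="l1", C=0.5, solver="liblinear").fit(X, labels)
selected = [p for p, w in zip(vocab, clf.coef_[0]) if abs(w) > 1e-6]
print("sparse substring pattern set:", selected)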
Design space exploration for early identification of yield limiting patterns
NASA Astrophysics Data System (ADS)
Li, Helen; Zou, Elain; Lee, Robben; Hong, Sid; Liu, Square; Wang, JinYan; Du, Chunshan; Zhang, Recco; Madkour, Kareem; Ali, Hussein; Hsu, Danny; Kabeel, Aliaa; ElManhawy, Wael; Kwan, Joe
2016-03-01
In order to resolve the causality dilemma of which comes first, accurate design rules or real designs, this paper presents a flow for exploring the layout design space to identify, early on, problematic patterns that will negatively affect yield. A new random layout generation method called the Layout Schema Generator (LSG) is reported in this paper; this method generates realistic, design-like layouts without any design rule violations. Lithography simulation is then applied to the generated layouts to discover potentially problematic patterns (hotspots). These hotspot patterns are further explored by randomly inducing feature and context variations in the identified hotspots through a flow called the Hotspot Variation Flow (HSV). Simulation is then performed on this expanded set of layout clips to identify further problematic patterns. These patterns are then classified into design forbidden patterns that should be included in the design rule checker and legal patterns that need better handling in the RET recipes and processes.
An Ensemble of Neural Networks for Stock Trading Decision Making
NASA Astrophysics Data System (ADS)
Chang, Pei-Chann; Liu, Chen-Hao; Fan, Chin-Yuan; Lin, Jun-Lin; Lai, Chih-Ming
Detection of stock turning signals is a very interesting subject arising in numerous financial and economic planning problems. In this paper, an Ensemble Neural Network system with Intelligent Piecewise Linear Representation for stock turning point detection is presented. The intelligent piecewise linear representation method is able to generate numerous stock turning signals from the historical data base; the Ensemble Neural Network system is then applied to train on these patterns and retrieve similar stock price patterns from the historical data. These turning signals represent short-term and long-term trading signals for selling or buying stocks in the market, and are applied to forecast future turning points from the set of test data. Experimental results demonstrate that the hybrid system can make a significant and consistent amount of profit when compared with other approaches using stock data available in the market.
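A simplified piecewise linear representation step, which extracts candidate turning points by recursive top-down segmentation, can be sketched as below; it is a stand-in for the paper's intelligent PLR, the ensemble-network stage is omitted, and the price series is invented:

import numpy as np

def segment(prices, lo, hi, tol, breaks):
    # Fit a straight line between the segment end points and split at the worst-fit sample.
    x = np.arange(lo, hi + 1)
    line = np.interp(x, [lo, hi], [prices[lo], prices[hi]])
    errors = np.abs(prices[lo:hi + 1] - line)
    split = lo + int(errors.argmax())
    if errors.max() > tol and lo < split < hi:
        segment(prices, lo, split, tol, breaks)
        breaks.add(split)
        segment(prices, split, hi, tol, breaks)

def turning_points(prices, tol=1.0):
    breaks = set()
    segment(np.asarray(prices, dtype=float), 0, len(prices) - 1, tol, breaks)
    return sorted(breaks)              # indices where the trend changes direction

prices = [10, 11, 13, 16, 15, 12, 9, 10, 12, 15, 14]
print(turning_points(prices))          # candidate buy/sell signal positions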
Unsupervised learning of natural languages
Solan, Zach; Horn, David; Ruppin, Eytan; Edelman, Shimon
2005-01-01
We address the problem, fundamental to linguistics, bioinformatics, and certain other disciplines, of using corpora of raw symbolic sequential data to infer underlying rules that govern their production. Given a corpus of strings (such as text, transcribed speech, chromosome or protein sequence data, sheet music, etc.), our unsupervised algorithm recursively distills from it hierarchically structured patterns. The adios (automatic distillation of structure) algorithm relies on a statistical method for pattern extraction and on structured generalization, two processes that have been implicated in language acquisition. It has been evaluated on artificial context-free grammars with thousands of rules, on natural languages as diverse as English and Chinese, and on protein data correlating sequence with function. This unsupervised algorithm is capable of learning complex syntax, generating grammatical novel sentences, and proving useful in other fields that call for structure discovery from raw data, such as bioinformatics. PMID:16087885
Elliott, Luther; Ream, Geoffrey; McGinsky, Elizabeth; Dunlap, Eloise
2012-12-01
AIMS: To assess the contribution of patterns of video game play, including game genre, involvement, and time spent gaming, to problem use symptomatology. DESIGN: Nationally representative survey. SETTING: Online. PARTICIPANTS: Large sample (n=3,380) of adult video gamers in the US. MEASUREMENTS: Problem video game play (PVGP) scale, video game genre typology, use patterns (gaming days in the past month and hours on days used), enjoyment, consumer involvement, and background variables. FINDINGS: Study confirms game genre's contribution to problem use as well as demographic variation in play patterns that underlie problem video game play vulnerability. CONCLUSIONS: Identification of a small group of game types positively correlated with problem use suggests new directions for research into the specific design elements and reward mechanics of "addictive" video games. Unique vulnerabilities to problem use among certain groups demonstrate the need for ongoing investigation of health disparities related to contextual dimensions of video game play.
Chang, Ling-Yin; Chang, Hsing-Yi; Lin, Linen Nymphas; Wu, Chi-Chen; Yen, Lee-Lan
2018-01-01
Adolescence is a developmental period with high vulnerability to sleep problems. However, research identifying distinct patterns and underlying determinants of sleep problems is scarce. This study investigated discrete subgroups of, changes in, and stability of sleep problems. We also examined whether peer victimization influenced sleep problem subgroups and transitions in patterns of sleep problems from late adolescence to young adulthood. Sex differences in the effects of peer victimization were also explored. In total, 1,455 male and 1,399 female adolescents from northern Taiwan participated in this longitudinal study. Latent transition analysis was used to examine changes in patterns of sleep problems and the effects of peer victimization on these changes. We identified three subgroups of sleep problems in males and two in females, and found that there was a certain level of instability in patterns of sleep problems during the study period. For both sexes, those with greater increases in peer victimization over time were more likely to change from being a good sleeper to a poor sleeper. The effect of peer victimization on baseline status of sleep problems, however, was only significant for males, with those exposed to higher levels of peer victimization more likely to be poor sleepers at baseline. Our findings reveal an important role of peer victimization in predicting transitions in patterns of sleep problems. Intervention programs aimed at decreasing peer victimization may help reduce the development and escalation of sleep problems among adolescents, especially in males. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Desnijder, Karel; Hanselaer, Peter; Meuret, Youri
2016-04-01
A key requirement for obtaining uniform luminance in a side-lit LED backlight is an optimised spatial pattern of the structures on the light guide that extract the light. The generation of such a scatter pattern is usually performed with an iterative approach. In each iteration, the luminance distribution of the backlight with a particular scatter pattern is analysed. This is typically performed with a brute-force ray-tracing algorithm, although this approach results in a time-consuming optimisation process. In this study, the Adding-Doubling method is explored as an alternative way of evaluating the luminance of a backlight. Due to the similarities between light propagating in a backlight with extraction structures and light scattering in a cloud of scatterers, the Adding-Doubling method, which is used to model the latter, can also be used to model the light distribution in a backlight. The backlight problem is translated into a form to which the Adding-Doubling method is directly applicable. The luminance calculated with the Adding-Doubling method for a simple uniform extraction pattern matches the luminance generated by a commercial ray tracer very well. Although successful, no clear computational advantage over ray tracers is realised. However, the description of light propagation in a light guide used in the Adding-Doubling method also allows the efficiency of brute-force ray-tracing algorithms to be enhanced. The performance of this enhanced ray-tracing approach for the simulation of backlights is also evaluated against a typical brute-force ray-tracing approach.
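The core doubling step can be illustrated in scalar (single-stream) form: two identical layers with reflectance r and transmittance t combine through the geometric series of inter-layer reflections. This is only a sketch of why the method reaches a thick slab in logarithmically many steps, not the authors' backlight formulation, and the starting values are arbitrary:

def double_layer(r, t):
    multiple = 1.0 / (1.0 - r * r)        # sum of back-and-forth reflections between the layers
    r2 = r + t * r * t * multiple
    t2 = t * t * multiple
    return r2, t2

# Start from a thin layer and double it 10 times (2**10 elementary layers).
r, t = 0.01, 0.98
for _ in range(10):
    r, t = double_layer(r, t)
print("R:", round(r, 3), "T:", round(t, 3), "absorbed:", round(1 - r - t, 3))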
A random utility based estimation framework for the household activity pattern problem.
DOT National Transportation Integrated Search
2016-06-01
This paper develops a random utility based estimation framework for the Household Activity : Pattern Problem (HAPP). Based on the realization that output of complex activity-travel decisions : form a continuous pattern in space-time dimension, the es...
Run-time scheduling and execution of loops on message passing machines
NASA Technical Reports Server (NTRS)
Crowley, Kay; Saltz, Joel; Mirchandaney, Ravi; Berryman, Harry
1989-01-01
Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.
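The inspector/executor idea described above can be sketched with a small Python simulation of an irregular loop distributed over two "processors"; the block distribution, loop body and data are illustrative assumptions:

# Simulation of the inspector/executor scheme for an irregular loop
# y[i] += x[col[i]] distributed by blocks over P "processors" (illustrative only).
P, N = 2, 8
col = [3, 7, 0, 1, 6, 2, 5, 4]                 # irregular access pattern encoded in the data
block = N // P
owner = lambda g: g // block                   # block distribution of x and y

def inspector(my_rank):
    """Scan the loop's index pattern once; record which off-processor x entries are needed."""
    my_rows = range(my_rank * block, (my_rank + 1) * block)
    needed = sorted({col[i] for i in my_rows if owner(col[i]) != my_rank})
    return list(my_rows), needed               # the reusable communication schedule

def executor(x, y, schedules):
    """Gather remote values per the precomputed schedule, then run the local loop."""
    for rank, (rows, needed) in enumerate(schedules):
        ghost = {g: x[g] for g in needed}      # stands in for the message exchange
        for i in rows:
            g = col[i]
            y[i] += x[g] if owner(g) == rank else ghost[g]

x = [float(i) for i in range(N)]
y = [0.0] * N
schedules = [inspector(r) for r in range(P)]   # inspector runs once
executor(x, y, schedules)                      # executor template is reused every time step
print(y)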
Feng, Xiao; Peng, Li; Chang-Quan, Long; Yi, Lei; Hong, Li
2014-09-01
Most previous studies investigating relational reasoning have used visuo-spatial materials. This fMRI study aimed to determine how relational complexity affects brain activity during inductive reasoning, using numerical materials. Three numerical relational levels of the number series completion task were adopted for use: 0-relational (e.g., "23 23 23"), 1-relational ("32 30 28") and 2-relational ("12 13 15") problems. The fMRI results revealed that the bilateral dorsolateral prefrontal cortex (DLPFC) showed enhanced activity associated with relational complexity. Bilateral inferior parietal lobule (IPL) activity was greater during the 1- and 2-relational level problems than during the 0-relational level problems. In addition, the left fronto-polar cortex (FPC) showed selective activity during the 2-relational level problems. The bilateral DLPFC may be involved in the process of hypothesis generation, whereas the bilateral IPL may be sensitive to calculation demands. Moreover, the sensitivity of the left FPC to the multiple relational problems may be related to the integration of numerical relations. The present study extends our knowledge of the prefrontal activity pattern underlying numerical relational processing. Copyright © 2014 Elsevier B.V. All rights reserved.
Reconstruction of network topology using status-time-series data
NASA Astrophysics Data System (ADS)
Pandey, Pradumn Kumar; Badarla, Venkataramana
2018-01-01
Uncovering the heterogeneous connection pattern of a networked system from the available status-time-series (STS) data of a dynamical process on the network is of great interest in network science and known as a reverse engineering problem. Dynamical processes on a network are affected by the structure of the network. The dependency between the diffusion dynamics and structure of the network can be utilized to retrieve the connection pattern from the diffusion data. Information of the network structure can help to devise the control of dynamics on the network. In this paper, we consider the problem of network reconstruction from the available status-time-series (STS) data using matrix analysis. The proposed method of network reconstruction from the STS data is tested successfully under susceptible-infected-susceptible (SIS) diffusion dynamics on real-world and computer-generated benchmark networks. High accuracy and efficiency of the proposed reconstruction procedure from the status-time-series data define the novelty of the method. Our proposed method outperforms compressed sensing theory (CST) based method of network reconstruction using STS data. Further, the same procedure of network reconstruction is applied to the weighted networks. The ordering of the edges in the weighted networks is identified with high accuracy.
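As a simplified illustration (not the paper's matrix-analysis estimator), one can simulate SIS dynamics on a known graph and score candidate edges by how often a node's new infection is preceded by the other node being infected; the network size, rates and threshold below are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
N, T, beta, mu = 12, 4000, 0.25, 0.3
A = (rng.random((N, N)) < 0.25).astype(int)
A = np.triu(A, 1); A = A + A.T                          # random undirected ground-truth network

status = np.zeros((T, N), dtype=int)                    # the status-time-series (STS) data
status[0, rng.integers(N)] = 1
for t in range(1, T):
    infected = status[t - 1]
    p_inf = 1 - (1 - beta) ** (A @ infected)            # infection pressure from infected neighbours
    new = (rng.random(N) < p_inf) & (infected == 0)
    recover = (rng.random(N) < mu) & (infected == 1)
    status[t] = np.where(new, 1, np.where(recover, 0, infected))
    if status[t].sum() == 0:                            # reseed if the epidemic dies out
        status[t, rng.integers(N)] = 1

score = np.zeros((N, N))
for t in range(1, T):
    newly = (status[t] == 1) & (status[t - 1] == 0)
    score += np.outer(newly, status[t - 1])             # i newly infected while j was infected
score = score + score.T
guess = (score > np.percentile(score[score > 0], 70)).astype(int)
np.fill_diagonal(guess, 0)
print("edges recovered:", int((guess & A).sum() // 2), "of", int(A.sum() // 2))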
Anderson, John R; Betts, Shawn; Ferris, Jennifer L; Fincham, Jon M
2011-03-01
Students were taught an algorithm for solving a new class of mathematical problems. Occasionally in the sequence of problems, they encountered exception problems that required that they extend the algorithm. Regular and exception problems were associated with different patterns of brain activation. Some regions showed a Cognitive pattern of being active only until the problem was solved and no difference between regular or exception problems. Other regions showed a Metacognitive pattern of greater activity for exception problems and activity that extended into the post-solution period, particularly when an error was made. The Cognitive regions included some of the parietal and prefrontal regions associated with the triple-code theory (Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20, 487-506) and associated with algebra equation solving in the ACT-R theory (Anderson, J. R. (2005). Human symbol manipulation within an integrated cognitive architecture. Cognitive Science, 29, 313-342). Metacognitive regions included the superior prefrontal gyrus, the angular gyrus of the triple-code theory, and frontopolar regions.
NASA Technical Reports Server (NTRS)
Knasel, T. Michael
1996-01-01
The primary goal of the Adaptive Vision Laboratory Research project was to develop advanced computer vision systems for automatic target recognition. The approach used in this effort combined several machine learning paradigms including evolutionary learning algorithms, neural networks, and adaptive clustering techniques to develop the E-MORPH system. This system is capable of generating pattern recognition systems to solve a wide variety of complex recognition tasks. A series of simulation experiments were conducted using E-MORPH to solve problems in OCR, military target recognition, industrial inspection, and medical image analysis. The bulk of the funds provided through this grant were used to purchase computer hardware and software to support these computationally intensive simulations. The payoff from this effort is the reduced need for human involvement in the design and implementation of recognition systems. We have shown that the techniques used in E-MORPH are generic and readily transition to other problem domains. Specifically, E-MORPH is a multi-phase evolutionary learning system that evolves cooperative sets of feature detectors and combines their responses using an adaptive classifier to form a complete pattern recognition system. The system can operate on binary or grayscale images. In our most recent experiments, we used multi-resolution images that are formed by applying a Gabor wavelet transform to a set of grayscale input images. To begin the learning process, candidate chips are extracted from the multi-resolution images to form a training set and a test set. A population of detector sets is randomly initialized to start the evolutionary process. Using a combination of evolutionary programming and genetic algorithms, the feature detectors are enhanced to solve a recognition problem. The design of E-MORPH and recognition results for a complex problem in medical image analysis are described at the end of this report. The specific task involves the identification of vertebrae in x-ray images of human spinal columns. This problem is extremely challenging because the individual vertebrae exhibit variation in shape, scale, orientation, and contrast. E-MORPH generated several accurate recognition systems to solve this task. The dual use of this ATR technology clearly demonstrates the flexibility and power of our approach.
Using Self-Generated Drawings to Solve Arithmetic Word Problems.
ERIC Educational Resources Information Center
Van Essen, Gerard; Hamaker, Christiaan
1990-01-01
Results are presented from two intervention studies which investigate whether encouraging elementary students to generate drawings of arithmetic word problems facilitates problem-solving performance. Findings indicate that fifth graders (N=50) generated many drawings of word problems and improved problem solutions after the intervention, whereas…
SCALE PROBLEMS IN REPORTING LANDSCAPE PATTERN AT THE REGIONAL SCALE
Remotely sensed data for Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the g...
Detection of artifacts from high energy bursts in neonatal EEG.
Bhattacharyya, Sourya; Biswas, Arunava; Mukherjee, Jayanta; Majumdar, Arun Kumar; Majumdar, Bandana; Mukherjee, Suchandra; Singh, Arun Kumar
2013-11-01
Detection of non-cerebral activities or artifacts, intermixed within the background EEG, is essential to discard them from subsequent pattern analysis. The problem is much harder in neonatal EEG, where the background EEG contains spikes, waves, and rapid fluctuations in amplitude and frequency. Existing artifact detection methods are mostly limited to detect only a subset of artifacts such as ocular, muscle or power line artifacts. Few methods integrate different modules, each for detection of one specific category of artifact. Furthermore, most of the reference approaches are implemented and tested on adult EEG recordings. Direct application of those methods on neonatal EEG causes performance deterioration, due to greater pattern variation and inherent complexity. A method for detection of a wide range of artifact categories in neonatal EEG is thus required. At the same time, the method should be specific enough to preserve the background EEG information. The current study describes a feature based classification approach to detect both repetitive (generated from ECG, EMG, pulse, respiration, etc.) and transient (generated from eye blinking, eye movement, patient movement, etc.) artifacts. It focuses on artifact detection within high energy burst patterns, instead of detecting artifacts within the complete background EEG with wide pattern variation. The objective is to find true burst patterns, which can later be used to identify the Burst-Suppression (BS) pattern, which is commonly observed during newborn seizure. Such selective artifact detection is proven to be more sensitive to artifacts and specific to bursts, compared to the existing artifact detection approaches applied on the complete background EEG. Several time domain, frequency domain, statistical features, and features generated by wavelet decomposition are analyzed to model the proposed bi-classification between burst and artifact segments. A feature selection method is also applied to select the feature subset producing highest classification accuracy. The suggested feature based classification method is executed using our recorded neonatal EEG dataset, consisting of burst and artifact segments. We obtain 78% sensitivity and 72% specificity as the accuracy measures. The accuracy obtained using the proposed method is found to be about 20% higher than that of the reference approaches. Joint use of the proposed method with our previous work on burst detection outperforms reference methods on simultaneous burst and artifact detection. As the proposed method supports detection of a wide range of artifact patterns, it can be improved to incorporate the detection of artifacts within other seizure patterns and background EEG information as well. © 2013 Elsevier Ltd. All rights reserved.
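The feature-based bi-classification idea can be sketched as below: a few time- and frequency-domain features per candidate segment feed a standard classifier. The feature list, sampling rate, synthetic signals and classifier choice are illustrative assumptions, not the study's selected feature subset:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 256  # Hz (assumed sampling rate)

def features(segment):
    seg = segment - segment.mean()
    line_length = np.abs(np.diff(seg)).sum()            # simple time-domain feature
    spectrum = np.abs(np.fft.rfft(seg)) ** 2
    freqs = np.fft.rfftfreq(len(seg), 1.0 / FS)
    dominant = freqs[spectrum.argmax()]                  # dominant frequency
    low_ratio = spectrum[freqs < 4].sum() / spectrum.sum()
    return [seg.std(), line_length, dominant, low_ratio]

rng = np.random.default_rng(0)
bursts = [np.sin(2 * np.pi * 3 * np.arange(FS) / FS) + 0.3 * rng.standard_normal(FS)
          for _ in range(40)]                            # slow oscillatory "bursts"
artifacts = [np.sin(2 * np.pi * 50 * np.arange(FS) / FS) + rng.standard_normal(FS)
             for _ in range(40)]                         # mains-like "artifacts"
X = np.array([features(s) for s in bursts + artifacts])
y = np.array([0] * 40 + [1] * 40)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))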
Automatic Generation of English-Japanese Translation Pattern Utilizing Genetic Programming Technique
NASA Astrophysics Data System (ADS)
Matsumura, Koki; Tamekuni, Yuji; Kimura, Shuhei
There are many constructional differences between English and Japanese phrase templates, and this often makes translation difficult. Moreover, there exists a tremendous variety of phrase templates and sentences to be referred to, and it is not easy to prepare a corpus that covers them all. It is therefore very significant, from the viewpoint of the translation success rate and the capacity of the pattern dictionary, to generate translation patterns for sentence patterns automatically. For the purpose of realizing such automatic generation, this paper proposes a new method for generating translation patterns using the genetic programming (GP) technique. The technique attempts to automatically generate translation patterns for various sentences which are not registered in the phrase template dictionary, by applying genetic operations to the parse trees of basic patterns, where each tree consists of a pair of English-Japanese sentences generated as the first-stage population. Analysis tree databases with 50, 100, 150, and 200 pairs were prepared as the first-stage population, and the system was applied to an English input of 1,555 sentences. As a result, the number of analysis trees increased from 200 to 517, and the accuracy rate of the translation patterns improved from 42.57% to 70.10%. Furthermore, 86.71% of the generated translations were successful, with meanings that were acceptable and understandable. The proposed technique thus appears to be a promising way to raise the translation success rate and to reduce the size of the analysis tree database.
Water Tunnel Flow Visualization Study Through Poststall of 12 Novel Planform Shapes
NASA Technical Reports Server (NTRS)
Gatlin, Gregory M.; Neuhart, Dan H.
1996-01-01
To determine the flow field characteristics of 12 planform geometries, a flow visualization investigation was conducted in the Langley 16- by 24-Inch Water Tunnel. Concepts studied included flat plate representations of diamond wings, twin bodies, double wings, cutout wing configurations, and serrated forebodies. The off-surface flow patterns were identified by injecting colored dyes from the model surface into the free-stream flow. These dyes generally were injected so that the localized vortical flow patterns were visualized. Photographs were obtained for angles of attack ranging from 10° to 50°, and all investigations were conducted at a test section speed of 0.25 ft per sec. Results from the investigation indicate that the formation of strong vortices on highly swept forebodies can improve poststall lift characteristics; however, the asymmetric bursting of these vortices could produce substantial control problems. A wing cutout was found to significantly alter the position of the forebody vortex on the wing by shifting the vortex inboard. Serrated forebodies were found to effectively generate multiple vortices over the configuration. Vortices from 65° swept forebody serrations tended to roll together, while vortices from 40° swept serrations were more effective in generating additional lift caused by their more independent nature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benigni, R.; Andreoli, C.; Giuliani, A.
1989-01-01
The interrelationships among carcinogenicity, mutagenicity, acute toxicity (LD50), and a number of molecular descriptors were studied by computerized data analysis methods on the data base generated by the International Program for the Evaluation of Short-Term Test for Carcinogens (IPESTTC). With the use of statistical regression methods, three main associations were evidenced: (1) the well-known correlation between carcinogenicity and mutagenicity; (2) a correlation between mutagenicity and toxicity (LD50 ip in mice); and (3) a correlation between toxicity and a recently introduced estimator of the free energy of binding of the molecules to biological receptors. As expected on the basis of the large variety of chemical classes represented in the IPESTTC data base, no simple relationship between mutagenicity or carcinogenicity and chemical descriptors was found. To overcome this problem, a new pattern recognition method (REPAD), developed by us for structure-activity studies of noncongeneric chemicals, has been used. This allowed us to highlight a significant difference between the whole patterns of relationships among chemicophysical variables in the two groups of active (mutagenic and/or carcinogenic) and inactive chemicals. This approach generated a classification rule able to correctly assign about 80% of carcinogens or mutagens.
On the role of the reticular formation in vocal pattern generation.
Jürgens, Uwe; Hage, Steffen R
2007-09-04
This review is an attempt to localize the brain region responsible for pattern generation of species-specific vocalizations. A catalogue is set up, listing the criteria considered to be essential for a vocal pattern generator. According to this catalogue, a vocal pattern generator should show vocalization-correlated activity, starting before vocal onset and reflecting specific acoustic features of the vocalization. Artificial activation by electrical or glutamatergic stimulation should produce artificially sounding vocalization. Lesioning is expected to have an inhibitory or deteriorating effect on vocalization. Anatomically, a vocal pattern generator can be assumed to have direct or, at least, oligosynaptic connections with all the motoneuron pools involved in phonation. A survey of the literature reveals that the only area meeting all these criteria is a region, reaching from the parvocellular pontine reticular formation just above the superior olive through the lateral reticular formation around the facial nucleus and nucleus ambiguus down to the caudalmost medulla, including the dorsal and ventral reticular nuclei and nucleus retroambiguus. It is proposed that vocal pattern generation takes place within this whole region.
NASA Astrophysics Data System (ADS)
Mues, Sarah; Lilge, Inga; Schönherr, Holger; Kemper, Björn; Schnekenburger, Jürgen
2017-02-01
The major problem of Digital Holographic Microscopy (DHM) long-term live cell imaging is that over time most of the tracked cells move out of the image area and others move in. Therefore, most of the cells are lost for the evaluation of individual cellular processes. Here, we present an effective solution for this crucial problem of long-term microscopic live cell analysis. We have generated functionalized slides containing areas of 250 μm by 200 μm. These micropatterned biointerfaces consist of passivating polyacrylamide brushes (PAAm). Inner areas are backfilled with octadecanethiol (ODT), which allows cell attachment. The fouling properties of these surfaces are highly controllable, and the defined areas, designed to match the size of our microscopic image areas, were effective in keeping all cells inside the rectangles over the selected imaging period.
Patterns and Predictors of Growth in Divorced Fathers' Health Status and Substance Use
DeGarmo, David S.; Reid, John B.; Leve, Leslie D.; Chamberlain, Patricia; Knutson, John F.
2009-01-01
Health status and substance use trajectories are described over 18 months for a county sample of 230 divorced fathers of young children aged 4 to 11. One third of the sample was clinically depressed. Health problems, drinking, and hard drug use were stable over time for the sample, whereas depression, smoking, and marijuana use exhibited overall mean reductions. Variance components revealed significant individual differences in average levels and trajectories for health and substance use outcomes. Controlling for fathers' antisociality, negative life events, and social support, fathering identity predicted reductions in health-related problems and marijuana use. Father involvement reduced drinking and marijuana use. Antisociality was the strongest risk factor for health and substance use outcomes. Implications for application of a generative fathering perspective in practice and preventive interventions are discussed. PMID:19477763
Spatial patterns of plastic debris along Estuarine shorelines.
Browne, Mark A; Galloway, Tamara S; Thompson, Richard C
2010-05-01
The human population generates vast quantities of waste material. Macro (>1 mm) and microscopic (<1 mm) fragments of plastic debris represent a substantial contamination problem. Here, we test hypotheses about the influence of wind and depositional regime on spatial patterns of micro- and macro-plastic debris within the Tamar Estuary, UK. Debris was identified by polymer type using Fourier-transform infrared spectroscopy (FT-IR) and categorized according to density. In terms of abundance, microplastic accounted for 65% of debris recorded and mainly comprised polyvinylchloride, polyester, and polyamide. Generally, there were greater quantities of plastic at downwind sites. For macroplastic, there were clear patterns of distribution for less dense items, while for microplastic debris, clear patterns were evident for denser material. Small particles of sediment and plastic are both likely to settle slowly from the water-column and are likely to be transported by the flow of water and be deposited in areas where the movements of water are slower. There was, however, no relationship between the abundance of microplastic and the proportion of clay in sediments from the strandline. These results illustrate how FT-IR spectroscopy can be used to identify the different types of plastic; in this case it was used to reveal spatial patterns, demonstrating that downwind habitats act as potential sinks for the accumulation of debris.
Impact of Feed Delivery Pattern on Aerial Particulate Matter and Behavior of Feedlot Cattle †
Mitloehner, Frank M.; Dailey, Jeff W.; Morrow, Julie L.; McGlone, John J.
2017-01-01
Simple Summary Fine particulate matter (with less than 2.5 microns diameter; aka PM2.5) is a human and animal health concern because it can carry microbes and chemicals into the lungs. Particulate matter (PM) in general emitted from cattle feedlots can reach high concentrations. When feedlot cattle were given an altered feeding schedule (ALT) that more closely reflected their biological feeding times compared with conventional morning feeding (CON), PM2.5 generation at peak times was substantially lowered. Average daily generation of PM2.5 was decreased by 37% when cattle behavior was redirected away from PM-generating behaviors and toward evening feeding behaviors. Behavioral problems such as agonistic (i.e., aggressive) and bulling (i.e., mounting each other) behaviors also were reduced several fold among ALT compared with CON cattle. Intake of feed was less and daily body weight gain tended to be less with the altered feeding schedule, while efficiency of feed utilization was not affected. Although ALT may pose a challenge in feed delivery and labor scheduling, cattle had fewer behavioral problems and reduced PM2.5 generation when feed delivery times matched the natural drive to eat in a crepuscular pattern. Abstract Fine particulate matter with less than 2.5 microns diameter (PM2.5) generated by cattle in feedlots is an environmental pollutant and a potential human and animal health issue. The objective of this study was to determine if a feeding schedule affects cattle behaviors that promote PM2.5 in a commercial feedlot. The study used 2813 crossbred steers housed in 14 adjacent pens at a large-scale commercial West Texas feedlot. Treatments were conventional feeding at 0700, 1000, and 1200 (CON) or feeding at 0700, 1000, and 1830 (ALT); the latter feeding time coincided with dusk. A mobile behavior lab was used to quantify behaviors of steers that were associated with generation of PM2.5 (e.g., fighting, mounting of peers, and increased locomotion). PM2.5 samplers measured respirable particles with a mass median diameter ≤2.5 μm (PM2.5) every 15 min over a period of 7 d in April and May. Simultaneously, the ambient temperature, humidity, wind speed and direction, precipitation, air pressure, and solar radiation were measured with a weather station. Elevated downwind PM2.5 concentrations were measured at dusk, when cattle fed according to the ALT vs. the CON feeding schedule demonstrated fewer PM2.5-generating behaviors (p < 0.05). At dusk, steers on ALT vs. CON feeding schedules ate or were waiting to eat (standing in second row behind feeding cattle) at much greater rates (p < 0.05). Upwind PM2.5 concentrations were similar between the treatments. Downwind PM2.5 concentrations averaged over 24 h were lower from ALT compared with CON pens (0.072 vs. 0.115 mg/m3, p < 0.01). However, dry matter intake (DMI) was less (p < 0.05), and average daily gain (ADG) tended to be less (p < 0.1) in cattle that were fed according to the ALT vs. the CON feeding schedules, whereas feed efficiency (aka gain to feed, G:F) was not affected. Although ALT feeding may pose a challenge in feed delivery and labor scheduling, cattle exhibited fewer PM2.5-generating behaviors and reduced generation of PM2.5 when feed delivery times matched the natural desires of cattle to eat in a crepuscular pattern. PMID:28257061
ERIC Educational Resources Information Center
Yaremchuk, V.; Willson, L.R.; Spetch, M.L.; Dawson, M.R.W.
2005-01-01
Animal learning researchers have argued that one example of a linearly nonseparable problem is negative patterning, and therefore they have used more complicated multilayer networks to study this kind of discriminant learning. However, it is shown in this paper that previous attempts to define negative patterning problems to artificial neural…
Pingle, Hitesh; Wang, Peng-Yuan; Thissen, Helmut; McArthur, Sally; Kingshott, Peter
2015-12-02
Biofilm formation on medical implants and subsequent infections are a global problem. A great deal of effort has focused on developing chemical contrasts based on micro- and nanopatterning for studying and controlling cells and bacteria at surfaces. It is known that micro- and nanopatterns on surfaces can influence biomolecule adsorption, and subsequent cell and bacterial adhesion. However, less focus has been on precisely controlling patterns to study the initial bacterial attachment mechanisms and subsequently how the patterning influences the role played by biomolecular adsorption on biofilm formation. In this work, the authors have used colloidal self-assembly in a confined area to pattern surfaces with colloidal crystals and used them as masks during allylamine plasma polymer (AAMpp) deposition to generate highly ordered patterns from the micro- to the nanoscale. Polyethylene glycol (PEG)-aldehyde was grafted to the plasma regions via "cloud point" grafting to prevent the attachment of bacteria on the plasma patterned surface regions, thereby controlling the adhesive sites by choice of the colloidal crystal morphology. Pseudomonas aeruginosa was chosen to study the bacterial interactions with these chemically patterned surfaces. Scanning electron microscopy, x-ray photoelectron spectroscopy (XPS), atomic force microscopy, and epifluorescence microscopy were used for pattern characterization, surface chemical analysis, and imaging of attached bacteria. The AAMpp influenced bacterial attachment because its amine groups display a positive charge. XPS results confirmed the successful grafting of PEG on the AAMpp surfaces. The results showed that PEG patterns can be used as a surface for bacterial patterning, including investigating the role of biomolecular patterning on bacterial attachment. These types of patterns are easy to fabricate and could be useful in further applications in biomedical research.
Oellingrath, Inger M; Svendsen, Martin V; Hestetun, Ingebjørg
2014-11-01
To investigate the association between eating patterns and mental health problems in young Norwegian adolescents (12-13 years of age). Cross-sectional study. Dietary information was reported by parents using a retrospective FFQ. Eating patterns were identified using principal component analysis. The Strengths and Difficulties Questionnaire was used to measure mental health problems. The association between eating patterns and mental health problems was examined using multiple logistic regression analysis. Primary schools, Telemark County, Norway. Children (n 1095) aged 12-13 years and their parents. Children with high scores on a 'varied Norwegian' eating pattern were less likely to have indications of any psychiatric disorders (adjusted OR = 0·5; 95 % CI 0·3, 1·0) and hyperactivity-inattention disorders (adjusted OR = 0·4; 95 % CI 0·2, 0·8) than children with low scores on this pattern. Children with high scores on a 'junk/convenient' eating pattern were more likely to have indications of hyperactivity-inattention disorders (adjusted OR = 3·4; 95 % CI 1·3, 8·6) than children with low scores on this pattern. Children with high scores on a 'snacking' eating pattern were more likely to have indications of conduct/oppositional disorders (adjusted OR = 3·8; 95 % CI 1·2, 11·5) than those with low scores on this eating pattern. We identified a significant association between eating patterns and mental health problems in young adolescents, independently of physical activity, sedentary activity and background variables. A diverse diet rich in unrefined plant foods, fish and regular meals was associated with better mental health, while energy-dense, nutrient-poor diets and irregular meals were associated with poorer mental health.
Model for the computation of self-motion in biological systems
NASA Technical Reports Server (NTRS)
Perrone, John A.
1992-01-01
A technique is presented by which direction- and speed-tuned cells, such as those commonly found in the middle temporal region of the primate brain, can be utilized to analyze the patterns of retinal image motion that are generated during observer movement through the environment. The developed model determines heading by finding the peak response in a population of detectors or neurons each tuned to a particular heading direction. It is suggested that a complex interaction of multiple cell networks is required for the solution of the self-motion problem in the primate brain.
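As a rough illustration of the peak-readout idea described above, the sketch below builds a small population of heading-tuned detectors, each a radial-flow template, and reads out heading at the maximum response. The assumptions (pure observer translation, no rotation, unit-normalized flow, and the helper names radial_flow and heading_from_flow) are ours and do not reproduce the model's actual implementation.

```python
import numpy as np

# Minimal sketch: a population of heading-tuned "detectors" responds to a
# retinal flow field, and heading is read out at the peak response. Pure
# translation (radial flow from the focus of expansion) is assumed; rotation
# and depth structure are ignored, which is our simplification.

def radial_flow(grid_x, grid_y, heading_xy):
    """Unit-normalised radial flow field expanding from `heading_xy`."""
    vx = grid_x - heading_xy[0]
    vy = grid_y - heading_xy[1]
    norm = np.hypot(vx, vy) + 1e-9
    return vx / norm, vy / norm

def heading_from_flow(flow_x, flow_y, grid_x, grid_y, candidates):
    """Return the candidate heading whose template best matches the flow."""
    responses = []
    for h in candidates:
        tx, ty = radial_flow(grid_x, grid_y, h)
        responses.append(np.sum(flow_x * tx + flow_y * ty))  # detector response
    return candidates[int(np.argmax(responses))], np.array(responses)

if __name__ == "__main__":
    xs, ys = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
    true_heading = (0.3, -0.2)
    fx, fy = radial_flow(xs, ys, true_heading)           # observed retinal flow
    grid = [(x, y) for x in np.linspace(-0.5, 0.5, 11)
                   for y in np.linspace(-0.5, 0.5, 11)]   # detector preferences
    best, _ = heading_from_flow(fx, fy, xs, ys, grid)
    print("estimated heading:", best)
```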
FaceIt: face recognition from static and live video for law enforcement
NASA Astrophysics Data System (ADS)
Atick, Joseph J.; Griffin, Paul M.; Redlich, A. N.
1997-01-01
Recent advances in image and pattern recognition technology, especially face recognition, are leading to the development of a new generation of information systems of great value to the law enforcement community. With these systems it is now possible to pool and manage vast amounts of biometric intelligence such as face and fingerprint records and conduct computerized searches on them. We review one of the enabling technologies underlying these systems, the FaceIt face recognition engine, and discuss three applications that illustrate its benefits as a problem-solving technology and an efficient and cost-effective investigative tool.
Dai, Shengfa; Wei, Qingguo
2017-01-01
The common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, use of a large number of channels makes common spatial pattern prone to over-fitting and makes the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of all channels to save computational time and improve classification accuracy. In this paper, a method based on the backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional vector, with each component representing one channel. A population of binary codes is generated randomly at the beginning, and channels are then selected according to the evolution of these codes. The number and positions of 1's in a code denote the number and positions of the chosen channels. The objective function of the backtracking search optimization algorithm is defined as a combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with far fewer channels compared with standard common spatial pattern using all channels.
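The abstract specifies the objective as a combination of classification error rate and relative number of channels but not its exact form. The sketch below assumes a simple weighted sum and uses a hypothetical estimate_error_rate function as a stand-in for the CSP-plus-classifier evaluation; it is meant only to illustrate how a binary channel code would be scored.

```python
import numpy as np

# Minimal sketch of scoring a candidate channel subset: error rate plus a
# penalty on the relative number of channels (1 = channel kept, following the
# abstract's binary encoding). The weighted-sum form and `estimate_error_rate`
# are our assumptions, not the paper's exact definitions.

def fitness(code, estimate_error_rate, weight=0.1):
    """Lower is better: error rate plus a penalty on relative channel count."""
    code = np.asarray(code, dtype=bool)
    if not code.any():                      # empty subsets are invalid
        return np.inf
    error = estimate_error_rate(np.flatnonzero(code))
    relative_channels = code.sum() / code.size
    return error + weight * relative_channels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_channels = 22
    informative = set(range(6))             # toy model: first 6 channels matter
    def estimate_error_rate(selected):
        hits = len(informative.intersection(selected))
        return max(0.05, 0.5 - 0.07 * hits)
    population = rng.integers(0, 2, size=(10, n_channels))   # random binary codes
    scores = [fitness(code, estimate_error_rate) for code in population]
    print("best individual:", population[int(np.argmin(scores))])
```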
Myers, Jonathan; McAuley, Paul; Lavie, Carl J; Despres, Jean-Pierre; Arena, Ross; Kokkinos, Peter
2015-01-01
The evolution from hunting and gathering to agriculture, followed by industrialization, has had a profound effect on human physical activity (PA) patterns. Current PA patterns are undoubtedly the lowest they have been in human history, with particularly marked declines in recent generations, and future projections indicate further declines around the globe. Non-communicable health problems that afflict current societies are fundamentally attributable to the fact that PA patterns are markedly different than those for which humans were genetically adapted. The advent of modern statistics and epidemiological methods has made it possible to quantify the independent effects of cardiorespiratory fitness (CRF) and PA on health outcomes. Based on more than five decades of epidemiological studies, it is now widely accepted that higher PA patterns and levels of CRF are associated with better health outcomes. This review will discuss the evidence supporting the premise that PA and CRF are independent risk factors for cardiovascular disease (CVD) as well as the interplay between both PA and CRF and other CVD risk factors. A particular focus will be given to the interplay between CRF, metabolic risk and obesity. Published by Elsevier Inc.
Domino structures evolution in strike-slip shear zones; the importance of the cataclastic flow
NASA Astrophysics Data System (ADS)
Moreira, N.; Dias, R.
2018-05-01
The Porto-Tomar-Ferreira do Alentejo dextral Shear Zone is one of the most important structures of the Iberian Variscides. In its vicinity, close to Abrantes (Central Portugal), a localized heterogeneous strain pattern developed in a decimetric metamorphic siliceous multilayer. This complex pattern was induced by the D2 dextral shearing of the early S0//S1 foliation in brittle-ductile conditions, giving rise to three main shear zone families. One of these families, with antithetic kinematics, delimits blocks with rigid clockwise rotation surrounded by coeval cataclasites, generating a local domino structure. The proposed geometrical and kinematic analysis, coupled with statistical studies, highlights the relation between subsidiary shear zones and the main shear zone. Despite the heterogeneous strain pattern, a quantitative approach of finite strain was applied based on the restoration of the initial fracture pattern. This approach shows the importance of the cataclastic flow coupled with the translational displacement of the domino domain in solving space problems related to the rigid block rotation. Such processes are key in allowing the rigid block rotation inside shear zones whenever the simple shear component is a fundamental mechanism.
Lee, Jaewoo; Lee, Youngju; Xu, Li; White, Rebekah; Sullenger, Bruce A
2017-06-07
Activation of the RNA-sensing pattern recognition receptor (PRR) in cancer cells leads to cell death and cytokine expression. This cancer cell death releases tumor antigens and damage-associated molecular patterns (DAMPs) that induce anti-tumor immunity. However, these cytokines and DAMPs also cause adverse inflammatory and thrombotic complications that can limit the overall therapeutic benefits of PRR-targeting anti-cancer therapies. To overcome this problem, we generated and evaluated two novel and distinct ssRNA molecules (immunogenic cell-killing RNA [ICR]2 and ICR4). ICR2 and ICR4 differentially stimulated cell death and PRR signaling pathways and induced different patterns of cytokine expression in cancer and innate immune cells. Interestingly, DAMPs released from ICR2- and ICR4-treated cancer cells had distinct patterns of stimulation of innate immune receptors and coagulation. Finally, ICR2 and ICR4 inhibited in vivo tumor growth as effectively as poly(I:C). ICR2 and ICR4 are potential therapeutic agents that differentially induce cell death, immune stimulation, and coagulation when introduced into tumors. Copyright © 2017 The American Society of Gene and Cell Therapy. Published by Elsevier Inc. All rights reserved.
Generation of Customizable Micro-wavy Pattern through Grayscale Direct Image Lithography
He, Ran; Wang, Shunqiang; Andrews, Geoffrey; Shi, Wentao; Liu, Yaling
2016-01-01
With the increasing amount of research work in surface studies, a more effective method of producing patterned microstructures is highly desired due to the geometric limitations and complex fabricating process of current techniques. This paper presents an efficient and cost-effective method to generate customizable micro-wavy pattern using direct image lithography. This method utilizes a grayscale Gaussian distribution effect to model inaccuracies inherent in the polymerization process, which are normally regarded as trivial matters or errors. The measured surface profiles and the mathematical prediction show a good agreement, demonstrating the ability of this method to generate wavy patterns with precisely controlled features. An accurate pattern can be generated with customizable parameters (wavelength, amplitude, wave shape, pattern profile, and overall dimension). This mask-free photolithography approach provides a rapid fabrication method that is capable of generating complex and non-uniform 3D wavy patterns with the wavelength ranging from 12 μm to 2100 μm and an amplitude-to-wavelength ratio as large as 300%. Microfluidic devices with pure wavy and wavy-herringbone patterns suitable for capture of circulating tumor cells are made as a demonstrative application. A completely customized microfluidic device with wavy patterns can be created within a few hours without access to clean room or commercial photolithography equipment. PMID:26902520
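A minimal sketch of how such a grayscale mask might be produced is given below, assuming pixel intensity is mapped from the target sinusoidal height and a Gaussian kernel stands in for the lateral spread of polymerization that the method exploits; the pixel size, kernel width, and function names are illustrative choices, not parameters from the paper.

```python
import numpy as np

# Minimal sketch: build an 8-bit grayscale row for a customizable wavy profile
# (wavelength, amplitude as parameters) and blur it with a Gaussian kernel as a
# crude model of exposure/polymerization spread. All numbers are illustrative.

def wavy_height(x_um, wavelength_um=100.0, amplitude_um=20.0):
    """Target wavy surface height (um) along x."""
    return amplitude_um * 0.5 * (1 + np.sin(2 * np.pi * x_um / wavelength_um))

def grayscale_mask(length_um=1000.0, pixel_um=2.0, max_height_um=40.0,
                   sigma_um=6.0, **wave_kwargs):
    """8-bit grayscale row encoding the height profile, Gaussian-blurred."""
    x = np.arange(0.0, length_um, pixel_um)
    gray = wavy_height(x, **wave_kwargs) / max_height_um        # 0..1 exposure
    k = np.arange(-3 * sigma_um, 3 * sigma_um + pixel_um, pixel_um)
    kernel = np.exp(-0.5 * (k / sigma_um) ** 2)
    kernel /= kernel.sum()
    blurred = np.convolve(gray, kernel, mode="same")
    return np.clip(np.round(255 * blurred), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    row = grayscale_mask(wavelength_um=210.0, amplitude_um=30.0)
    print(row[:10], row.min(), row.max())
```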
Self-Generated Coping Strategies Among Muslim Athletes During Ramadan Fasting
Roy, Jolly; Hwa, Ooi Cheong; Singh, Rabindarjeet; Aziz, Abdul Rashid; Jin, Chai Wen
2011-01-01
The study explored the self-generated coping strategies employed by Muslim athletes from the South East Asian region during the Ramadan fasting month. Sixty-five national elite Muslim athletes responded to an open-ended question on coping strategies employed during Ramadan fasting. Inductive content analysis identified five general dimensions from 54 meaning units, which were abstracted into 14 first-order themes and 10 second-order themes. The general dimensions included four problem-focused coping strategies (training modifications, dietary habits, psychological strategies, and rest and recovery) and one emotion-focused coping strategy, i.e., self-control. The coping strategies employed were diverse and dynamic in nature, and no specific pattern was evident. The most frequently employed strategies were associated with training and dietary habits. Emotion-focused coping was the least frequently used by the athletes. Key points: Muslim athletes employ diverse self-generated coping strategies during Ramadan fasting, which can be categorized as anticipatory coping, preventative coping and proactive coping. Frequently employed coping strategies are task focused, such as training modifications and adjustments in dietary habits. PMID:24149306
NASA Technical Reports Server (NTRS)
Voigt, Kerstin
1992-01-01
We present MENDER, a knowledge-based system that implements software design techniques specialized to automatically compile generate-and-patch problem solvers for global resource assignment problems. We provide empirical evidence of the superior performance of generate-and-patch over generate-and-test, even with constrained generation, for a global constraint in the domain of '2D-floorplanning'. For a second constraint in '2D-floorplanning' we show that even when it is possible to incorporate the constraint into a constrained generator, a generate-and-patch problem solver may satisfy the constraint more rapidly. We also briefly summarize how an extended version of our system applies to a constraint in the domain of 'multiprocessor scheduling'.
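To make the generate-and-test versus generate-and-patch distinction concrete, the toy sketch below compares the two strategies on a made-up global resource constraint (total width of placed blocks within a budget), where patching repairs a violating candidate instead of discarding it. The domain and repair rule are invented for illustration and are not MENDER's 2D-floorplanning knowledge base.

```python
import random

# Toy contrast between generate-and-test and generate-and-patch on an invented
# constraint: the total width of placed blocks must not exceed a budget.

BUDGET = 20

def random_candidate(rng, n_blocks=6):
    return [rng.randint(1, 8) for _ in range(n_blocks)]   # block widths

def satisfies(candidate):
    return sum(candidate) <= BUDGET

def generate_and_test(rng):
    tries = 0
    while True:
        tries += 1
        cand = random_candidate(rng)
        if satisfies(cand):
            return cand, tries

def generate_and_patch(rng):
    cand = random_candidate(rng)
    patches = 0
    while not satisfies(cand):              # repair instead of discarding
        i = max(range(len(cand)), key=lambda j: cand[j])
        cand[i] -= 1                        # shrink the widest block
        patches += 1
    return cand, patches

if __name__ == "__main__":
    print("generate-and-test :", generate_and_test(random.Random(1)))
    print("generate-and-patch:", generate_and_patch(random.Random(1)))
```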
Identification of nuclear weapons
Mihalczo, J.T.; King, W.T.
1987-04-10
A method and apparatus for non-invasively identifying different types of nuclear weapons is disclosed. A neutron generator is placed against the weapon to generate a stream of neutrons causing fissioning within the weapon. A first particle detector detects the generation of the neutrons and produces a signal indicative thereof. A second particle detector located on the opposite side of the weapon detects the fission particles and produces signals indicative thereof. The signals are converted into a detected pattern and a computer compares the detected pattern with known patterns of weapons and indicates which known weapon has a substantially similar pattern. Either a time distribution pattern or a noise analysis pattern, or both, is used. Gamma-neutron discrimination and a third particle detector for fission particles adjacent the second particle detector are preferably used. The neutrons are generated by either a decay neutron source or a pulsed neutron particle accelerator.
Image Correlation Pattern Optimization for Micro-Scale In-Situ Strain Measurements
NASA Technical Reports Server (NTRS)
Bomarito, G. F.; Hochhalter, J. D.; Cannon, A. H.
2016-01-01
The accuracy and precision of digital image correlation (DIC) are a function of three primary ingredients: image acquisition, image analysis, and the subject of the image. Development of the first two (i.e. image acquisition techniques and image correlation algorithms) has led to widespread use of DIC; however, fewer developments have been focused on the third ingredient. Typically, subjects of DIC images are mechanical specimens with either a natural surface pattern or a pattern applied to the surface. Research in the area of DIC patterns has primarily been aimed at identifying which surface patterns are best suited for DIC, by comparing patterns to each other. Because the easiest and most widespread methods of applying patterns have a high degree of randomness associated with them (e.g., airbrush, spray paint, particle decoration, etc.), less effort has been spent on exact construction of ideal patterns. With the development of patterning techniques such as microstamping and lithography, patterns can be applied to a specimen pixel by pixel from a patterned image. In these cases, especially because the patterns are reused many times, an optimal pattern is sought such that error introduced into DIC from the pattern is minimized. DIC consists of tracking the motion of an array of nodes from a reference image to a deformed image. Every pixel in the images has an associated intensity (grayscale) value, with discretization depending on the bit depth of the image. Because individual pixel matching by intensity value yields a non-unique scale-dependent problem, subsets around each node are used for identification. A correlation criterion is used to find the best match of a particular subset of a reference image within a deformed image. The reader is referred to references for enumerations of typical correlation criteria. As illustrated by Schreier and Sutton and by Lu and Cary, systematic errors can be introduced by representing the underlying deformation with under-matched shape functions. An important implication, as discussed by Sutton et al., is that in the presence of highly localized deformations (e.g., crack fronts), error can be reduced by minimizing the subset size. In other words, smaller subsets allow the more accurate resolution of localized deformations. By contrast, the choice of optimal subset size has been widely studied, and the general consensus is that larger subsets with more information content are less prone to random error. Thus, an optimal subset size balances the systematic error from under-matched deformations with random error from measurement noise. The alternative approach pursued in the current work is to choose a small subset size and optimize the information content within (i.e., optimizing an applied DIC pattern), rather than finding an optimal subset size. In the literature, many pattern quality metrics have been proposed, e.g., sum of square intensity gradient (SSSIG), mean subset fluctuation, gray level co-occurrence, autocorrelation-based metrics, and speckle-based metrics. The majority of these metrics were developed to quantify the quality of common pseudo-random patterns after they have been applied, and were not created with the intent of pattern generation. As such, it is found that none of the metrics examined in this study are fit to be the objective function of a pattern generation optimization. In some cases, such as with speckle-based metrics, application to pixel by pixel patterns is ill-conditioned and requires somewhat arbitrary extensions.
In other cases, such as with the SSSIG, it is shown that trivial solutions exist for the optimum of the metric which are ill-suited for DIC (such as a checkerboard pattern). In the current work, a multi-metric optimization method is proposed whereby quality is viewed as a combination of individual quality metrics. Specifically, SSSIG and two auto-correlation metrics are used which have generally competitive objectives. Thus, each metric could be viewed as a constraint imposed upon the others, thereby precluding the achievement of their trivial solutions. In this way, optimization produces a pattern which balances the benefits of multiple quality metrics. The resulting pattern, along with randomly generated patterns, is subjected to numerical deformations and analyzed with DIC software. The optimal pattern is shown to outperform randomly generated patterns.
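As a rough illustration of the multi-metric idea, the sketch below scores a candidate pixel-by-pixel pattern with an SSSIG-like intensity-gradient term and an autocorrelation-sharpness term and combines them with equal weights. The exact metric definitions and weights used in the work are not given in the abstract; the forms below are common ones from the DIC literature and the 0.5/0.5 weighting is arbitrary.

```python
import numpy as np

# Minimal sketch of combining two competing pattern-quality metrics: a sum of
# squared intensity gradients (SSSIG-like) and an autocorrelation peak
# sharpness. Metric forms and weights are illustrative, not the paper's.

def sssig(img):
    """Sum of squared intensity gradients (finite differences)."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.sum(gx ** 2 + gy ** 2))

def autocorr_sharpness(img):
    """Peak-to-mean ratio of the FFT-based autocorrelation; higher = sharper."""
    f = np.fft.fft2(img - img.mean())
    ac = np.fft.ifft2(f * np.conj(f)).real
    return float(ac.max() / (np.abs(ac).mean() + 1e-12))

def combined_score(img, w_grad=0.5, w_ac=0.5):
    return w_grad * sssig(img) / img.size + w_ac * autocorr_sharpness(img)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    random_pattern = (rng.random((32, 32)) > 0.5).astype(float)
    checkerboard = np.indices((32, 32)).sum(axis=0) % 2   # trivial SSSIG optimum
    for name, img in [("random", random_pattern), ("checkerboard", checkerboard)]:
        print(name, round(combined_score(img), 3))
```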
PLAN-IT-2: The next generation planning and scheduling tool
NASA Technical Reports Server (NTRS)
Eggemeyer, William C.; Cruz, Jennifer W.
1990-01-01
PLAN-IT is a scheduling program which has been demonstrated and evaluated in a variety of scheduling domains. The capability enhancements being made for the next generation of PLAN-IT, called PLAN-IT-2, are discussed. PLAN-IT-2 represents a complete rewrite of the original PLAN-IT, incorporating major changes suggested by application experiences with the original PLAN-IT. A few of the enhancements described are additional types of constraints, such as states and resettable-depletables (batteries); dependencies between constraints; multiple levels of activity planning during the scheduling process; pattern constraint searching for opportunities as opposed to just minimizing the number of conflicts; additional customization and construction features for display and handling of diverse multiple time systems; and a reduction in both the size and the complexity of the knowledge base needed to address different problem domains.
Reinforcement learning for a biped robot based on a CPG-actor-critic method.
Nakamura, Yutaka; Mori, Takeshi; Sato, Masa-aki; Ishii, Shin
2007-08-01
Animals' rhythmic movements, such as locomotion, are considered to be controlled by neural circuits called central pattern generators (CPGs), which generate oscillatory signals. Motivated by this biological mechanism, studies have been conducted on the rhythmic movements controlled by CPG. As an autonomous learning framework for a CPG controller, we propose in this article a reinforcement learning method we call the "CPG-actor-critic" method. This method introduces a new architecture to the actor, and its training is roughly based on a stochastic policy gradient algorithm presented recently. We apply this method to an automatic acquisition problem of control for a biped robot. Computer simulations show that training of the CPG can be successfully performed by our method, thus allowing the biped robot to not only walk stably but also adapt to environmental changes.
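The paper's CPG model and its actor-critic training are not reproduced here; the sketch below only illustrates the kind of oscillatory signal generation attributed to CPGs, using two coupled phase oscillators held in anti-phase (a common abstraction) whose outputs could be read as left and right joint drive signals.

```python
import numpy as np

# Minimal sketch of CPG-style oscillatory signal generation: two phase
# oscillators with Kuramoto-style coupling that pulls them into anti-phase.
# Parameters and the model form are our illustrative choices.

def simulate_cpg(steps=2000, dt=0.005, omega=2 * np.pi, coupling=2.0):
    phases = np.array([0.0, 0.3])            # slightly desynchronised start
    target_offset = np.pi                     # anti-phase gait
    outputs = np.empty((steps, 2))
    for t in range(steps):
        # Coupling pulls the phase difference toward pi.
        diff = phases[::-1] - phases + np.array([target_offset, -target_offset])
        phases = phases + dt * (omega + coupling * np.sin(diff))
        outputs[t] = np.cos(phases)           # rhythmic drive signals
    return outputs

if __name__ == "__main__":
    y = simulate_cpg()
    # After the transient the two outputs should be close to anti-phase.
    corr = float(np.corrcoef(y[-500:].T)[0, 1])
    print("correlation of the two outputs:", round(corr, 2))
```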
Antenna analysis using neural networks
NASA Technical Reports Server (NTRS)
Smith, William T.
1992-01-01
Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern shaping. The interesting thing about D-C synthesis is that the side lobes have the same amplitude. Five-element arrays were used. Again, 41 pattern samples were used for the input. Nine actual D-C patterns ranging from -10 dB to -30 dB side lobe levels were used to train the network. A comparison between simulated and actual D-C techniques for a pattern with -22 dB side lobe level is shown. The goal for this research was to evaluate the performance of neural network computing with antennas. Future applications will employ the backpropagation training algorithm to drastically reduce the computational complexity involved in performing EM compensation for surface errors in large space reflector antennas.
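A minimal sketch of the network layout described above (41 pattern samples in, 40 excitation values out, one hidden layer trained by backpropagation) is given below. Synthetic random input and target pairs stand in for the Woodward-Lawson training patterns, and the hidden-layer size and learning rate are illustrative; this is not the study's code.

```python
import numpy as np

# Minimal backpropagation sketch: 41 pattern samples -> 40 excitations
# (20 real + 20 imaginary), one tanh hidden layer, mean-squared-error loss.
# Random data stands in for the actual synthesis training patterns.

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 41, 30, 40

W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2                 # linear output for excitations

def train(X, Y, lr=0.05, epochs=2000):
    global W1, b1, W2, b2
    for _ in range(epochs):
        H, P = forward(X)
        err = P - Y                       # gradient of MSE w.r.t. output
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)  # backprop through tanh
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

if __name__ == "__main__":
    X = rng.normal(size=(27, n_in))       # 27 desired-pattern sample vectors
    Y = rng.normal(size=(27, n_out))      # 27 excitation vectors (toy targets)
    train(X, Y)
    _, pred = forward(X)
    print("training MSE:", round(float(np.mean((pred - Y) ** 2)), 4))
```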
A toolbox to visually explore cerebellar shape changes in cerebellar disease and dysfunction.
Abulnaga, S Mazdak; Yang, Zhen; Carass, Aaron; Kansal, Kalyani; Jedynak, Bruno M; Onyike, Chiadi U; Ying, Sarah H; Prince, Jerry L
2016-02-27
The cerebellum plays an important role in motor control and is also involved in cognitive processes. Cerebellar function is specialized by location, although the exact topographic functional relationship is not fully understood. The spinocerebellar ataxias are a group of neurodegenerative diseases that cause regional atrophy in the cerebellum, yielding distinct motor and cognitive problems. The ability to study the region-specific atrophy patterns can provide insight into the problem of relating cerebellar function to location. In an effort to study these structural change patterns, we developed a toolbox in MATLAB to provide researchers a unique way to visually explore the correlation between cerebellar lobule shape changes and function loss, with a rich set of visualization and analysis modules. In this paper, we outline the functions and highlight the utility of the toolbox. The toolbox takes as input landmark shape representations of subjects' cerebellar substructures. A principal component analysis is used for dimension reduction. Following this, a linear discriminant analysis and a regression analysis can be performed to find the discriminant direction associated with a specific disease type, or the regression line of a specific functional measure can be generated. The characteristic structural change pattern of a disease type or of a functional score is visualized by sampling points on the discriminant or regression line. The sampled points are used to reconstruct synthetic cerebellar lobule shapes. We showed a few case studies highlighting the utility of the toolbox and we compare the analysis results with the literature.
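A minimal sketch of that pipeline, under our own simplifying assumptions, is shown below: PCA of landmark shape vectors, a two-class Fisher discriminant direction in PCA space, and sampling along that direction with back-projection to synthesize characteristic shapes. The random data and function names are stand-ins; this is not the toolbox's MATLAB code.

```python
import numpy as np

# Minimal sketch: PCA for dimension reduction, Fisher discriminant direction
# between two groups in PCA space, then sampling along the direction and
# back-projecting to shape space. Data are random stand-ins for landmarks.

def pca(X, n_components):
    mean = X.mean(0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]                 # principal axes

def fisher_direction(Z, labels):
    z0, z1 = Z[labels == 0], Z[labels == 1]
    Sw = np.cov(z0.T) * (len(z0) - 1) + np.cov(z1.T) * (len(z1) - 1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Z.shape[1]), z1.mean(0) - z0.mean(0))
    return w / np.linalg.norm(w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_landmarks = 50                               # 50 (x, y, z) landmarks
    X = rng.normal(size=(40, n_landmarks * 3))     # 40 subjects
    labels = np.array([0] * 20 + [1] * 20)
    X[labels == 1, :30] += 0.8                     # synthetic "atrophy" offset
    mean, axes = pca(X, n_components=5)
    Z = (X - mean) @ axes.T                        # PCA scores
    w = fisher_direction(Z, labels)
    for t in np.linspace(-2, 2, 5):                # sample along discriminant
        synthetic_shape = mean + (t * w) @ axes    # back-project to shape space
        print(round(float(t), 1), synthetic_shape[:3])
```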
Impact of Uncertainty from Load-Based Reserves and Renewables on Dispatch Costs and Emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Bowen; Maroukis, Spencer D.; Lin, Yashen
2016-11-21
Aggregations of controllable loads are considered to be a fast-responding, cost-efficient, and environmentally friendly candidate for power system ancillary services. Unlike conventional service providers, the potential capacity from the aggregation is highly affected by factors like ambient conditions and load usage patterns. Previous work modeled aggregations of controllable loads (such as air conditioners) as thermal batteries, which are capable of providing reserves but with uncertain capacity. A stochastic optimal power flow problem was formulated to manage this uncertainty, as well as uncertainty in renewable generation. In this paper, we explore how the types and levels of uncertainty, generation reserve costs, and controllable load capacity affect the dispatch solution, operational costs, and CO2 emissions. We also compare the results of two methods for solving the stochastic optimization problem, namely the probabilistically robust method and analytical reformulation assuming Gaussian distributions. Case studies are conducted on a modified IEEE 9-bus system with renewables, controllable loads, and congestion. We find that different types and levels of uncertainty have significant impacts on dispatch and emissions. More controllable loads and less conservative solution methodologies lead to lower costs and emissions.
Woods, H Arthur; Dillon, Michael E; Pincebourde, Sylvain
2015-12-01
We analyze the effects of changing patterns of thermal availability, in space and time, on the performance of small ectotherms. We approach this problem by breaking it into a series of smaller steps, focusing on: (1) how macroclimates interact with living and nonliving objects in the environment to produce a mosaic of thermal microclimates and (2) how mobile ectotherms filter those microclimates into realized body temperatures by moving around in them. Although the first step (generation of mosaics) is conceptually straightforward, there still exists no general framework for predicting spatial and temporal patterns of microclimatic variation. We organize potential variation along three axes: the nature of the objects producing the microclimates (abiotic versus biotic), how microclimates translate macroclimatic variation (amplify versus buffer), and the temporal and spatial scales over which microclimatic conditions vary (long versus short). From this organization, we propose several general rules about patterns of microclimatic diversity. To examine the second step (behavioral sampling of locally available microclimates), we construct a set of models that simulate ectotherms moving on a thermal landscape according to simple sets of diffusion-based rules. The models explore the effects of both changes in body size (which affect the time scale over which organisms integrate operative body temperatures) and increases in the mean and variance of temperature on the thermal landscape. Collectively, the models indicate that both simple behavioral rules and interactions between body size and spatial patterns of thermal variation can profoundly affect the distribution of realized body temperatures experienced by ectotherms. These analyses emphasize the rich set of problems still to solve before arriving at a general, predictive theory of the biological consequences of climate change. Copyright © 2014 Elsevier Ltd. All rights reserved.
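As a rough sketch of the kind of simulation described, the code below moves an ectotherm by a simple random walk on a 2D thermal landscape and lets its body temperature relax toward the local operative temperature with a time constant that grows with body size. The movement rule, landscape, and time constants are illustrative choices, not the authors' model specification.

```python
import numpy as np

# Minimal sketch: diffusion-like movement on a thermal landscape plus
# size-dependent thermal inertia. All parameters are illustrative.

def simulate(body_size_tau, steps=5000, grid=60, seed=0):
    rng = np.random.default_rng(seed)
    # Thermal landscape: smooth gradient plus patchy microclimates.
    x, y = np.meshgrid(np.linspace(0, 1, grid), np.linspace(0, 1, grid))
    T = 20 + 15 * x + 5 * np.sin(8 * np.pi * x) * np.sin(8 * np.pi * y)
    pos = np.array([grid // 2, grid // 2])
    Tb = T[pos[1], pos[0]]
    history = np.empty(steps)
    for t in range(steps):
        pos = (pos + rng.integers(-1, 2, size=2)) % grid      # random-walk step
        Te = T[pos[1], pos[0]]                                 # operative temperature
        Tb += (Te - Tb) / body_size_tau                        # thermal inertia
        history[t] = Tb
    return history

if __name__ == "__main__":
    for tau, label in [(5, "small ectotherm"), (200, "large ectotherm")]:
        h = simulate(tau)
        print(label, "mean", round(h.mean(), 2), "sd", round(h.std(), 2))
```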
Kinesthetic information disambiguates visual motion signals.
Hu, Bo; Knill, David C
2010-05-25
Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.
Thermal control of electroosmotic flow in a microchannel through temperature-dependent properties.
Kwak, Ho Sang; Kim, Hyoungsoo; Hyun, Jae Min; Song, Tae-Ho
2009-07-01
A numerical investigation is conducted on the electroosmotic flow and associated heat transfer in a two-dimensional microchannel. The objective of this study is to explore a new conceptual idea that is control of an electroosmotic flow by using a thermal field effect through the temperature-dependent physical properties. Two exemplary problems are examined: a flow in a microchannel with a constant vertical temperature difference between two horizontal walls and a flow in a microchannel with the wall temperatures varying horizontally in a sinusoidal manner. The results of numerical computations showed that a proper control of thermal field may be a viable means to manipulate various non-plug-like flow patterns. A constant vertical temperature difference across the channel produces a shear flow. The horizontally-varying thermal condition results in spatial variation of physical properties to generate fluctuating flow patterns. The temperature variation at the wall with alternating vertical temperature gradient induces a wavy flow.
Model-free inference of direct network interactions from nonlinear collective dynamics.
Casadiego, Jose; Nitzan, Mor; Hallerberg, Sarah; Timme, Marc
2017-12-19
The topology of interactions in network dynamical systems fundamentally underlies their function. Accelerating technological progress creates massively available data about collective nonlinear dynamics in physical, biological, and technological systems. Detecting direct interaction patterns from those dynamics still constitutes a major open problem. In particular, current nonlinear dynamics approaches mostly require a priori knowledge of a model of the (often high-dimensional) system dynamics. Here we develop a model-independent framework for inferring direct interactions solely from recording the nonlinear collective dynamics generated. Introducing an explicit dependency matrix in combination with a block-orthogonal regression algorithm, the approach works reliably across many dynamical regimes, including transient dynamics toward steady states, periodic and non-periodic dynamics, and chaos. Together with its capabilities to reveal network (two point) as well as hypernetwork (e.g., three point) interactions, this framework may thus open up options in nonlinear dynamics for inferring direct interaction patterns across systems where no model is known.
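The block-orthogonal regression algorithm itself is not reproduced here; the sketch below only illustrates the underlying regression idea under our own simplifications: regress each unit's time derivative on candidate interaction basis functions of the other units and keep the largest contributions as inferred direct links.

```python
import numpy as np

# Minimal sketch of regression-based interaction inference on a toy coupled
# phase-oscillator network. Plain least squares is used; this is a
# simplification, not the paper's block-orthogonal algorithm.

def simulate(adjacency, steps=4000, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n = adjacency.shape[0]
    omega = rng.uniform(0.5, 1.5, n)
    x = rng.uniform(0, 2 * np.pi, n)
    traj = np.empty((steps, n))
    for t in range(steps):
        dx = omega + (adjacency * np.sin(x[None, :] - x[:, None])).sum(1)
        x = x + dt * dx
        traj[t] = x
    return traj, dt

def infer_links(traj, dt):
    n = traj.shape[1]
    dxdt = np.diff(traj, axis=0) / dt
    X = traj[:-1]
    scores = np.zeros((n, n))
    for i in range(n):
        basis = np.column_stack([np.sin(X[:, j] - X[:, i]) for j in range(n)]
                                + [np.ones(len(X))])          # + constant drive
        coef, *_ = np.linalg.lstsq(basis, dxdt[:, i], rcond=None)
        scores[i] = np.abs(coef[:n])
        scores[i, i] = 0.0
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = 0.3 * (rng.random((5, 5)) < 0.3)          # weak, sparse coupling
    np.fill_diagonal(A, 0)
    traj, dt = simulate(A)
    S = infer_links(traj, dt)
    print("true links:\n", (A > 0).astype(int))
    print("inferred weights:\n", np.round(S, 2))
```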
Data Auditor: Analyzing Data Quality Using Pattern Tableaux
NASA Astrophysics Data System (ADS)
Srivastava, Divesh
Monitoring databases maintain configuration and measurement tables about computer systems, such as networks and computing clusters, and serve important business functions, such as troubleshooting customer problems, analyzing equipment failures, planning system upgrades, etc. These databases are prone to many data quality issues: configuration tables may be incorrect due to data entry errors, while measurement tables may be affected by incorrect, missing, duplicate and delayed polls. We describe Data Auditor, a tool for analyzing data quality and exploring data semantics of monitoring databases. Given a user-supplied constraint, such as a boolean predicate expected to be satisfied by every tuple, a functional dependency, or an inclusion dependency, Data Auditor computes "pattern tableaux", which are concise summaries of subsets of the data that satisfy or fail the constraint. We discuss the architecture of Data Auditor, including the supported types of constraints and the tableau generation mechanism. We also show the utility of our approach on an operational network monitoring database.
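A minimal sketch of the tableau idea, under our own assumptions, is shown below: for a user-supplied boolean predicate, enumerate simple attribute-value patterns (with unlisted attributes acting as wildcards) and keep those whose matching tuples satisfy the predicate with enough support and confidence. The thresholds and enumeration strategy are illustrative and are not Data Auditor's actual generation mechanism.

```python
from itertools import combinations

# Minimal sketch of computing a pattern tableau for a boolean predicate:
# keep attribute-value patterns whose matching rows satisfy the predicate
# with sufficient support and confidence. Thresholds are illustrative.

def candidate_patterns(rows, attrs, max_attrs=2):
    seen = set()
    for r in rows:
        for k in range(1, max_attrs + 1):
            for subset in combinations(attrs, k):
                seen.add(tuple(sorted((a, r[a]) for a in subset)))
    return seen

def tableau(rows, attrs, predicate, min_support=3, min_confidence=0.9):
    result = []
    for pat in candidate_patterns(rows, attrs):
        matching = [r for r in rows if all(r[a] == v for a, v in pat)]
        if len(matching) < min_support:
            continue
        conf = sum(predicate(r) for r in matching) / len(matching)
        if conf >= min_confidence:
            result.append((dict(pat), len(matching), round(conf, 2)))
    return result

if __name__ == "__main__":
    rows = [{"vendor": "A", "region": "east", "poll_ms": 200 + 10 * i} for i in range(5)] \
         + [{"vendor": "B", "region": "east", "poll_ms": 900 + 10 * i} for i in range(5)]
    # Hypothetical constraint: polls should complete within 500 ms.
    print(tableau(rows, ["vendor", "region"], lambda r: r["poll_ms"] <= 500))
```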
Human mobility: Models and applications
NASA Astrophysics Data System (ADS)
Barbosa, Hugo; Barthelemy, Marc; Ghoshal, Gourab; James, Charlotte R.; Lenormand, Maxime; Louail, Thomas; Menezes, Ronaldo; Ramasco, José J.; Simini, Filippo; Tomasini, Marcello
2018-03-01
Recent years have witnessed an explosion of extensive geolocated datasets related to human movement, enabling scientists to quantitatively study individual and collective mobility patterns, and to generate models that can capture and reproduce the spatiotemporal structures and regularities in human trajectories. The study of human mobility is especially important for applications such as estimating migratory flows, traffic forecasting, urban planning, and epidemic modeling. In this survey, we review the approaches developed to reproduce various mobility patterns, with the main focus on recent developments. This review can be used both as an introduction to the fundamental modeling principles of human mobility, and as a collection of technical methods applicable to specific mobility-related problems. The review organizes the subject by differentiating between individual and population mobility and also between short-range and long-range mobility. Throughout the text the description of the theory is intertwined with real-world applications.
Rotation-invariant neural pattern recognition system with application to coin recognition.
Fukumi, M; Omatu, S; Takeda, F; Kosaka, T
1992-01-01
In pattern recognition, it is often necessary to deal with problems to classify a transformed pattern. A neural pattern recognition system which is insensitive to rotation of input pattern by various degrees is proposed. The system consists of a fixed invariance network with many slabs and a trainable multilayered network. The system was used in a rotation-invariant coin recognition problem to distinguish between a 500 yen coin and a 500 won coin. The results show that the approach works well for variable rotation pattern recognition.
CrowdPhase: crowdsourcing the phase problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorda, Julien; Sawaya, Michael R.; Yeates, Todd O., E-mail: yeates@mbi.ucla.edu
The idea of attacking the phase problem by crowdsourcing is introduced. Using an interactive, multi-player, web-based system, participants work simultaneously to select phase sets that correspond to better electron-density maps in order to solve low-resolution phasing problems. The human mind innately excels at some complex tasks that are difficult to solve using computers alone. For complex problems amenable to parallelization, strategies can be developed to exploit human intelligence in a collective form: such approaches are sometimes referred to as ‘crowdsourcing'. Here, a first attempt at a crowdsourced approach for low-resolution ab initio phasing in macromolecular crystallography is proposed. A collaborative online game named CrowdPhase was designed, which relies on a human-powered genetic algorithm, where players control the selection mechanism during the evolutionary process. The algorithm starts from a population of ‘individuals', each with a random genetic makeup, in this case a map prepared from a random set of phases, and tries to cause the population to evolve towards individuals with better phases based on Darwinian survival of the fittest. Players apply their pattern-recognition capabilities to evaluate the electron-density maps generated from these sets of phases and to select the fittest individuals. A user-friendly interface, a training stage and a competitive scoring system foster a network of well trained players who can guide the genetic algorithm towards better solutions from generation to generation via gameplay. CrowdPhase was applied to two synthetic low-resolution phasing puzzles and it was shown that players could successfully obtain phase sets in the 30° phase error range and corresponding molecular envelopes showing agreement with the low-resolution models. The successful preliminary studies suggest that with further development the crowdsourcing approach could fill a gap in current crystallographic methods by making it possible to extract meaningful information in cases where limited resolution might otherwise prevent initial phasing.
The New England travel market: generational travel patterns, 1979 to 1996
Rod Warnick
2002-01-01
Generations of travelers who select New England as a primary destination are examined over time from the years of 1979 through 1996 and the analysis serves to update an earlier review of generational travel patterns of the region (Warnick, 1994). Changes in travel patterns are noted by overall adjusted annual change rates by demographic and geographic regions of...
Westcott, Nathan P; Pulsipher, Abigail; Lamb, Brian M; Yousaf, Muhammad N
2008-09-02
An expedient and inexpensive method to generate patterned aldehydes on self-assembled monolayers (SAMs) of alkanethiolates on gold with control of density for subsequent chemoselective immobilization from commercially available starting materials has been developed. Utilizing microfluidic cassettes, primary alcohol oxidation of tetra(ethylene glycol) undecane thiol and 11-mercapto-1-undecanol SAMs was performed directly on the surface, generating patterned aldehyde groups with pyridinium chlorochromate. The precise density of surface aldehydes generated can be controlled and characterized by electrochemistry. For biological applications, fibroblast cells were seeded on patterned surfaces presenting biospecific cell-adhesive RGD (Arg-Gly-Asp) peptides.
Automated branching pattern report generation for laparoscopic surgery assistance
NASA Astrophysics Data System (ADS)
Oda, Masahiro; Matsuzaki, Tetsuro; Hayashi, Yuichiro; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku
2015-05-01
This paper presents a method for generating branching pattern reports of abdominal blood vessels for laparoscopic gastrectomy. In gastrectomy, it is very important to understand the branching structure of the abdominal arteries and veins, which feed and drain specific abdominal organs including the stomach, the liver and the pancreas. In current clinical practice, a surgeon creates a diagnostic report of the patient anatomy. This report summarizes the branching patterns of the blood vessels related to the stomach, and the surgeon then decides on the actual operative procedure. This paper presents an automated method to generate a branching pattern report for abdominal blood vessels based on automated anatomical labeling. The report contains a 3D rendering showing the important blood vessels and descriptions of the branching pattern of each vessel. We applied this method to fifty cases of 3D abdominal CT scans and confirmed that the proposed method can automatically generate branching pattern reports of abdominal arteries.
The calculating hemispheres: studies of a split-brain patient.
Funnell, Margaret G; Colvin, Mary K; Gazzaniga, Michael S
2007-06-11
The purpose of the study was to investigate simple calculation in the two cerebral hemispheres of a split-brain patient. In a series of four experiments, the left hemisphere was superior to the right in simple calculation, confirming the previously reported left hemisphere specialization for calculation. In two different recognition paradigms, right hemisphere performance was at chance for all arithmetic operations, with the exception of subtraction in a two-alternative forced choice paradigm (performance was at chance when the lure differed from the correct answer by a magnitude of 1 but above chance when the magnitude difference was 4). In a recall paradigm, the right hemisphere performed above chance for both addition and subtraction, but performed at chance levels for multiplication and division. The error patterns in that experiment suggested that for subtraction and addition, the right hemisphere does have some capacity for approximating the solution even when it is unable to generate the exact solution. Furthermore, right hemisphere accuracy in addition and subtraction was higher for problems with small operands than with large operands. An additional experiment assessed approximate and exact addition in the two hemispheres for problems with small and large operands. The left hemisphere was equally accurate in both tasks but the right hemisphere was more accurate in approximate addition than in exact addition. In exact addition, right hemisphere accuracy was higher for problems with small operands than large, but the opposite pattern was found for approximate addition.
The construct-behavior gap in behavioral decision research: A challenge beyond replicability.
Regenwetter, Michel; Robinson, Maria M
2017-10-01
Behavioral decision research compares theoretical constructs like preferences to behavior such as observed choices. Three fairly common links from constructs to behavior are (1) to tally, across participants and decision problems, the number of choices consistent with one predicted pattern of pairwise preferences; (2) to compare what most people choose in each decision problem against a predicted preference pattern; or (3) to enumerate the decision problems in which two experimental conditions generate a one-sided significant difference in choice frequency 'consistent' with the theory. Although simple, these theoretical links are heuristics. They are subject to well-known reasoning fallacies, most notably the fallacy of sweeping generalization and the fallacy of composition. No amount of replication can alleviate these fallacies. On the contrary, reiterating logically inconsistent theoretical reasoning over and again across studies obfuscates science. As a case in point, we consider pairwise choices among simple lotteries and the hypotheses of overweighting or underweighting of small probabilities, as well as the description-experience gap. We discuss ways to avoid reasoning fallacies in bridging the conceptual gap between hypothetical constructs, such as "overweighting," and observable pairwise choice data. Although replication is invaluable, successful replication of hard-to-interpret results is not. Behavioral decision research stands to gain much theoretical and empirical clarity by spelling out precise and formally explicit theories of how hypothetical constructs translate into observable behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Laser induced Erasable Patterns in a N* Liquid Crystal on an Iron Doped Lithium Niobate (Postprint)
2017-10-12
...be applied selectively to erase these patterns. Thus, a promising method is reported to generate reconfigurable patterns, photonic motifs, and touch-sensitive devices in a N* liquid crystal on iron-doped lithium niobate. Possible motifs are not limited to graphics; it should also be possible to write...
Salisbury, Dean F; McCathern, Alexis G
2016-11-01
The simple mismatch negativity (MMN) to tones deviating physically (in pitch, loudness, duration, etc.) from repeated standard tones is robustly reduced in schizophrenia. Although generally interpreted to reflect memory or cognitive processes, the simple MMN likely contains some activity from non-adapted sensory cells, clouding what process is affected in schizophrenia. Research in healthy participants has demonstrated that MMN can be elicited by deviations from abstract auditory patterns and complex rules that do not cause sensory adaptation. Whether persons with schizophrenia show abnormalities in the complex MMN is unknown. Fourteen schizophrenia participants and 16 matched healthy controls underwent EEG recording while listening to 400 groups of 6 tones 330 ms apart, separated by 800 ms. Occasional deviant groups were missing the 4th or 6th tone (50 groups each). Healthy participants generated a robust response to a missing but expected tone. The schizophrenia group was significantly impaired in activating the missing stimulus MMN, generating no significant activity at all. Schizophrenia affects the ability of "primitive sensory intelligence" and pre-attentive perceptual mechanisms to form implicit groups in the auditory environment. Importantly, this deficit must relate to abnormalities in abstract complex pattern analysis rather than sensory problems in the disorder. The results indicate a deficit in parsing of the complex auditory scene which likely impacts negatively on successful social navigation in schizophrenia. Knowledge of the location and circuit architecture underlying the true novelty-related MMN and its pathophysiology in schizophrenia will help target future interventions.
The effect of butterfly scales on flight efficiency and leading edge vortex formation
NASA Astrophysics Data System (ADS)
Lang, Amy; Wilroy, Jacob; Wahidi, Redha; Slegers, Nathan; Heilman, Micahel; Cranford, Jacob
2016-11-01
It is hypothesized that the scales on a butterfly wing lead to increased flight efficiency. Recent testing of live butterflies tracked their motion over 246 flights for 11 different specimens. Results show a 37.8 percent mean decrease in flight efficiency and a flapping amplitude reduction of 6.7 percent once the scales were removed. This change could be largely a result of how the leading edge vortex (LEV) interacts with the wing. To simplify this complex flow problem, an experiment was designed to focus on the alteration of 2-D vortex development with a variation in surface patterning. Specifically, the secondary vorticity generated by the LEV interacting at the patterned surface was studied, as well as the subsequent effect on the LEV's growth rate and peak circulation. For this experiment butterfly-inspired grooves were created using additive manufacturing and were attached to a flat plate with a chordwise orientation, thus increasing plate surface area. The vortex generated by the grooved plate was then compared to a smooth case as the plate translated vertically through a tow tank at Re = 1500, 3000, and 6000. Using DPIV, the vortex formation was documented and a maximum vortex formation time of 4.22 was found based on the flat plate travel distance and chord length. Results indicate that the patterned surface slows down the growth of the vortex, which corroborates the flight test results. Funding from NSF CBET Fluid Dynamics is gratefully acknowledged.
ERIC Educational Resources Information Center
Erdogan, Abdulkadir
2015-01-01
Turkish primary mathematics curriculum emphasizes the role of problem solving for teaching mathematics and pays particular attention to problem solving strategies. Patterns as a subject and the use of patterns as a non-routine problem solving strategy are also emphasized in the curriculum. The primary purpose of this study was to determine how…
Curriculum enactment patterns and associated factors from teachers' perspectives
NASA Astrophysics Data System (ADS)
Son, Ji-Won; Kim, Ok-Kyeong
2016-12-01
As part of a larger effort to improve teacher capacity for high-quality mathematics instruction, we investigated the factors that are associated with different enactment patterns at three levels: contextual (e.g., type and quality of textbook), individual (e.g., teacher knowledge), and teachers' opportunity-to-learn (e.g., professional development experiences). Analysis of 183 teachers' self-reports on their practices revealed three notable findings. First, the factors at the three levels were all found to be significantly related to the different patterns of enacted curriculum. However, the use of quality textbooks and the alignment of teachers' views and instructional goals with curriculum goals were found to be the two factors that are most strongly associated with the enactment pattern of high-level problems and high-level teacher questions in instruction. Furthermore, teachers with the enactment pattern of increasing lower cognitive demand of problems into higher ones tended to rate their curriculum knowledge higher than teachers with the enactment pattern of using low-level problems and teacher questions in their teaching. In particular, deviation from and dissatisfaction with their assigned low-quality textbooks were found to be critical factors that are associated with the enactment pattern of increasing lower cognitive demands of problems in instruction.
Design optimization of large-size format edge-lit light guide units
NASA Astrophysics Data System (ADS)
Hastanin, J.; Lenaerts, C.; Fleury-Frenette, K.
2016-04-01
In this paper, we present an original method of dot pattern generation dedicated to the design optimization of large-size format light guide plates (LGPs), such as those used in photo-bioreactors, for which the number of dots greatly exceeds the maximum allowable number of optical objects supported by most common ray-tracing software. In the proposed method, in order to simplify the computational problem, the original optical system is replaced by an equivalent one. Accordingly, the original dot pattern is split into multiple small sections, inside which the dot size variation is less than the typical printing resolution of the ink dots. These sections are then replaced by equivalent cells with a continuous diffusing film. After that, we adjust the TIS (Total Integrated Scatter) two-dimensional distribution over the grid of equivalent cells, using an iterative optimization procedure. Finally, the obtained optimal TIS distribution is converted into the dot size distribution by applying an appropriate conversion rule. An original semi-empirical equation dedicated to rectangular large-size LGPs is proposed for the initial guess of the TIS distribution. It allows the total time needed for dot pattern optimization to be significantly reduced.
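To make the iterative step concrete, the following sketch (Python; the cell count, light-decay model, damping factor and conversion rule are all assumptions of this illustration, not the authors' implementation) adjusts the TIS of a one-dimensional row of equivalent cells until the extracted flux is uniform, then maps the optimized TIS to a relative dot size.

```python
import numpy as np

# 1-D toy of the described approach (cell count, decay model and conversion
# rule are assumptions of this sketch, not the authors' implementation).
n_cells = 100
tis = np.full(n_cells, 0.01)        # initial Total Integrated Scatter per cell
target = 1.0 / n_cells              # uniform extraction target (fraction of input flux)

for _ in range(300):
    flux = 1.0                      # normalised flux entering the guide
    extracted = np.empty(n_cells)
    for i in range(n_cells):
        extracted[i] = flux * tis[i]    # light scattered out by equivalent cell i
        flux -= extracted[i]            # remaining guided flux
    # damped proportional correction of each cell's TIS toward the uniform target
    tis *= np.sqrt(target / np.maximum(extracted, 1e-12))
    tis = np.clip(tis, 1e-4, 1.0)

# convert the optimised TIS into a relative dot size via a monotonic rule
dot_area = tis / tis.max()
print(dot_area[:5], dot_area[-5:])
```

As expected for an edge-lit guide, the optimized scatter (and hence dot size) grows toward the far end, where less guided flux remains.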
Short-term forecasts gain in accuracy. [Regression technique using "Box-Jenkins" analysis]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Box-Jenkins time-series models offer accuracy for short-term forecasts that compare with large-scale macroeconomic forecasts. Utilities need to be able to forecast peak demand in order to plan their generating, transmitting, and distribution systems. This new method differs from conventional models by not assuming specific data patterns, but by fitting available data into a tentative pattern on the basis of auto-correlations. Three types of models (autoregressive, moving average, or mixed autoregressive/moving average) can be used according to which provides the most appropriate combination of autocorrelations and related derivatives. Major steps in choosing a model are identifying potential models, estimating the parameters of the problem, and running a diagnostic check to see if the model fits the parameters. The Box-Jenkins technique is well suited for seasonal patterns, which makes it possible to have as short as hourly forecasts of load demand. With accuracy up to two years, the method will allow electricity price-elasticity forecasting that can be applied to facility planning and rate design. (DCK)
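A minimal sketch of this workflow, assuming Python with statsmodels and synthetic hourly load data (none of it from the report), could look as follows; the ARIMA orders shown are placeholders for the identification and diagnostic steps described above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Illustrative sketch only: fit a seasonal Box-Jenkins model to synthetic
# hourly load data and produce a short-term (next-day) forecast.
rng = np.random.default_rng(0)
hours = pd.date_range("2024-01-01", periods=24 * 60, freq="h")
daily_cycle = 10 * np.sin(2 * np.pi * np.arange(len(hours)) / 24)
load = 100 + daily_cycle + rng.normal(0, 2, len(hours)).cumsum() * 0.05
series = pd.Series(load, index=hours)

# In practice the orders come from autocorrelation diagnostics; here we simply
# assume a mixed ARIMA model with a 24-hour seasonal component.
model = ARIMA(series, order=(1, 1, 1), seasonal_order=(1, 0, 1, 24))
result = model.fit()
print(result.aic)                  # diagnostic check, e.g. compare AIC across candidates
print(result.forecast(steps=24))   # next-day hourly load forecast
```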
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Ximan
The shrinking of IC devices has followed Moore's Law for over three decades, which states that the density of transistors on integrated circuits will double about every two years. This great achievement is obtained via continuous advance in lithography technology. With the adoption of complicated resolution enhancement technologies, such as the phase shifting mask (PSM), the optical proximity correction (OPC), optical lithography with wavelength of 193 nm has enabled 45 nm printing by immersion method. However, this achievement comes together with the skyrocketing cost of masks, which makes the production of low volume application-specific IC (ASIC) impractical. In order to provide an economical lithography approach for low to medium volume advanced IC fabrication, a maskless ion beam lithography method, called Maskless Micro-ion-beam Reduction Lithography (MMRL), has been developed in the Lawrence Berkeley National Laboratory. The development of the prototype MMRL system has been described by Dr. Vinh Van Ngo in his Ph.D. thesis. But the resolution realized on the prototype MMRL system was far from the design expectation. In order to improve the resolution of the MMRL system, the ion optical system has been investigated. By integrating a field-free limiting aperture into the optical column, reducing the electromagnetic interference and cleaning the RF plasma, the resolution has been improved to around 50 nm. Computational analysis indicates that the MMRL system can be operated with an exposure field size of 0.25 mm and a beam half angle of 1.0 mrad on the wafer plane. Ion-ion interactions have been studied with a two-particle physics model. The results are in excellent agreement with those published by the other research groups. The charge-interaction analysis of MMRL shows that the ion-ion interactions must be reduced in order to obtain a throughput higher than 10 wafers per hour on 300-mm wafers. In addition, two different maskless lithography strategies have been studied. The dependence of the throughput with the exposure field size and the speed of the mechanical stage has been investigated. In order to perform maskless lithography, different micro-fabricated pattern generators have been developed for the MMRL system. Ion beamlet switching has been successfully demonstrated on the MMRL system. A positive bias voltage around 10 volts is sufficient to switch off the ion current on the micro-fabricated pattern generators. Some unexpected problems, such as the high-energy secondary electron radiations, have been discovered during the experimental investigation. Thermal and structural analysis indicates that the aperture displacement error induced by thermal expansion can satisfy the 3δ CD requirement for lithography nodes down to 25 nm. The cross-talking effect near the surface and inside the apertures of the pattern generator has been simulated in a 3-D ray-tracing code. New pattern generator design has been proposed to reduce the cross-talking effect. In order to eliminate the surface charging effect caused by the secondary electrons, a new beam-switching scheme in which the switching electrodes are immersed in the plasma has been demonstrated on a mechanically fabricated pattern generator.
Thinking in Italian: Problem-Solving Activities for the Italian Classroom.
ERIC Educational Resources Information Center
Danesi, Marcel
1985-01-01
Looks at devices that stimulate intralinguistic thinking patterns, that is, patterns that can only be induced by the structures and forms of the target language without any dependency upon native-language patterns. Focuses on problem-solving activities that help the learner develop modes of thought through which problems can be solved "in Italian." (SED)
Supporting Abstraction Processes in Problem Solving through Pattern-Oriented Instruction
ERIC Educational Resources Information Center
Muller, Orna; Haberman, Bruria
2008-01-01
Abstraction is a major concept in computer science and serves as a powerful tool in software development. Pattern-oriented instruction (POI) is a pedagogical approach that incorporates patterns in an introductory computer science course in order to structure the learning of algorithmic problem solving. This paper examines abstraction processes in…
Touch Processing and Social Behavior in ASD.
O Miguel, Helga; Sampaio, Adriana; Martínez-Regueiro, Rocío; Gómez-Guerrero, Lorena; López-Dóriga, Cristina Gutiérrez; Gómez, Sonia; Carracedo, Ángel; Fernández-Prieto, Montse
2017-08-01
Abnormal patterns of touch processing have been linked to core symptoms in ASD. This study examined the relation between tactile processing patterns and social problems in 44 children and adolescents with ASD, aged 6-14 (M = 8.39 ± 2.35). Multiple linear regression indicated significant associations between touch processing and social problems. No such relationships were found for social problems and autism severity. Within touch processing, patterns of hyper-responsiveness and hypo-responsiveness best predicted social problems, whereas sensory-seeking did not. These results support that atypical touch processing in individuals with ASD might be contributing to the social problems they present. Moreover, they highlight the need to explore in more depth the contribution of sensory features to the ASD phenotype.
Pattern of mathematic representation ability in magnetic electricity problem
NASA Astrophysics Data System (ADS)
Hau, R. R. H.; Marwoto, P.; Putra, N. M. D.
2018-03-01
The mathematical representation ability used in solving magnetic electricity problems gives information about the way students understand magnetic electricity. Students show varied patterns of mathematical representation ability when solving such problems. This study aims to determine the pattern of students' mathematical representation ability in solving magnetic electricity problems. The research method used is qualitative. The subjects of this study are fourth-semester students of the UNNES Physics Education Study Program. Data were collected by giving a description test referring to a test of mathematical representation ability, together with interviews on the topics of field lines and Gauss's law. From the data analysis, students' mathematical representation ability in solving magnetic electricity problems is categorized as high, medium, or low. Students in the high category tend to use a pattern of writing the known and asked symbols, writing equations, using physical quantities, substituting quantities into equations, performing calculations, and stating the final answer. Students in the medium category tend to use several of these patterns: writing the known symbols, writing equations, using physical quantities, substituting quantities into equations, performing calculations, and stating the final answer. Students in the low category tend to use several of these patterns: writing the known symbols, writing equations, substituting quantities into equations, performing calculations, and stating the final answer.
Patterning roadmap: 2017 prospects
NASA Astrophysics Data System (ADS)
Neisser, Mark
2017-06-01
Road mapping of semiconductor chips has been underway for over 20 years, first with the International Technology Roadmap for Semiconductors (ITRS) roadmap and now with the International Roadmap for Devices and Systems (IRDS) roadmap. The original roadmap was mostly driven bottom up and was developed to ensure that the large numbers of semiconductor producers and suppliers had good information to base their research and development on. The current roadmap is generated more top-down, where the customers of semiconductor chips anticipate what will be needed in the future and the roadmap projects what will be needed to fulfill that demand. The More Moore section of the roadmap projects that advanced logic, rather than memory chips, will drive higher-resolution patterning. Potential solutions for patterning future logic nodes can be derived as extensions of "next-generation" patterning technologies currently under development. Advanced patterning has made great progress, and two "next-generation" patterning technologies, EUV and nanoimprint lithography, have the potential to be in production as early as 2018. The potential adoption of two different next-generation patterning technologies suggests that patterning technology is becoming more specialized. This is good for the industry in that it lowers overall costs, but may lead to slower progress in extending any one patterning technology in the future.
Resolving embarrassing medical conditions with online health information.
Redston, Sarah; de Botte, Sharon; Smith, Carl
2018-06-01
Reliance on online health information is proliferating and the Internet has the potential to revolutionize the provision of public health information. The anonymity of online health information may be particularly appealing to people seeking advice on 'embarrassing' health problems. The purpose of this study was to investigate (1) whether data generated by the embarrassingproblems.com health information site showed any temporal patterns in problem resolution, and (2) whether successful resolution of a medical problem using online information varied with the type of medical problem. We analyzed the responses of visitors to the embarrassingproblems.com website on the resolution of their problems. The dataset comprised 100,561 responses to information provided on 77 different embarrassing problems grouped into 9 classes of medical problem over an 82-month period. Data were analyzed with a Bernoulli Generalized Linear Model using Bayesian inference. We detected a statistically important interaction between embarrassing problem type and the time period in which data were collected, with an improvement in problem resolution over time for all of the classes of medical problem on the website but with a lower rate of increase in resolution for urinary health problems and medical problems associated with the mouth and face. As far as we are aware, this is the first analysis of data of this nature. Findings support the growing recognition that online health information can contribute to the resolution of embarrassing medical problems, but demonstrate that outcomes may vary with medical problem type. The results indicate that building data collection into online information provision can help to refine and focus health information for online users. Copyright © 2018 Elsevier B.V. All rights reserved.
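As a schematic stand-in for the analysis (synthetic data and a frequentist Bernoulli GLM in Python/statsmodels rather than the Bayesian fit used in the study), the model with a problem-class-by-time interaction might be set up like this:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data; variable names and effect sizes are assumptions of
# this sketch, not values from the paper.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "problem_class": rng.choice(["urinary", "mouth_face", "skin"], size=n),
    "period": rng.integers(0, 82, size=n),      # month within the 82-month window
})
logit = -0.5 + 0.02 * df["period"] - 0.3 * (df["problem_class"] == "urinary")
df["resolved"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Bernoulli GLM with a class-by-time interaction (frequentist approximation).
fit = smf.glm("resolved ~ C(problem_class) * period", data=df,
              family=sm.families.Binomial()).fit()
print(fit.summary())
```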
Analysis of the Problems of the Chinese College Students' EFL Classroom Writings
ERIC Educational Resources Information Center
Yu, Aiju
2012-01-01
This paper explores the problems in EFL classroom writings in the Chinese college teaching context from the perspective of textual organization and pragmatic strategy. The influence of students' native cultural thought patterns causes problems with discourse pattern and cohesion, while a lack of sufficient pragmatic strategies renders students' unawareness in…
ERIC Educational Resources Information Center
Leckman-Westin, Emily; Cohen, Patricia R.; Stueve, Ann
2009-01-01
Objective: Increased behavior problems have been reported in offspring of mothers with depression. In-home observations link maternal depressive symptoms (MDS) and mother-child interaction patterns with toddler behavior problems and examine their persistence into late childhood. Method: Maternal characteristics (N = 153) and behaviors of…
NASA Astrophysics Data System (ADS)
Iqbal, Z.; Azhar, Ehtsham; Mehmood, Zaffar; Maraj, E. N.
The present article is a study of stagnation point flow over a Riga plate with erratic thickness. The Riga plate is an electromagnetic surface in which electrodes are assembled alternately. This arrangement generates electromagnetic hydrodynamic behavior in the fluid flow. This is an attempt to investigate the influence of melting heat, thermal radiation and viscous dissipation effects on flow over a Riga plate. Transverse electric and magnetic fields are produced by the Riga plate, causing a Lorentz force parallel to the wall which contributes to directing the flow pattern. The physical problem is modeled and the reduced nonlinear system is solved numerically. A comparative analysis is carried out between solutions obtained by the Keller Box Method and a shooting technique with the Runge-Kutta-Fehlberg method of order 5. It is noted that melting heat transfer reduces the temperature distribution whereas the radiation parameter increases it. Velocity is accelerated by the modified Hartmann number, and the Eckert number contributes to raising the temperature.
The value of volume and growth measurements in timber sales management of the National Forests
NASA Technical Reports Server (NTRS)
Lietzke, K. R.
1977-01-01
This paper summarizes work performed in the estimation of gross social value of timber volume and growth rate information used in making regional harvest decisions in the National Forest System. A model was developed to permit parametric analysis. The problem is formulated as one of finding optimal inventory holding patterns. Public timber management differs from other inventory holding problems in that the inventory, itself, generates value over time in providing recreational, aesthetic and environmental goods. 'Nontimber' demand estimates are inferred from past Forest Service harvest and sales levels. The solution requires a description of the harvest rates which maintain the optimum inventory level. Gross benefits of the Landsat systems are estimated by comparison with Forest Service information gathering models. Gross annual benefits are estimated to be $5.9 million for the MSS system and $7.2 million for the TM system.
Trongnetrpunya, Amy; Nandi, Bijurika; Kang, Daesung; Kocsis, Bernat; Schroeder, Charles E; Ding, Mingzhou
2015-01-01
Multielectrode voltage data are usually recorded against a common reference. Such data are frequently used without further treatment to assess patterns of functional connectivity between neuronal populations and between brain areas. It is important to note from the outset that such an approach is valid only when the reference electrode is nearly electrically silent. In practice, however, the reference electrode is generally not electrically silent, thereby adding a common signal to the recorded data. Volume conduction further complicates the problem. In this study we demonstrate the adverse effects of common signals on the estimation of Granger causality, which is a statistical measure used to infer synaptic transmission and information flow in neural circuits from multielectrode data. We further test the hypothesis that the problem can be overcome by utilizing bipolar derivations where the difference between two nearby electrodes is taken and treated as a representation of local neural activity. Simulated data generated by a neuronal network model where the connectivity pattern is known were considered first. This was followed by analyzing data from three experimental preparations where a priori predictions regarding the patterns of causal interactions can be made: (1) laminar recordings from the hippocampus of an anesthetized rat during theta rhythm, (2) laminar recordings from V4 of an awake-behaving macaque monkey during alpha rhythm, and (3) ECoG recordings from electrode arrays implanted in the middle temporal lobe and prefrontal cortex of an epilepsy patient during fixation. For both simulation and experimental analysis the results show that bipolar derivations yield the expected connectivity patterns whereas the untreated data (referred to as unipolar signals) do not. In addition, current source density signals, where applicable, yield results that are close to the expected connectivity patterns, whereas the commonly practiced average re-reference method leads to erroneous results.
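The preprocessing idea can be illustrated with a toy example (Python; the signals, lags and channel layout are invented and this is not the authors' pipeline): a common reference signal cancels when nearby contacts are subtracted, after which a standard Granger causality test can be applied.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Toy sketch: two recording sites, each with two nearby electrodes that share
# the same (non-silent) reference signal; site b lags site a by 5 samples.
rng = np.random.default_rng(2)
n = 5000
reference = rng.normal(size=n)                              # common reference contamination
local_a = rng.normal(size=n)
local_b = np.roll(local_a, 5) + 0.5 * rng.normal(size=n)    # b lags a -> a "causes" b

site_a = np.column_stack([local_a + reference, 0.8 * local_a + reference])
site_b = np.column_stack([local_b + reference, 0.8 * local_b + reference])

bipolar_a = site_a[:, 0] - site_a[:, 1]    # the shared reference cancels in the difference
bipolar_b = site_b[:, 0] - site_b[:, 1]

# does bipolar_a Granger-cause bipolar_b? (second column is tested as the cause)
grangercausalitytests(np.column_stack([bipolar_b, bipolar_a]), maxlag=8)
```

Running the same test on the unsubtracted (unipolar) channels would instead be dominated by the shared reference, which is the adverse effect the abstract describes.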
Direct generation of abruptly focusing vortex beams using a 3/2 radial phase-only pattern.
Davis, Jeffrey A; Cottrell, Don M; Zinn, Jonathan M
2013-03-20
Abruptly focusing Airy beams have previously been generated using a radial cubic phase pattern that represents the Fourier transform of the Airy beam. The Fourier transform of this pattern is formed using a system length of 2f, where f is the focal length of the Fourier transform lens. In this work, we directly generate these abruptly focusing Airy beams using a 3/2 radial phase pattern encoded onto a liquid crystal display. The resulting optical system is much shorter. In addition, we can easily produce vortex patterns at the focal point of these beams. Experimental results match theoretical predictions.
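A minimal sketch of such a phase-only pattern, with illustrative parameter values that are not taken from the paper, combines a 3/2 radial power term with an azimuthal vortex term and wraps the result modulo 2π:

```python
import numpy as np

# Minimal sketch of a 3/2 radial phase pattern with an added vortex term; the
# SLM resolution, pixel pitch, radial scaling and topological charge below are
# assumptions of this illustration.
n = 1024                                   # display resolution (pixels)
pixel_pitch = 8e-6                         # metres per pixel, assumed
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * pixel_pitch
r = np.hypot(x, y)
theta = np.arctan2(y, x)

a = 5e7                                    # radial scaling (rad / m^1.5), assumed
ell = 3                                    # topological charge of the vortex
phase = np.mod(a * r ** 1.5 + ell * theta, 2 * np.pi)   # wrapped phase in [0, 2*pi)
```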
Exposure to Violence, Social Information Processing, and Problem Behavior in Preschool Children
Ziv, Yair
2012-01-01
Understanding the mechanisms by which early risk factors for social maladjustment contribute to disruptive behaviors in social settings is vital to developmental research and practice. A major risk factor for social maladjustment is early exposure to violence which was examined in this short-term longitudinal study in relation to social information processing patterns and externalizing and internalizing behaviors in a sample of 256 preschool children. Data on exposure to violence were obtained via parent report, data on social information processing were obtained via child interview, and data on child problem behavior were obtained via teacher report. Findings supported the hypothesis that, compared to children not exposed to violence, children reported to witness and/or experience violence are more likely to attribute hostile intent to peers, generate aggressive responses, and evaluate socially unaccepted responses (aggressive and inept) as socially suitable. The former were also found to exhibit higher levels of externalizing and internalizing behaviors. Finally, social information processing mediated the link between exposure to violence and problem behavior thus supporting this study’s general approach which argues that the link between exposure to violence and children’s problem behaviors are better understood within the context of their perceptions about social relationships. PMID:23011955
Exposure to violence, social information processing, and problem behavior in preschool children.
Ziv, Yair
2012-01-01
Understanding the mechanisms by which early risk factors for social maladjustment contribute to disruptive behaviors in social settings is vital to developmental research and practice. A major risk factor for social maladjustment is early exposure to violence, which was examined in this short-term longitudinal study in relation to social information processing (SIP) patterns and externalizing and internalizing behaviors in a sample of 256 preschool children. Data on exposure to violence were obtained via parent report, data on SIP were obtained via child interview, and data on child problem behavior were obtained via teacher report. Findings supported the hypothesis that, compared to children not exposed to violence, children reported to witness and/or experience violence are more likely to attribute hostile intent to peers, generate aggressive responses, and evaluate socially unaccepted responses (aggressive and inept) as socially suitable. The former were also found to exhibit higher levels of externalizing and internalizing behaviors. Finally, SIP mediated the link between exposure to violence and problem behavior thus supporting this study's general approach, which argues that the link between exposure to violence and children's problem behaviors are better understood within the context of their perceptions about social relationships. © 2012 Wiley Periodicals, Inc.
Ashkenazi, Sarit; Rosenberg-Lee, Miriam; Tenison, Caitlin; Menon, Vinod
2015-01-01
Developmental dyscalculia (DD) is a disability that impacts math learning and skill acquisition in school-age children. Here we investigate arithmetic problem solving deficits in young children with DD using univariate and multivariate analysis of fMRI data. During fMRI scanning, 17 children with DD (ages 7–9, grades 2 and 3) and 17 IQ- and reading ability-matched typically developing (TD) children performed complex and simple addition problems which differed only in arithmetic complexity. While the TD group showed strong modulation of brain responses with increasing arithmetic complexity, children with DD failed to show such modulation. Children with DD showed significantly reduced activation compared to TD children in the intraparietal sulcus, superior parietal lobule, supramarginal gyrus and bilateral dorsolateral prefrontal cortex in relation to arithmetic complexity. Critically, multivariate representational similarity revealed that brain response patterns to complex and simple problems were less differentiated in the DD group in bilateral anterior IPS, independent of overall differences in signal level. Taken together, these results show that children with DD not only under-activate key brain regions implicated in mathematical cognition, but they also fail to generate distinct neural responses and representations for different arithmetic problems. Our findings provide novel insights into the neural basis of DD. PMID:22682904
Ashkenazi, Sarit; Rosenberg-Lee, Miriam; Tenison, Caitlin; Menon, Vinod
2012-02-15
Developmental dyscalculia (DD) is a disability that impacts math learning and skill acquisition in school-age children. Here we investigate arithmetic problem solving deficits in young children with DD using univariate and multivariate analysis of fMRI data. During fMRI scanning, 17 children with DD (ages 7-9, grades 2 and 3) and 17 IQ- and reading ability-matched typically developing (TD) children performed complex and simple addition problems which differed only in arithmetic complexity. While the TD group showed strong modulation of brain responses with increasing arithmetic complexity, children with DD failed to show such modulation. Children with DD showed significantly reduced activation compared to TD children in the intraparietal sulcus, superior parietal lobule, supramarginal gyrus and bilateral dorsolateral prefrontal cortex in relation to arithmetic complexity. Critically, multivariate representational similarity revealed that brain response patterns to complex and simple problems were less differentiated in the DD group in bilateral anterior IPS, independent of overall differences in signal level. Taken together, these results show that children with DD not only under-activate key brain regions implicated in mathematical cognition, but they also fail to generate distinct neural responses and representations for different arithmetic problems. Our findings provide novel insights into the neural basis of DD. Copyright © 2011 Elsevier Ltd. All rights reserved.
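The core of the representational-similarity comparison can be illustrated with a toy example (Python; voxel counts and noise levels are invented): correlating the voxel-wise response patterns evoked by complex and simple problems, where a lower correlation indicates more differentiated representations.

```python
import numpy as np

# Toy illustration of pattern differentiation within one ROI; not the study's
# pipeline, and all numbers are invented.
rng = np.random.default_rng(7)
n_voxels = 200
shared = rng.normal(size=n_voxels)                       # activity common to both conditions

# "TD-like" ROI: complex and simple problems evoke clearly distinct patterns
td_simple = shared + rng.normal(scale=1.0, size=n_voxels)
td_complex = shared + rng.normal(scale=1.0, size=n_voxels)

# "DD-like" ROI: the two conditions evoke nearly the same pattern
dd_simple = shared + rng.normal(scale=0.2, size=n_voxels)
dd_complex = shared + rng.normal(scale=0.2, size=n_voxels)

print(np.corrcoef(td_simple, td_complex)[0, 1])   # lower: patterns differentiated
print(np.corrcoef(dd_simple, dd_complex)[0, 1])   # higher: patterns less differentiated
```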
Test pattern generation for ILA sequential circuits
NASA Technical Reports Server (NTRS)
Feng, YU; Frenzel, James F.; Maki, Gary K.
1993-01-01
An efficient method of generating test patterns for sequential machines implemented using one-dimensional, unilateral, iterative logic arrays (ILA's) of BTS pass transistor networks is presented. Based on a transistor level fault model, the method affords a unique opportunity for real-time fault detection with improved fault coverage. The resulting test sets are shown to be equivalent to those obtained using conventional gate level models, thus eliminating the need for additional test patterns. The proposed method advances the simplicity and ease of the test pattern generation for a special class of sequential circuitry.
JAGGED Controls Arabidopsis Petal Growth and Shape by Interacting with a Divergent Polarity Field
Sauret-Güeto, Susanna; Schiessl, Katharina; Bangham, Andrew; Sablowski, Robert; Coen, Enrico
2013-01-01
A flowering plant generates many different organs such as leaves, petals, and stamens, each with a particular function and shape. These types of organ are thought to represent variations on a common underlying developmental program. However, it is unclear how this program is modulated under different selective constraints to generate the diversity of forms observed. Here we address this problem by analysing the development of Arabidopsis petals and comparing the results to models of leaf development. We show that petal development involves a divergent polarity field with growth rates perpendicular to local polarity increasing towards the distal end of the petal. The hypothesis is supported by the observed pattern of clones induced at various stages of development and by analysis of polarity markers, which show a divergent pattern. We also show that JAGGED (JAG) has a key role in promoting distal enhancement of growth rates and influences the extent of the divergent polarity field. Furthermore, we reveal links between the polarity field and auxin function: auxin-responsive markers such as DR5 have a broader distribution along the distal petal margin, consistent with the broad distal organiser of polarity, and PETAL LOSS (PTL), which has been implicated in the control of auxin dynamics during petal initiation, is directly repressed by JAG. By comparing these results with those from studies on leaf development, we show how simple modifications of an underlying developmental system may generate distinct forms, providing flexibility for the evolution of different organ functions. PMID:23653565
Local Conjecturing Process in the Solving of Pattern Generalization Problem
ERIC Educational Resources Information Center
Sutarto; Nusantara, Toto; Subanji; Sisworo
2016-01-01
The aim of this study is to describe the process of local conjecturing in generalizing patterns based on Action, Process, Object, Schema (APOS) theory. The subjects were 16 grade 8 students from a junior high school. Data collection used a Pattern Generalization Problem (PGP) and interviews. In the first stage, students completed the PGP; in the second…
ERIC Educational Resources Information Center
Spaulding, Scott A.; Irvin, Larry K.; Horner, Robert H.; May, Seth L.; Emeldi, Monica; Tobin, Tary J.; Sugai, George
2010-01-01
Office discipline referral (ODR) data provide useful information about problem behavior and consequence patterns, social-behavioral climates, and effects of social-behavioral interventions in schools. The authors report patterns of ODRs and subsequent administrative decisions from 1,510 schools nationwide that used the School-Wide Information…
ERIC Educational Resources Information Center
Yilmaz, Yasemin; Durmus, Soner; Yaman, Hakan
2018-01-01
This study investigated the pattern problems posed by middle school mathematics preservice teachers using multiple representations to determine both their pattern knowledge levels and their abilities to transfer this knowledge to students. The design of the study is the survey method, one of the quantitative research methods. The study group was…
Large-scale runoff generation - parsimonious parameterisation using high-resolution topography
NASA Astrophysics Data System (ADS)
Gong, L.; Halldin, S.; Xu, C.-Y.
2011-08-01
World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models are generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3" (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
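A conceptual sketch of the storage-distribution idea (my reading, in Python, with an invented topographic-index sample and an assumed orientation in which wetter, high-index classes saturate first) is given below.

```python
import numpy as np

# Conceptual sketch only: bin a topographic-index sample into classes, assign
# each class a storage capacity scaled by a single parameter, and read off the
# saturated (fast-runoff-producing) fraction for a given cell-average storage.
rng = np.random.default_rng(3)
topo_index = rng.gamma(shape=4.0, scale=2.0, size=90 * 90)   # stand-in for ln(a / tan(beta))

n_classes = 30
edges = np.linspace(topo_index.min(), topo_index.max(), n_classes + 1)
class_frac = np.histogram(topo_index, bins=edges)[0] / topo_index.size
class_mid = 0.5 * (edges[:-1] + edges[1:])

scale = 0.4   # the single calibrated parameter (here: metres of storage), assumed
# assumed orientation: high-index (wet) classes get the smallest capacity
capacity = scale * (class_mid.max() - class_mid) / (class_mid.max() - class_mid.min())

def saturated_fraction(mean_storage):
    """Grid-cell fraction producing fast runoff at a given cell-average storage."""
    return class_frac[capacity <= mean_storage].sum()

for s in (0.05, 0.2, 0.35):
    print(s, saturated_fraction(s))
```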
Large-scale runoff generation - parsimonious parameterisation using high-resolution topography
NASA Astrophysics Data System (ADS)
Gong, L.; Halldin, S.; Xu, C.-Y.
2010-09-01
World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models are generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3'' (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
NASA Astrophysics Data System (ADS)
Hettiarachchi, Ranga; Yokoyama, Mitsuo; Uehara, Hideyuki
This paper presents a novel interference cancellation (IC) scheme for both synchronous and asynchronous direct-sequence code-division multiple-access (DS-CDMA) wireless channels. In the DS-CDMA system, multiple access interference (MAI) and the near-far problem (NFP) are the two factors which reduce the capacity of the system. In this paper, we propose a new algorithm that is able to detect all interference signals as an individual MAI signal by maximum correlation detection. It is based on the discovery of all the unknown spreading codes of the interference signals. Then, all possible MAI patterns, so-called replicas, are generated as summations of the interference signals, and the true MAI pattern is found by taking the correlation between the received signal and the replicas. Moreover, the receiver executes MAI cancellation in a successive manner, removing all interference signals in a single stage. Numerical results will show that the proposed IC strategy, which alleviates the detrimental effect of the MAI and the near-far problem, can significantly improve the system performance. In particular, we can obtain almost the same receiving characteristics as in the absence of interference for the asynchronous system when received powers are equal. The same performance can also be seen under any received power state for the synchronous system.
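A much-simplified toy of the replica idea (Python; synchronous, equal-power BPSK with short invented codes, ignoring the asynchronous and near-far aspects treated in the paper) enumerates all interference-bit combinations, picks the replica with maximum correlation to the received signal, and subtracts it before detecting the desired bit.

```python
import numpy as np
from itertools import product

# Simplified toy: every combination of interference bits defines one candidate
# MAI replica; the replica most correlated with the received signal is removed.
rng = np.random.default_rng(4)
n_chips, n_users = 31, 4
codes = rng.choice([-1, 1], size=(n_users, n_chips))    # user 0 is the desired user
bits = rng.choice([-1, 1], size=n_users)

received = codes.T @ bits + 0.3 * rng.normal(size=n_chips)   # synchronous, equal power

best_corr, best_replica = -np.inf, None
for interf_bits in product([-1, 1], repeat=n_users - 1):
    replica = codes[1:].T @ np.array(interf_bits)       # candidate summed MAI pattern
    corr = float(replica @ received)                    # maximum correlation detection
    if corr > best_corr:
        best_corr, best_replica = corr, replica

desired_bit = np.sign(codes[0] @ (received - best_replica))  # single-stage cancellation
print(desired_bit == bits[0])
```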
Self-Assembly of Human Serum Albumin: A Simplex Phenomenon
Thakur, Garima; Prashanthi, Kovur; Jiang, Keren; Thundat, Thomas
2017-01-01
Spontaneous self-assemblies of biomolecules can generate geometrical patterns. Our findings provide insight into the mechanism of self-assembled ring pattern generation by human serum albumin (HSA). The self-assembly is a process guided by kinetic and thermodynamic parameters. The generated protein ring patterns display a behavior which is geometrically related to an n-simplex model and is explained through thermodynamics and chemical kinetics. PMID:28930179
NASA Astrophysics Data System (ADS)
Goto, Kota; Takagi, Ryo; Miyashita, Takuya; Jimbo, Hayato; Yoshizawa, Shin; Umemura, Shin-ichiro
2015-07-01
High-intensity focused ultrasound (HIFU) is a noninvasive treatment for tumors such as cancer. In this method, ultrasound is generated outside the body and focused on the target tissue, so physical and mental stresses on the patient are minimal. A drawback of HIFU treatment is the long treatment time for a large tumor, due to the small therapeutic volume produced by a single exposure. Enhancing the heating effect of ultrasound by cavitation bubbles may solve this problem. However, this is rather difficult because cavitation clouds tend to be formed backward from the focal point while the ultrasonic intensity for heating is centered at the focal point. In this study, the focal points of the trigger pulses generating cavitation were offset forward from those of the heating ultrasound to match the cavitation clouds with the heating patterns. Results suggest that the controlled offset of focal points makes the thermal coagulation more predictable.
Pre-service teachers' metaphorical perceptions of "physics" as a concept
NASA Astrophysics Data System (ADS)
Aykutlu, Isil; Bayrak, Celal; Bezen, Sevim
2018-02-01
In this study, the aim is to reveal pre-service biology, chemistry and mathematics teachers' metaphorical perceptions of physics. The study was designed using phenomenology, one of the qualitative research methods. The sample consists of 90 pre-service teachers enrolled in the departments of biology, chemistry, and mathematics education at the faculty of education of a state university in Ankara. A metaphor form was prepared to determine pre-service teachers' mental metaphors for the physics concept. It was found that a total of 80 pre-service teachers generated 34 different metaphors for the physics concept. As a result of the study, the 34 metaphors generated by pre-service teachers for the concept of "physics" were gathered under seven different categories. It was also determined that the metaphors pre-service teachers expressed most frequently were "life" (26.25%) and "a difficult-to-solve problem" (21.25%), which fall within these conceptual categories.
Bulk Genotyping of Biopsies Can Create Spurious Evidence for Heterogeneity in Mutation Content.
Kostadinov, Rumen; Maley, Carlo C; Kuhner, Mary K
2016-04-01
When multiple samples are taken from the neoplastic tissues of a single patient, it is natural to compare their mutation content. This is often done by bulk genotyping of whole biopsies, but the chance that a mutation will be detected in bulk genotyping depends on its local frequency in the sample. When the underlying mutation count per cell is equal, homogeneous biopsies will have more high-frequency mutations, and thus more detectable mutations, than heterogeneous ones. Using simulations, we show that bulk genotyping of data simulated under a neutral model of somatic evolution generates strong spurious evidence for non-neutrality, because the pattern of tissue growth systematically generates differences in biopsy heterogeneity. Any experiment which compares mutation content across bulk-genotyped biopsies may therefore suggest mutation rate or selection intensity variation even when these forces are absent. We discuss computational and experimental approaches for resolving this problem.
Vision Algorithm for the Solar Aspect System of the HEROES Mission
NASA Technical Reports Server (NTRS)
Cramer, Alexander
2014-01-01
This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun for the High Energy Replicated Optics to Explore the Sun (HEROES) mission. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small fiducial markers. Images of this plate were processed in real time to determine the relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "Average Intersection" method, fiducial detection by a matched filter approach, identification with an ad-hoc method based on the spacing between fiducials, and image registration with a simple least squares fit. Performance is verified on a combination of artificially generated images, test data recorded on the ground, and images from the 2013 flight.
Vision Algorithm for the Solar Aspect System of the HEROES Mission
NASA Technical Reports Server (NTRS)
Cramer, Alexander; Christe, Steven; Shih, Albert
2014-01-01
This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun for the High Energy Replicated Optics to Explore the Sun (HEROES) mission. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small fiducial markers. Images of this plate were processed in real time to determine relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an Average Intersection method, fiducial detection by a matched filter approach, identification with an ad-hoc method based on the spacing between fiducials, and image registration with a simple least squares fit. Performance is verified on a combination of artificially generated images, test data recorded on the ground, and images from the 2013 flight.
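The final least-squares registration step can be sketched as follows (Python; the fiducial layout, transform parameters and similarity-transform parameterization are assumptions of this illustration, not the mission code).

```python
import numpy as np

# Given matched fiducial positions on the printed plate and in the image, fit a
# similarity transform (scale, rotation, translation) by linear least squares.
plate = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])   # mm, assumed layout
angle, scale, shift = 0.02, 3.1, np.array([120.0, 95.0])                 # "true" values for the demo
rot = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
image = plate @ (scale * rot).T + shift + np.random.default_rng(5).normal(0, 0.05, plate.shape)

# Parameterize as [a, b, tx, ty] with x' = a*x - b*y + tx and y' = b*x + a*y + ty.
A = np.zeros((2 * len(plate), 4))
A[0::2, 0], A[0::2, 1], A[0::2, 2] = plate[:, 0], -plate[:, 1], 1.0
A[1::2, 0], A[1::2, 1], A[1::2, 3] = plate[:, 1], plate[:, 0], 1.0
b = image.reshape(-1)
a_est, b_est, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.hypot(a_est, b_est), np.arctan2(b_est, a_est))   # recovered scale and rotation
```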
Optimization technique for problems with an inequality constraint
NASA Technical Reports Server (NTRS)
Russell, K. J.
1972-01-01
The general technique uses a modified version of an existing method termed the pattern search technique. A new procedure, called the parallel move strategy, permits the pattern search technique to be used with problems involving an inequality constraint.
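For illustration only (the report's parallel move strategy is not reproduced here), a basic pattern search with the inequality constraint handled by a quadratic penalty might look like this in Python:

```python
import numpy as np

# Basic coordinate pattern search on a quadratic objective with one inequality
# constraint folded into a penalty term; objective and constraint are invented.
def objective(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def constraint(x):                 # feasible when constraint(x) <= 0
    return x[0] + x[1] - 1.0

def penalized(x, weight=1e3):
    return objective(x) + weight * max(0.0, constraint(x)) ** 2

x = np.zeros(2)
step = 1.0
while step > 1e-6:
    improved = False
    for d in (np.array([1.0, 0.0]), np.array([0.0, 1.0]),
              np.array([-1.0, 0.0]), np.array([0.0, -1.0])):
        trial = x + step * d                       # exploratory move along one axis
        if penalized(trial) < penalized(x):
            x, improved = trial, True
    if not improved:
        step *= 0.5                                # no improvement: shrink the pattern
print(x, constraint(x))                            # approaches (2.5, -1.5) on the constraint
```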
Late-Life Drinking Problems: The Predictive Roles of Drinking Level vs. Drinking Pattern.
Holahan, Charles J; Brennan, Penny L; Schutte, Kathleen K; Holahan, Carole K; Hixon, J Gregory; Moos, Rudolf H
2017-05-01
Research on late-middle-aged and older adults has focused primarily on average level of alcohol consumption, overlooking variability in underlying drinking patterns. The purpose of the present study was to examine the independent contributions of an episodic heavy pattern of drinking versus a high average level of drinking as prospective predictors of drinking problems. The sample comprised 1,107 adults ages 55-65 years at baseline. Alcohol consumption was assessed at baseline, and drinking problems were indexed across 20 years. We used prospective negative binomial regression analyses controlling for baseline drinking problems, as well as for demographic and health factors, to predict the number of drinking problems at each of four follow-up waves (1, 4, 10, and 20 years). Across waves where the effects were significant, a high average level of drinking (coefficients of 1.56, 95% CI [1.24, 1.95]; 1.48, 95% CI [1.11, 1.98]; and 1.85, 95% CI [1.23, 2.79] at 1, 10, and 20 years) and an episodic heavy pattern of drinking (coefficients of 1.61, 95% CI [1.30, 1.99]; 1.61, 95% CI [1.28, 2.03]; and 1.43, 95% CI [1.08, 1.90] at 1, 4, and 10 years) each independently increased the number of drinking problems by more than 50%. Information based only on average consumption underestimates the risk of drinking problems among older adults. Both a high average level of drinking and an episodic heavy pattern of drinking pose prospective risks of later drinking problems among older adults.
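A schematic stand-in for the modelling step (synthetic data and a GLM with a fixed negative-binomial dispersion in Python/statsmodels, not the study's exact specification) is sketched below; exponentiated coefficients play the role of the incidence-rate-style increases quoted above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in: count of drinking problems regressed on drinking level,
# drinking pattern, and baseline problems. All values below are invented.
rng = np.random.default_rng(6)
n = 1107
df = pd.DataFrame({
    "high_avg_level": rng.integers(0, 2, n),
    "heavy_episodic": rng.integers(0, 2, n),
    "baseline_problems": rng.poisson(1.0, n),
})
mu = np.exp(-0.5 + 0.45 * df["high_avg_level"] + 0.45 * df["heavy_episodic"]
            + 0.30 * df["baseline_problems"])
df["problems"] = rng.poisson(mu)                     # crude overdispersed-count stand-in

fit = smf.glm("problems ~ high_avg_level + heavy_episodic + baseline_problems",
              data=df, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(np.exp(fit.params))                            # rate ratios, cf. the ~1.5 reported
```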
ERIC Educational Resources Information Center
Ehrler, David J.; McGhee, Ron L.; Evans, J. Gary
1999-01-01
Investigation conducted to link Big-Five personality traits with behavior problems identified in childhood. Results show distinct patterns of behavior problems associated with various personality characteristics. Preliminary data indicate that identifying Big-Five personality trait patterns may be a useful dimension of assessment for understanding…
Patterns of Problem-Solving in Children's Literacy and Arithmetic
ERIC Educational Resources Information Center
Farrington-Flint, Lee; Vanuxem-Cotterill, Sophie; Stiller, James
2009-01-01
Patterns of problem-solving among 5- to 7-year-olds were examined on a range of literacy (reading and spelling) and arithmetic-based (addition and subtraction) problem-solving tasks using verbal self-reports to monitor strategy choice. The results showed higher levels of variability in the children's strategy choice across Years 1 and 2 on the…
ERIC Educational Resources Information Center
Lin, Chia-Yi
2010-01-01
The purpose of this research is to find the relationships among attributes of creative problem solving ability and their relationships with the Math Creative Problem Solving Ability. In addition, the attribute patterns of high, medium, and low mathematical creative groups were identified and compared. There were 409 fifth and sixth graders…
Scale problems in reporting landscape pattern at the regional scale
R.V. O' Neill; C.T. Hunsaker; S.P. Timmins; B.L. Jackson; K.B. Jones; Kurt H. Riitters; James D. Wickham
1996-01-01
Remotely sensed data for Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the grain or resolution of the data. Grain should be 2 to 5 times smaller than the...
The heparanome--the enigma of encoding and decoding heparan sulfate sulfation.
Lamanna, William C; Kalus, Ina; Padva, Michael; Baldwin, Rebecca J; Merry, Catherine L R; Dierks, Thomas
2007-04-30
Heparan sulfate (HS) is a cell surface carbohydrate polymer modified with sulfate moieties whose highly ordered composition is central to directing specific cell signaling events. The ability of the cell to generate these information rich glycans with such specificity has opened up a new field of "heparanomics" which seeks to understand the systems involved in generating these cell type and developmental stage specific HS sulfation patterns. Unlike other instances where biological information is encrypted as linear sequences in molecules such as DNA, HS sulfation patterns are generated through a non-template driven process. Thus, deciphering the sulfation code and the dynamic nature of its generation has posed a new challenge to system biologists. The recent discovery of two sulfatases, Sulf1 and Sulf2, with the unique ability to edit sulfation patterns at the cell surface, has opened up a new dimension as to how we understand the regulation of HS sulfation patterning and pattern-dependent cell signaling events. This review will focus on the functional relationship between HS sulfation patterning and biological processes. Special attention will be given to Sulf1 and Sulf2 and how these key editing enzymes might act in concert with the HS biosynthetic enzymes to generate and regulate specific HS sulfation patterns in vivo. We will further explore the use of knock out mice as biological models for understanding the dynamic systems involved in generating HS sulfation patterns and their biological relevance. A brief overview of new technologies and innovations summarizes advances in the systems biology field for understanding non-template molecular networks and their influence on the "heparanome".
Silveira, Camila Magalhães; Siu, Erica Rosanna; Wang, Yuan-Pang; Viana, Maria Carmen; Andrade, Arthur Guerra de; Andrade, Laura Helena
2012-01-01
To investigate drinking patterns and gender differences in alcohol-related problems in a Brazilian population, with an emphasis on the frequency of heavy drinking. A cross-sectional study was conducted with a probability adult household sample (n = 1,464) in the city of São Paulo, Brazil. Alcohol intake and ICD-10 psychopathology diagnoses were assessed with the Composite International Diagnostic Interview 1.1. The analyses focused on the prevalence and determinants of 12-month non-heavy drinking, heavy episodic drinking (4-5 drinks per occasion), and heavy and frequent drinking (heavy drinking at least 3 times/week), as well as associated alcohol-related problems according to drinking patterns and gender. Nearly 22% (32.4% women, 8.7% men) of the subjects were lifetime abstainers, 60.3% were non-heavy drinkers, and 17.5% reported heavy drinking in a 12-month period (26.3% men, 10.9% women). Subjects with the highest frequency of heavy drinking reported the most problems. Among subjects who did not engage in heavy drinking, men reported more problems than did women. A gender convergence in the amount of problems was observed when considering heavy drinking patterns. Heavy and frequent drinkers were twice as likely as abstainers to present lifetime depressive disorders. Lifetime nicotine dependence was associated with all drinking patterns. Heavy and frequent drinking was not restricted to young ages. Heavy and frequent episodic drinking was strongly associated with problems in a community sample from the largest city in Latin America. Prevention policies should target this drinking pattern, independent of age or gender. These findings warrant continued research on risky drinking behavior, particularly among persistent heavy drinkers at the non-dependent level.
King, Christian
2017-02-01
To examine whether the association between soft drinks consumption and child behaviour problems differs by food security status and sleep patterns in young children. Cross-sectional observational data from the Fragile Families and Child Wellbeing Study (FFCWS), which collected information on food insecurity, soft drinks consumption, sleep patterns and child behaviour problems. Bivariate and multivariate ordinary least-squares regression analyses predicting child behaviour problems and accounting for socio-economic factors and household characteristics were performed. Twenty urban cities in the USA with a population of 200 000 or more. Parental interviews of 2829 children who were about 5 years old. Soft drinks consumption was associated with aggressive behaviours, withdrawn and attention problems for children aged 5 years. However, the association differed by food security status. The association was mostly statistically insignificant among food-secure children after accounting for socio-economic and demographic characteristics. On the other hand, soft drinks consumption was associated with behaviour problems for food-insecure children even after accounting for these factors. However, after accounting for child sleep patterns, the association between soft drinks consumption and child behaviour problems became statistically insignificant for food-insecure children. The negative association between soft drinks consumption and child behaviour problems could be explained by sleep problems for food-insecure children. Since about 21 % of households with children are food insecure, targeted efforts to reduce food insecurity would help improve dietary (reduce soft drinks consumption) and health behaviours (improve sleep) and reduce child behaviour problems.
Synonym set extraction from the biomedical literature by lexical pattern discovery.
McCrae, John; Collier, Nigel
2008-03-24
Although there are a large number of thesauri for the biomedical domain, many of them lack coverage of terms and their variant forms. Automatic thesaurus construction based on patterns was first suggested by Hearst, but it is still not clear how to automatically construct such patterns for different semantic relations and domains. In particular, it is not certain which patterns are useful for capturing synonymy. Reliance on resources such as parsers is also a limiting factor for many languages, so it is desirable to find patterns that do not require syntactic analysis. Finally, to give a more consistent and applicable result, it is desirable to use these patterns to form synonym sets in a sound way. We present a method that automatically generates regular expression patterns by expanding seed patterns in a heuristic search and then develops a feature vector based on the occurrence of term pairs in each developed pattern. This allows for a binary classification of term pairs as synonymous or non-synonymous. We then model this result as a probability graph to find synonym sets, which is equivalent to the well-studied problem of finding an optimal set cover. We achieved 73.2% precision and 29.7% recall with our method, outperforming hand-made resources such as MeSH and Wikipedia. We conclude that automatic methods can play a practical role in developing new thesauri or expanding on existing ones, and that this can be done with only a small amount of training data and no need for resources such as parsers. We also conclude that accuracy can be improved by grouping terms into synonym sets.
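To make the last step concrete, the sketch below groups terms into synonym sets from pairwise synonymy probabilities. The paper casts this as an optimal set-cover problem over a probability graph; the threshold-plus-union-find approach and the toy scores shown here are simplifying assumptions for illustration, not the authors' algorithm.

```python
# Minimal sketch (not the authors' implementation): given pairwise synonym
# probabilities produced by a pattern-based classifier, group terms into
# synonym sets.  The threshold plus union-find used here is only a rough
# stand-in for the paper's probability-graph / set-cover formulation.
from collections import defaultdict

def synonym_sets(pair_probs, threshold=0.5):
    """pair_probs: dict mapping (term_a, term_b) -> probability of synonymy."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for (a, b), p in pair_probs.items():
        find(a), find(b)          # register both terms
        if p >= threshold:        # treat the pair as synonymous
            union(a, b)

    groups = defaultdict(set)
    for term in parent:
        groups[find(term)].add(term)
    return list(groups.values())

# Hypothetical toy scores, e.g. from the regular-expression feature classifier
scores = {("neoplasm", "tumour"): 0.91, ("tumour", "tumor"): 0.97,
          ("neoplasm", "fever"): 0.04}
print(synonym_sets(scores))   # e.g. [{'neoplasm', 'tumour', 'tumor'}, {'fever'}]
```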
Liu, Xunying; Zhang, Chao; Woodland, Phil; Fonteneau, Elisabeth
2017-01-01
There is widespread interest in the relationship between the neurobiological systems supporting human cognition and emerging computational systems capable of emulating these capacities. Human speech comprehension, poorly understood as a neurobiological process, is an important case in point. Automatic Speech Recognition (ASR) systems with near-human levels of performance are now available, which provide a computationally explicit solution for the recognition of words in continuous speech. This research aims to bridge the gap between speech recognition processes in humans and machines, using novel multivariate techniques to compare incremental ‘machine states’, generated as the ASR analysis progresses over time, to the incremental ‘brain states’, measured using combined electro- and magneto-encephalography (EMEG), generated as the same inputs are heard by human listeners. This direct comparison of dynamic human and machine internal states, as they respond to the same incrementally delivered sensory input, revealed a significant correspondence between neural response patterns in human superior temporal cortex and the structural properties of ASR-derived phonetic models. Spatially coherent patches in human temporal cortex responded selectively to individual phonetic features defined on the basis of machine-extracted regularities in the speech to lexicon mapping process. These results demonstrate the feasibility of relating human and ASR solutions to the problem of speech recognition, and suggest the potential for further studies relating complex neural computations in human speech comprehension to the rapidly evolving ASR systems that address the same problem domain. PMID:28945744
Wedell, Douglas H; Moro, Rodrigo
2008-04-01
Two experiments used within-subject designs to examine how conjunction errors depend on the use of (1) choice versus estimation tasks, (2) probability versus frequency language, and (3) conjunctions of two likely events versus conjunctions of likely and unlikely events. All problems included a three-option format verified to minimize misinterpretation of the base event. In both experiments, conjunction errors were reduced when likely events were conjoined. Conjunction errors were also reduced for estimations compared with choices, with this reduction greater for likely conjuncts, an interaction effect. Shifting conceptual focus from probabilities to frequencies did not affect conjunction error rates. Analyses of numerical estimates for a subset of the problems provided support for the use of three general models by participants for generating estimates. Strikingly, the order in which the two tasks were carried out did not affect the pattern of results, supporting the idea that the mode of responding strongly determines the mode of thinking about conjunctions and hence the occurrence of the conjunction fallacy. These findings were evaluated in terms of implications for rationality of human judgment and reasoning.
2012-01-01
Approximately 50% of publications in English peer reviewed journals are contributed by non-native speakers (NNS) of the language. Basic thought processes are considered to be universal yet there are differences in thought patterns and particularly in discourse management of writers with different linguistic and cultural backgrounds. The study highlights some areas of potential incompatibility in native and NNS processing of English scientific papers. Principles and conventions in generating academic discourse are considered in terms of frequently occurring failures of NNS to meet expectations of editors, reviewers, and readers. Major problem areas concern organization and flow of information, principles of cohesion and clarity, cultural constraints, especially those of politeness and negotiability of ideas, and the complicated area of English modality pragmatics. The aim of the paper is to sensitize NN authors of English academic reports to problem areas of discourse processing which are stumbling blocks, often affecting acceptance of manuscripts. The problems discussed are essential for acquiring pragmalinguistic and sociocultural competence in producing effective communication. PMID:23118596
Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale
Engelmann, Christian; Hukerikar, Saurabh
2017-09-01
Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage, and their performance and power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The resilience patterns and the design framework also enable exploration and evaluation of design alternatives and support optimization of the cost-benefit trade-offs among performance, protection coverage, and power consumption of resilience solutions. The overall goal of this work is to establish a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner despite frequent faults, errors, and failures of various types.
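As a purely illustrative aside, a catalog entry of the kind described above could be represented as a small record so that design alternatives can be filtered programmatically; the field names and the toy trade-off function below are assumptions drawn from the abstract's vocabulary (protection coverage, performance and power overheads), not the authors' actual catalog schema.

```python
# Illustrative sketch only: one way a resilience design-pattern catalog entry
# could be represented for automated exploration of design alternatives.
# Field names are assumptions based on the abstract, not the authors' schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResiliencePattern:
    name: str                      # e.g. "Checkpoint-Restart"
    problem: str                   # fault/error/failure class addressed
    solution: str                  # short description of the technique
    layers: List[str] = field(default_factory=list)  # system-stack layers
    coverage: float = 0.0          # fraction of fault classes protected (assumed metric)
    perf_overhead: float = 0.0     # relative performance cost
    power_overhead: float = 0.0    # relative power cost

def cheapest_covering(patterns, min_coverage):
    """Pick the lowest-overhead pattern meeting a coverage target (toy trade-off)."""
    ok = [p for p in patterns if p.coverage >= min_coverage]
    return min(ok, key=lambda p: p.perf_overhead + p.power_overhead) if ok else None

# Hypothetical entry and query, for illustration only
catalog = [ResiliencePattern("Checkpoint-Restart", "process failure",
                             "periodic global checkpoint",
                             ["application", "system software"],
                             coverage=0.7, perf_overhead=0.15, power_overhead=0.05)]
print(cheapest_covering(catalog, min_coverage=0.5).name)
```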
de Arruda, Henrique Ferraz; Comin, Cesar Henrique; Miazaki, Mauro; Viana, Matheus Palhares; Costa, Luciano da Fontoura
2015-04-30
A key point in developmental biology is to understand how gene expression influences the morphological and dynamical patterns that are observed in living beings. In this work we propose a methodology capable of addressing this problem that is based on estimating the mutual information and Pearson correlation between the intensity of gene expression and measurements of several morphological properties of the cells. A similar approach is applied in order to identify effects of gene expression on the system dynamics. Neuronal networks were artificially grown over a lattice by considering a reference model used to generate artificial neurons. The input parameters of the artificial neurons were determined according to two distinct patterns of gene expression and the dynamical response was assessed by considering the integrate-and-fire model. As far as single gene dependence is concerned, we found that the interaction between the gene expression and the network topology, as well as between the former and the dynamical response, is strongly affected by the gene expression pattern. In addition, we observed a high correlation between the gene expression and some topological measurements of the neuronal network for particular patterns of gene expression. To the best of our knowledge, there are no similar analyses to compare with. A proper understanding of gene expression influence requires jointly studying the morphology, topology, and dynamics of neurons. The proposed framework represents a first step towards predicting gene expression patterns from morphology and connectivity. Copyright © 2015. Published by Elsevier B.V.
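A minimal sketch of the statistical core of this approach, on synthetic data: estimate the Pearson correlation and the mutual information between a per-cell gene-expression intensity and one morphological measurement. The particular estimators used here (scipy's pearsonr and scikit-learn's k-nearest-neighbour mutual_info_regression) are assumptions; the paper does not prescribe these implementations.

```python
# Estimate Pearson correlation and mutual information between gene expression
# and a morphological property of each cell.  Synthetic data; the estimators
# are assumed choices, not prescribed by the paper.
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
expression = rng.uniform(0.0, 1.0, size=500)               # per-cell gene expression
morphology = expression ** 2 + 0.1 * rng.normal(size=500)  # e.g. neurite length

r, p_value = pearsonr(expression, morphology)
mi = mutual_info_regression(expression.reshape(-1, 1), morphology, random_state=0)[0]

print(f"Pearson r = {r:.2f} (p = {p_value:.1e}), mutual information = {mi:.2f} nats")
```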
Young, W.P.; Ostberg, C.O.; Keim, P.; Thorgaard, G.H.
2001-01-01
Interspecific hybridization represents a dynamic evolutionary phenomenon and major conservation problem in salmonid fishes. In this study we used amplified fragment length polymorphism (AFLP) and mitochondrial DNA (mtDNA) markers to describe the extent and characterize the pattern of hybridization and introgression between coastal rainbow trout (Oncorhynchus mykiss irideus) and coastal cutthroat trout (O. clarki clarki). Hybrid individuals were initially identified using principal coordinate analysis of 133 polymorphic AFLP markers. Subsequent analysis using 23 diagnostic AFLP markers revealed the presence of F1, rainbow trout backcross, cutthroat trout backcross and later-generation hybrids. mtDNA analysis demonstrated equal numbers of F1 hybrids with rainbow and cutthroat trout mtDNA, indicating reciprocal mating of the parental types. In contrast, rainbow and cutthroat trout backcross hybrids always exhibited the mtDNA of the recurrent parent, indicating a male hybrid mating with a pure female. This study illustrates the usefulness of the AFLP technique for generating large numbers of species-diagnostic markers. The pattern of hybridization raises many questions concerning the existence and action of reproductive isolating mechanisms between these two species. Our findings are consistent with the hypothesis that introgression between anadromous populations of coastal rainbow and coastal cutthroat trout is limited by an environment-dependent reduction in hybrid fitness.
Li, Cai; Lowe, Robert; Ziemke, Tom
2014-01-01
In this article, we propose an architecture of a bio-inspired controller that addresses the problem of learning different locomotion gaits for different robot morphologies. The modeling objective is split into two: baseline motion modeling and dynamics adaptation. Baseline motion modeling aims to achieve fundamental functions of a certain type of locomotion and dynamics adaptation provides a "reshaping" function for adapting the baseline motion to desired motion. Based on this assumption, a three-layer architecture is developed using central pattern generators (CPGs, a bio-inspired locomotor center for the baseline motion) and dynamic motor primitives (DMPs, a model with universal "reshaping" functions). In this article, we use this architecture with the actor-critic algorithms for finding a good "reshaping" function. In order to demonstrate the learning power of the actor-critic based architecture, we tested it on two experiments: (1) learning to crawl on a humanoid and, (2) learning to gallop on a puppy robot. Two types of actor-critic algorithms (policy search and policy gradient) are compared in order to evaluate the advantages and disadvantages of different actor-critic based learning algorithms for different morphologies. Finally, based on the analysis of the experimental results, a generic view/architecture for locomotion learning is discussed in the conclusion.
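For the baseline-motion layer only, a minimal CPG sketch is shown below: two phase-coupled oscillators settling into anti-phase and emitting rhythmic joint set-points. The coupling law and parameters are illustrative assumptions; the article's full architecture additionally layers DMPs and actor-critic learning on top of such a generator.

```python
# Minimal central-pattern-generator sketch: two Kuramoto-style phase
# oscillators coupled toward an anti-phase relationship, producing rhythmic
# joint set-points.  Parameters and coupling form are illustrative only.
import numpy as np

def cpg_rollout(steps=2000, dt=0.005, freq=1.0, coupling=2.0, phase_bias=np.pi):
    """Two oscillators driven toward anti-phase; returns two joint trajectories."""
    theta = np.array([0.0, 0.1])           # oscillator phases
    amp = np.array([0.4, 0.4])             # output amplitudes (rad)
    out = np.zeros((steps, 2))
    for t in range(steps):
        # phase dynamics with a desired phase difference of phase_bias
        dtheta0 = 2*np.pi*freq + coupling*np.sin(theta[1] - theta[0] - phase_bias)
        dtheta1 = 2*np.pi*freq + coupling*np.sin(theta[0] - theta[1] + phase_bias)
        theta += dt * np.array([dtheta0, dtheta1])
        out[t] = amp * np.cos(theta)        # joint set-points
    return out

trajectory = cpg_rollout()
print(trajectory[:3])
```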
Optical methods in fault dynamics
NASA Astrophysics Data System (ADS)
Uenishi, K.; Rossmanith, H. P.
2003-10-01
The Rayleigh pulse interaction with a pre-stressed, partially contacting interface between similar and dissimilar materials is investigated experimentally as well as numerically. This study is intended to obtain an improved understanding of the interface (fault) dynamics during the earthquake rupture process. Using dynamic photoelasticity in conjunction with high-speed cinematography, snapshots of time-dependent isochromatic fringe patterns associated with Rayleigh pulse-interface interaction are experimentally recorded. It is shown that interface slip (instability) can be triggered dynamically by a pulse which propagates along the interface at the Rayleigh wave speed. For the numerical investigation, the finite difference wave simulator SWIFD is used for solving the problem under different combinations of contacting materials. The effect of acoustic impedance ratio of the two contacting materials on the wave patterns is discussed. The results indicate that upon interface rupture, Mach (head) waves, which carry a relatively large amount of energy in a concentrated form, can be generated and propagated from the interface contact region (asperity) into the acoustically softer material. Such Mach waves can cause severe damage onto a particular region inside an adjacent acoustically softer area. This type of damage concentration might be a possible reason for the generation of the "damage belt" in Kobe, Japan, on the occasion of the 1995 Hyogo-ken Nanbu (Kobe) Earthquake.
Katsurada, Emiko; Tanimukai, Mitsue; Akazawa, Junko
2017-08-01
The present study investigates the relationships among children's history of maltreatment, attachment patterns, and behavior problems in Japanese institutionalized children. Twenty-nine children (12 boys and 17 girls) from three different institutions in the Kinki area (the western part of Japan) participated in this study. Their average age was 6.41 years (range 4 to 10). Thirteen of the children (44.8%) had a history of maltreatment before they were institutionalized. Children's attachment was assessed by the Attachment Doll Play Assessment (George & Solomon, 1990, 1996, 2000). The child's main caregiver answered the Child Behavior Checklist (CBCL; Achenbach, 1991) to identify children's behavior problems. Results indicated a significant relationship between maltreatment history and attachment pattern. The relationship between attachment pattern and behavior problems was also confirmed. Implications and limitations of this study are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
The New England travel market: changes in generational travel patterns
Rodney B. Warnick
1995-01-01
The purpose of this study was to examine and explore New England domestic travel market trends from 1979 through 1991 within the context of generations. The existing travel markets for New England are changing by age cohort and, specifically, within different generations. The changes in New England's generational travel patterns do not reflect national...
Generation of 3D templates of active sites of proteins with rigid prosthetic groups.
Nebel, Jean-Christophe
2006-05-15
With the increasing availability of protein structures, the generation of biologically meaningful 3D patterns from the simultaneous alignment of several protein structures is an exciting prospect: active sites could be better understood, and protein functions and protein 3D structures could be predicted more accurately. Although patterns can already be generated at the fold and topological levels, no system produces high-resolution 3D patterns including atom and cavity positions. To address this challenge, our research focuses on generating patterns from proteins with rigid prosthetic groups. Since these groups are key elements of protein active sites, the generated 3D patterns are expected to be biologically meaningful. In this paper, we present a new approach which allows the generation of 3D patterns from proteins with rigid prosthetic groups. Using 237 protein chains representing proteins containing porphyrin rings, our method was validated by comparing 3D templates generated from homologues with the 3D structure of the proteins they model. Atom positions were predicted reliably: 93% of them had an accuracy of 1.00 Å or less. Moreover, similar results were obtained regarding chemical group and cavity positions. Results also suggested our system could contribute to the validation of 3D protein models. Finally, a 3D template was generated for the active site of human cytochrome P450 CYP17, the 3D structure of which is unknown. Its analysis showed that it is biologically meaningful: our method detected the main patterns of the cytochrome P450 superfamily and the motifs linked to catalytic reactions. The 3D template also suggested the position of a residue which could be involved in a hydrogen bond with CYP17 substrates, as well as the shape and location of a cavity. Comparisons with independently generated 3D models supported these hypotheses. Alignment software (Nestor3D) is available at http://www.kingston.ac.uk/~ku33185/Nestor3D.html
Effects of traffic generation patterns on the robustness of complex networks
NASA Astrophysics Data System (ADS)
Wu, Jiajing; Zeng, Junwen; Chen, Zhenhao; Tse, Chi K.; Chen, Bokui
2018-02-01
Cascading failures in communication networks with heterogeneous node functions are studied in this paper. In such networks, the traffic dynamics are highly dependent on the traffic generation patterns which are in turn determined by the locations of the hosts. The data-packet traffic model is applied to Barabási-Albert scale-free networks to study the cascading failures in such networks and to explore the effects of traffic generation patterns on network robustness. It is found that placing the hosts at high-degree nodes in a network can make the network more robust against both intentional attacks and random failures. It is also shown that the traffic generation pattern plays an important role in network design.
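A hedged sketch of the setup, not the authors' exact traffic or cascading-failure model: build a Barabási-Albert graph, place hosts either at the highest-degree nodes or at random, and compare a betweenness-based load proxy on the hosts (the function name and the use of betweenness as a stand-in for packet load are assumptions).

```python
# Compare host placement strategies on a Barabási-Albert scale-free network.
# Betweenness centrality is used as a rough proxy for shortest-path packet
# load; the full paper simulates data-packet traffic and cascades instead.
import random
import networkx as nx

def host_load(n=500, m=3, n_hosts=25, strategy="high_degree", seed=1):
    G = nx.barabasi_albert_graph(n, m, seed=seed)
    if strategy == "high_degree":
        hosts = sorted(G.nodes, key=G.degree, reverse=True)[:n_hosts]
    else:
        hosts = random.Random(seed).sample(list(G.nodes), n_hosts)
    bc = nx.betweenness_centrality(G)          # shortest-path load proxy
    return sum(bc[h] for h in hosts) / n_hosts

print("high-degree hosts:", round(host_load(strategy="high_degree"), 4))
print("random hosts:     ", round(host_load(strategy="random"), 4))
```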
College Student's Health, Drinking and Smoking Patterns: What Has Changed in 20 Years?
ERIC Educational Resources Information Center
Hensel, Desiree; Todd, Katherine Leigh; Engs, Ruth C.
2014-01-01
Problem: Institutes of higher learning are increasingly trying to address the issue of problem drinking. The purpose of this study was to determine how patterns in alcohol use and smoking by college students, as well as their illness patterns, have changed over 20 years. Methods: A cross-sectional serial survey design was used for this descriptive…
ERIC Educational Resources Information Center
Flowerdew, Lynne
2003-01-01
Reports on research describing similarities and differences between expert and novice writing in the problem-solution pattern, a frequent rhetorical pattern of technical academic writing. A corpus of undergraduate student writing and one containing professional writing consisted of 80 and 60 recommendation reports, respectively, with each corpus…
On the Geometry of the Hamilton-Jacobi Equation and Generating Functions
NASA Astrophysics Data System (ADS)
Ferraro, Sebastián; de León, Manuel; Marrero, Juan Carlos; Martín de Diego, David; Vaquero, Miguel
2017-10-01
In this paper we develop a geometric version of the Hamilton-Jacobi equation in the Poisson setting. Specifically, we "geometrize" what is usually called a complete solution of the Hamilton-Jacobi equation. We use some well-known results about symplectic groupoids, in particular cotangent groupoids, as a keystone for the construction of our framework. Our methodology follows the ambitious program proposed by Weinstein (In Mechanics day (Waterloo, ON, 1992), volume 7 of fields institute communications, American Mathematical Society, Providence, 1996) in order to develop geometric formulations of the dynamical behavior of Lagrangian and Hamiltonian systems on Lie algebroids and Lie groupoids. This procedure allows us to take symmetries into account, and, as a by-product, we recover results from Channell and Scovel (Phys D 50(1):80-88, 1991), Ge (Indiana Univ. Math. J. 39(3):859-876, 1990), Ge and Marsden (Phys Lett A 133(3):134-139, 1988), but even in these situations our approach is new. A theory of generating functions for the Poisson structures considered here is also developed following the same pattern, solving a longstanding problem of the area: how to obtain a generating function for the identity transformation and the nearby Poisson automorphisms of Poisson manifolds. A direct application of our results gives the construction of a family of Poisson integrators, that is, integrators that conserve the underlying Poisson geometry. These integrators are implemented in the paper in benchmark problems. Some conclusions, current and future directions of research are shown at the end of the paper.
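For orientation, the classical symplectic case that this geometric construction generalizes can be written as follows; these are standard textbook relations, quoted here for context rather than taken from the paper.

```latex
% Classical time-dependent Hamilton-Jacobi equation for a generating
% function S(q, t):
\[
  \frac{\partial S}{\partial t} + H\!\left(q, \frac{\partial S}{\partial q}\right) = 0 .
\]
% Relations defined by a type-1 generating function S(q, Q, t) of a canonical
% transformation (q, p) -> (Q, P) with new Hamiltonian K:
\[
  p = \frac{\partial S}{\partial q}, \qquad
  P = -\frac{\partial S}{\partial Q}, \qquad
  K = H + \frac{\partial S}{\partial t}.
\]
```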
StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.
Li, Chenhui; Baciu, George; Han, Yu
2018-03-01
Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
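A rough sketch of the two density-oriented ideas, under simplifying assumptions: a Gaussian kernel density estimate stands in for the paper's super kernel density estimation, and a linear cross-dissolve stands in for its density-morphing algorithm; both substitutions are cruder than the published methods and are used here only to illustrate the pipeline.

```python
# (1) Aggregate a frame of streaming points into a density field with a
#     Gaussian KDE (stand-in for super kernel density estimation).
# (2) Generate intermediate frames by linear blending (crude placeholder for
#     the density-morphing algorithm).
import numpy as np
from scipy.stats import gaussian_kde

def density_frame(points_xy, grid=64):
    """points_xy: (2, N) array of streaming points -> (grid, grid) density."""
    kde = gaussian_kde(points_xy)
    xs = np.linspace(points_xy[0].min(), points_xy[0].max(), grid)
    ys = np.linspace(points_xy[1].min(), points_xy[1].max(), grid)
    X, Y = np.meshgrid(xs, ys)
    return kde(np.vstack([X.ravel(), Y.ravel()])).reshape(grid, grid)

def morph(frame_a, frame_b, n_intermediate=5):
    """Linear cross-dissolve between two density frames of equal shape."""
    ts = np.linspace(0.0, 1.0, n_intermediate + 2)[1:-1]
    return [(1 - t) * frame_a + t * frame_b for t in ts]

rng = np.random.default_rng(2)
f1 = density_frame(rng.normal(0.0, 1.0, size=(2, 400)))
f2 = density_frame(rng.normal(0.5, 1.2, size=(2, 400)))
print(len(morph(f1, f2)), "intermediate frames of shape", f1.shape)
```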
The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns
NASA Astrophysics Data System (ADS)
Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo
Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.
Laser etching of polymer masked leadframes
NASA Astrophysics Data System (ADS)
Ho, C. K.; Man, H. C.; Yue, T. M.; Yuen, C. W.
1997-02-01
A typical electroplating production line for the deposition of silver patterns on copper leadframes in the semiconductor industry involves twenty to twenty-five steps of cleaning, pickling, plating, stripping, etc. This complex production process occupies a large amount of floor space and also has a number of problems, such as difficulty in the production of rubber masks and in alignment, generation of toxic fumes, high cost of water consumption and, sometimes, uncertainty about the cleanliness of the surfaces to be plated. A novel laser patterning process is proposed in this paper which can replace many steps in the existing electroplating line. The proposed process involves the application of high-speed laser etching techniques to leadframes protected with a polymer coating. The desired pattern for silver electroplating is produced by laser ablation of the polymer coating. The excimer laser was found to be most effective for this process as it can expose a pattern of clean copper substrate which can be silver plated successfully. Previous work on Nd:YAG laser ablation showed that 1.06 μm radiation was not suitable for this etching process because a thin, transparent organic film remained on the laser-etched region. The effect of excimer pulse frequency and energy density upon the removal rate of the polymer coating was studied.
Human spinal locomotor control is based on flexibly organized burst generators.
Danner, Simon M; Hofstoetter, Ursula S; Freundl, Brigitta; Binder, Heinrich; Mayr, Winfried; Rattay, Frank; Minassian, Karen
2015-03-01
Constant drive provided to the human lumbar spinal cord by epidural electrical stimulation can cause local neural circuits to generate rhythmic motor outputs to lower limb muscles in people paralysed by spinal cord injury. Epidural spinal cord stimulation thus allows the study of spinal rhythm and pattern generating circuits without their configuration by volitional motor tasks or task-specific peripheral feedback. To reveal spinal locomotor control principles, we studied the repertoire of rhythmic patterns that can be generated by the functionally isolated human lumbar spinal cord, detected as electromyographic activity from the legs, and investigated basic temporal components shared across these patterns. Ten subjects with chronic, motor-complete spinal cord injury were studied. Surface electromyographic responses to lumbar spinal cord stimulation were collected from quadriceps, hamstrings, tibialis anterior, and triceps surae in the supine position. From these data, 10-s segments of rhythmic activity present in the four muscle groups of one limb were extracted. Such samples were found in seven subjects. Physiologically adequate cycle durations and relative extension- and flexion-phase durations similar to those needed for locomotion were generated. The multi-muscle activation patterns exhibited a variety of coactivation, mixed-synergy and locomotor-like configurations. Statistical decomposition of the electromyographic data across subjects, muscles and samples of rhythmic patterns identified three common temporal components, i.e. basic or shared activation patterns. Two of these basic patterns controlled muscles to contract either synchronously or alternatingly during extension- and flexion-like phases. The third basic pattern contributed to the observed muscle activities independently from these extensor- and flexor-related basic patterns. Each bifunctional muscle group was able to express both extensor- and flexor-patterns, with variable ratios across the samples of rhythmic patterns. The basic activation patterns can be interpreted as central drives implemented by spinal burst generators that impose specific spatiotemporally organized activation on the lumbosacral motor neuron pools. Our data thus imply that the human lumbar spinal cord circuits can form burst-generating elements that flexibly combine to obtain a wide range of locomotor outputs from a constant, repetitive input. It may be possible to use this flexibility to incorporate specific adaptations to gait and stance to improve locomotor control, even after severe central nervous system damage. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
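The abstract reports a statistical decomposition of the electromyographic data into three shared temporal components but does not name the algorithm here, so the sketch below uses non-negative matrix factorization, a common choice for rectified EMG envelopes, on synthetic data purely for illustration.

```python
# Extract a small number of basic temporal activation patterns from synthetic
# multi-muscle EMG envelopes with NMF.  The specific decomposition used in the
# paper is not stated in the abstract; NMF is an assumed, illustrative choice.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 1000)                           # 10 s segment
ext = np.clip(np.sin(2*np.pi*1.0*t), 0, None)          # extensor-like burst
flx = np.clip(np.sin(2*np.pi*1.0*t + np.pi), 0, None)  # flexor-like burst
W_true = rng.uniform(0, 1, size=(4, 2))                # 4 muscles x 2 patterns
emg = W_true @ np.vstack([ext, flx]) + 0.05*rng.uniform(size=(4, t.size))

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
weights = model.fit_transform(emg)     # per-muscle contribution of each pattern
patterns = model.components_           # basic temporal activation patterns
print(weights.shape, patterns.shape)   # (4, 2) (2, 1000)
```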
Investigating the role of future thinking in social problem solving.
Noreen, Saima; Whyte, Katherine E; Dritschel, Barbara
2015-03-01
There is well-established evidence that both rumination and depressed mood negatively impact the ability to solve social problems. A preliminary stage of the social problem solving process may be the process of catapulting oneself forward in time to think about the consequences of a problem before attempting to solve it. The aim of the present study was to examine how thinking about the consequences of a social problem being resolved or unresolved prior to solving it influences the solution of the problem as a function of levels of rumination and dysphoric mood. Eighty-six participants initially completed the Beck Depression Inventory-II (BDI-II) and the Ruminative Response Scale (RRS). They were then presented with six social problems and generated consequences for half of the problems being resolved and half of the problems remaining unresolved. Participants then solved some of the problems and, following a delay, were asked to recall all of the consequences previously generated. Participants reporting higher levels of depressed mood and rumination were less effective at generating problem solutions. Specifically, those reporting higher levels of rumination produced less effective solutions for social problems for which they had previously generated unresolved rather than resolved consequences. We also found that individuals higher in rumination, irrespective of depressed mood, recalled more of the unresolved consequences in a subsequent memory test. As participants did not solve problems for scenarios where no consequences were generated, no baseline measure of problem solving was obtained. Our results suggest that thinking about the consequences of a problem remaining unresolved may impair the generation of effective solutions in individuals with higher levels of rumination. Copyright © 2014 Elsevier Ltd. All rights reserved.
Initial benchmarking of a new electron-beam raster pattern generator for 130-100 nm maskmaking
NASA Astrophysics Data System (ADS)
Sauer, Charles A.; Abboud, Frank E.; Babin, Sergey V.; Chakarian, Varoujan; Ghanbari, Abe; Innes, Robert; Trost, David; Raymond, Frederick, III
2000-07-01
The decision by the Semiconductor Industry Association (SIA) to accelerate the continuing evolution to smaller linewidths is consistent with the commitment by Etec Systems, Inc. to rapidly develop new technologies for pattern generation systems with improved resolution, critical dimension (CD) uniformity, positional accuracy, and throughput. Current pattern generation designs are inadequate to meet the more advanced requirements for masks, particularly at or below the 100 nm node. Major changes to all pattern generation tools will be essential to meet future market requirements. An electron-beam (e-beam) system that is designed to meet the challenges for 130-100 nm device generation with extendibility to the 70-nm range will be discussed. This system has an architecture that includes a graybeam writing strategy, a new state system, and improved thermal management. Detailed changes include a pulse width modulated blanking system, per-pixel deflection, retrograde scanning multipass writing, and a column with a 50 kV accelerating voltage that supports a dose of up to 45 μC/cm² with minimal amounts of resist heating. This paper examines current issues, our approach to meeting International Technology Roadmap for Semiconductors (ITRS) requirements, and some preliminary results from a new pattern generator.
Geographic migration of black and white families over four generations.
Sharkey, Patrick
2015-02-01
This article analyzes patterns of geographic migration of black and white American families over four consecutive generations. The analysis is based on a unique set of questions in the Panel Study of Income Dynamics (PSID) asking respondents about the counties and states in which their parents and grandparents were raised. Using this information along with the extensive geographic information available in the PSID survey, the article tracks the geographic locations of four generations of family members and considers the ways in which families and places are linked together over the course of a family's history. The patterns documented in the article are consistent with much of the demographic literature on the Great Migration of black Americans out of the South, but they reveal new insights into patterns of black migration after the Great Migration. In the most recent generation, black Americans have remained in place to a degree that is unique relative to the previous generation and relative to whites of the same generation. This new geographic immobility is the most pronounced change in black Americans' migration patterns after the Great Migration, and it is a pattern that has implications for the demography of black migration as well as the literature on racial inequality.
Optimization of RET flow using test layout
NASA Astrophysics Data System (ADS)
Zhang, Yunqiang; Sethi, Satyendra; Lucas, Kevin
2008-11-01
At advanced technology nodes with extremely low k1 lithography, it is very hard to achieve image fidelity requirements and process window for some layout configurations. Quite often these layouts are within simple design rule constraints for a given technology node. It is important to have these layouts included during early RET flow development. Most RET development is based on layouts shrunk from the previous technology node, which is often not good enough. A better methodology for creating test layouts is required for optical proximity correction (OPC) recipe and assist feature development. In this paper we demonstrate the application of programmable test layouts in RET development. Layout pattern libraries are developed and embedded in a layout tool (ICWB). Assessment gauges are generated together with the patterns for quick correction accuracy assessment. Several groups of test pattern libraries have been developed based on learning from product patterns and a layout DOE approach. The interaction between layout patterns and the OPC recipe has been studied. Correction of a contact layer is quite challenging because of poor convergence and a low process window. We developed a test pattern library with many different contact configurations. Different OPC schemes are studied on these test layouts. The worst process window patterns are pinpointed for a given illumination condition. Assist features (AF) are frequently placed according to pre-determined rules to improve the lithography process window. These rules are usually derived from lithographic models and experiments. Direct validation of AF rules is required during the development phase. We use the test layout approach to determine rules that eliminate AF printability problems.
Automatic two- and three-dimensional mesh generation based on fuzzy knowledge processing
NASA Astrophysics Data System (ADS)
Yagawa, G.; Yoshimura, S.; Soneda, N.; Nakao, K.
1992-09-01
This paper describes the development of a novel automatic FEM mesh generation algorithm based on the fuzzy knowledge processing technique. A number of local nodal patterns are stored in a nodal pattern database of the mesh generation system. These nodal patterns are determined a priori based on certain theories or the past experience of FEM analysis experts. For example, such human experts can determine nodal patterns suitable for stress concentration analyses of cracks, corners, holes and so on. Each nodal pattern possesses a membership function and a procedure for node placement according to this function. In the case of the nodal patterns for stress concentration regions, the membership function utilized in the fuzzy knowledge processing has two meanings, i.e. the “closeness” of a nodal location to each stress concentration field as well as the “nodal density”. This is attributed to the fact that a denser nodal pattern is required near a stress concentration field. What a user has to do in a practical mesh generation process is to choose several local nodal patterns properly and to designate the maximum nodal density of each pattern. After these simple operations by the user, the system places the chosen nodal patterns automatically in the analysis domain and on its boundary, and connects them smoothly by the fuzzy knowledge processing technique. Triangular or tetrahedral elements are then generated by means of the advancing front method. The key issue of the present algorithm is the easy control of a complex two- or three-dimensional nodal density distribution by means of the fuzzy knowledge processing technique. To demonstrate the fundamental performance of the present algorithm, a prototype system was constructed with an object-oriented language, Smalltalk-80, on a 32-bit microcomputer, the Macintosh II. Mesh generation for several two- and three-dimensional domains with cracks, holes and junctions is presented as examples.
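A minimal sketch of the membership-function idea on synthetic points: nodal density rises smoothly toward a stress-concentration location (such as a crack tip), and several stored patterns are combined by taking the maximum membership at each point. The Gaussian form and the parameter values are illustrative assumptions, not the contents of the system's actual nodal pattern database.

```python
# Fuzzy-style nodal density: each stored nodal pattern contributes a membership
# value that decays with distance from its stress-concentration centre, and
# patterns are combined with a maximum.  Forms and parameters are assumed.
import numpy as np

def membership(points, center, influence_radius, peak_density):
    """Fuzzy 'closeness'/density value of each point for one stored nodal pattern."""
    d = np.linalg.norm(points - np.asarray(center), axis=1)
    return peak_density * np.exp(-(d / influence_radius) ** 2)

def combined_density(points, patterns, background=1.0):
    """Combine several nodal patterns; keep at least a background density."""
    values = [membership(points, **p) for p in patterns]
    return np.maximum(background, np.max(values, axis=0))

pts = np.random.default_rng(4).uniform(0, 10, size=(1000, 2))
crack_tip = {"center": (2.0, 5.0), "influence_radius": 1.5, "peak_density": 20.0}
hole_edge = {"center": (7.0, 3.0), "influence_radius": 2.0, "peak_density": 10.0}
rho = combined_density(pts, [crack_tip, hole_edge])
print(rho.min(), rho.max())
```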
Kauffman, S A; Goodwin, B C
1990-06-07
We review the evidence presented in Part I showing that transcripts and protein products of maternal, gap, pair-rule, and segment polarity genes exhibit increasingly complex, multipeaked longitudinal waveforms in the early Drosophila embryo. The central problem we address in Part II is the use the embryo makes of these wave forms to specify longitudinal pattern. Based on the fact that mutants of many of these genes generate deletions and mirror symmetrical duplications of pattern elements on length scales ranging from about half the egg to within segments, we propose that position is specified by measuring a "phase angle" by use of the ratios of two or more variables. Pictorially, such a phase angle can be thought of as a colour on a colour wheel. Any such model contains a phaseless singularity where all or many phases, or colours, come together. We suppose as well that positional values sufficiently close to the singularity are meaningless, hence a "dead zone". Duplications and deletions are accounted for by deformation of the cycle of morphogen values occurring along the antero-posterior axis. If the cycle of values surrounds the singularity and lies outside the dead zone, pattern is normal. If the curve transects the dead zone, pattern elements are deleted. If the curve lies entirely on one side of the singularity, pattern elements are deleted and others are duplicated with mirror symmetry. The existence of different wavelength transcript patterns in maternal, gap, pair-rule, and segment polarity genes and the roles of those same genes in generating deletions and mirror symmetrical duplications on a variety of length scales lead us to propose that position is measured simultaneously on at least four colour wheels, which cycle different numbers of times along the anterior-posterior axis. These yield progressively finer grained positional information. Normal pattern specification requires a unique angle, outside of the dead zone, from each of the four wheels. Deformations of the cycle of gene product concentrations yield the deletions and mirror symmetric duplications observed in the mutants discussed. The alternative familiar hypothesis that longitudinal position is specified in an "on" "off" combinatorial code does not readily account for the duplication deletion phenomena.
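As a toy illustration of reading position as a phase angle from two variables (not the authors' model), the sketch below treats the angle of the point (v1, v2) around an assumed singular point as the colour on a colour wheel and returns no positional value inside a dead zone around the singularity.

```python
# Toy "colour wheel" readout: positional value = phase angle of (v1, v2)
# around a singular point; points within the dead-zone radius carry no
# positional information.  Centre and radius are illustrative assumptions.
import math

def positional_phase(v1, v2, centre=(0.5, 0.5), dead_zone=0.05):
    """Return a phase in [0, 2*pi) or None if the point lies in the dead zone."""
    dx, dy = v1 - centre[0], v2 - centre[1]
    if math.hypot(dx, dy) < dead_zone:
        return None                       # positional value is meaningless here
    return math.atan2(dy, dx) % (2 * math.pi)

# Two cells with different morphogen ratios read out different phase angles.
print(positional_phase(0.9, 0.5))   # ~0.0 (illustrative positional value)
print(positional_phase(0.5, 0.9))   # ~pi/2
print(positional_phase(0.51, 0.5))  # None: inside the dead zone
```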
ROLE OF NMDA, NICOTINIC, AND GABA RECEPTORS IN THE STEADY STATE VISUAL EVOKED POTENTIAL IN RATS.
This manuscript characterizes the receptor pathways involved in pattern-evoked potential generation in rats. NMDA and nicotinic acetylcholine receptors appear to be involved in the generation of the steady-state pattern-evoked response in vivo. The pattern evok...
Breathing simulator of workers for respirator performance test.
Yuasa, Hisashi; Kumita, Mikio; Honda, Takeshi; Kimura, Kazushi; Nozaki, Kosuke; Emi, Hitoshi; Otani, Yoshio
2015-01-01
Breathing machines are widely used to evaluate respirator performance, but they are capable of generating only limited air flow patterns, such as sine, triangular and square waves. In order to evaluate respirator performance in practical use, it is desirable to test the respirator using the actual breathing patterns of wearers. However, it has been a difficult task for a breathing machine to generate such complicated flow patterns, since human respiratory volume changes depending on activity and workload. In this study, we have developed an electromechanical breathing simulator and a respiration sampling device to record and reproduce workers' respiration. It is capable of generating various flow patterns by inputting breathing pattern signals recorded by a computer, as well as fixed air flow patterns. The device is equipped with a self-control program to compensate for the difference between inhalation and exhalation volumes and for measurement errors in the breathing flow rate. The system was successfully applied to record the breathing patterns of workers engaged in welding and to reproduce those breathing patterns.
NASA Astrophysics Data System (ADS)
Mitra, Joydeep; Torres, Andres; Ma, Yuansheng; Pan, David Z.
2018-01-01
Directed self-assembly (DSA) has emerged as one of the most compelling next-generation patterning techniques for sub-7 nm via or contact layers. A key issue in enabling DSA as a mainstream patterning technique is the generation of grapho-epitaxy-based guiding pattern (GP) shapes to assemble the contact patterns on target with high fidelity and resolution. Current GP generation is mostly empirical and limited to a very small number of via configurations. We propose the first model-based GP synthesis algorithm and methodology for on-target and robust DSA on general via pattern configurations. The final post-optical-proximity-correction (post-OPC) printed GPs derived from our synthesized GPs are resilient to process variations and continue to maintain the same DSA fidelity in terms of placement error and target shape.
Computing sparse derivatives and consecutive zeros problem
NASA Astrophysics Data System (ADS)
Chandra, B. V. Ravi; Hossain, Shahadat
2013-02-01
We describe a substitution-based sparse Jacobian matrix determination method using algorithmic differentiation. Utilizing the a priori known sparsity pattern, a compression scheme is determined using graph coloring. The "compressed pattern" of the Jacobian matrix is then reordered into a form suitable for computation by substitution. We show that the column reordering of the compressed pattern matrix (so as to align the zero entries into consecutive locations in each row) can be viewed as a variant of the traveling salesman problem. Preliminary computational results show that, on the test problems, the performance of nearest-neighbor-type heuristic algorithms is highly encouraging.
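The sketch below illustrates the nearest-neighbour idea on a 0/1 compressed sparsity pattern: starting from an arbitrary column, repeatedly append the unvisited column whose pattern differs least in Hamming distance, so that similar columns, and hence runs of zeros, tend to become adjacent. This is a generic heuristic in the spirit of the paper, not its exact algorithm or distance measure.

```python
# Nearest-neighbour column reordering of a 0/1 compressed sparsity pattern.
# Hamming distance between column patterns is an assumed, illustrative metric.
import numpy as np

def nearest_neighbour_order(pattern):
    """pattern: (rows, cols) 0/1 array of the compressed Jacobian pattern."""
    cols = pattern.shape[1]
    unvisited = set(range(cols))
    order = [0]
    unvisited.remove(0)
    while unvisited:
        last = pattern[:, order[-1]]
        nxt = min(unvisited, key=lambda j: int(np.sum(last != pattern[:, j])))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

P = np.array([[1, 0, 1, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 1, 0, 1]])
print(nearest_neighbour_order(P))   # e.g. [0, 2, 1, 3]: similar columns adjacent
```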
Controlling flows in microchannels with patterned surface charge and topography.
Stroock, Abraham D; Whitesides, George M
2003-08-01
This Account reviews two procedures for controlling the flow of fluids in microchannels. The first procedure involves patterning the density of charge on the inner surfaces of a channel. These patterns generate recirculating electroosmotic flows in the presence of a steady electric field. The second procedure involves patterning topography on an inner surface of a channel. These patterns generate recirculation in the cross-section of steady, pressure-driven flows. This Account summarizes applications of these flows to mixing and to controlling dispersion (band broadening).
Study on Buckling of Stiff Thin Films on Soft Substrates as Functional Materials
NASA Astrophysics Data System (ADS)
Ma, Teng
In engineering, buckling is a mechanical instability of walls or columns under compression and is usually a problem that engineers try to prevent. In everyday life, buckles (wrinkles) on different substrates are ubiquitous, from human skin to a rotten apple. Buckles with macroscopic wavelengths may not seem technologically useful; over the past decade or so, however, thanks to the widespread availability of soft polymers and silicone materials, micro-buckles with wavelengths in the submicron to micron scale have received increasing attention because they are useful for generating well-ordered periodic microstructures spontaneously, without conventional lithographic techniques. This thesis investigates the buckling behavior of thin stiff films on soft polymeric substrates and explores a variety of applications, ranging from optical gratings, optical masks and energy harvesting to energy storage. A laser scanning technique is proposed to detect micro-strain induced by thermomechanical loads, and a periodic buckling microstructure, spontaneously generated from a metallic thin film on a polymer substrate, is employed as a diffraction grating with broad wavelength tunability. A mechanical strategy is also presented for quantitatively buckling nanoribbons of piezoelectric material on polymer substrates, involving the combined use of lithographically patterned surface adhesion sites and a transfer printing technique. The precisely engineered buckling configurations provide a route to energy harvesters with extremely high levels of stretchability. This stiff-thin-film/polymer hybrid structure is further employed in the electrochemical field to circumvent the electrochemically driven stress issue in silicon-anode-based lithium-ion batteries. It is shown that the initially flat silicon nanoribbon anode on a polymer substrate tends to buckle to mitigate the lithiation-induced stress and thus avoid pulverization of the silicon anode. Spontaneously generated submicron buckles of the film/polymer system are also used as an optical mask to produce submicron periodic patterns with a large filling ratio, in contrast to the ~100 nm edge patterns generated by conventional near-field soft contact photolithography. This thesis aims to deepen understanding of the buckling behavior of thin films on compliant substrates and, in turn, to harness the fundamental properties of such instability for diverse applications.
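For reference, the widely used small-deformation results for a stiff film of thickness h_f on a compliant substrate give the critical strain, wrinkle wavelength and amplitude quoted below; these standard formulas are provided for context and are not derived in the abstract above.

```latex
% Standard wrinkling results for a stiff film (thickness h_f, plane-strain
% modulus \bar{E}_f = E_f/(1-\nu_f^2)) on a compliant substrate (\bar{E}_s):
\[
  \varepsilon_c = \frac{1}{4}\left(\frac{3\bar{E}_s}{\bar{E}_f}\right)^{2/3},
  \qquad
  \lambda = 2\pi h_f \left(\frac{\bar{E}_f}{3\bar{E}_s}\right)^{1/3},
  \qquad
  A = h_f \sqrt{\frac{\varepsilon}{\varepsilon_c} - 1},
\]
% where \varepsilon is the applied compressive strain, \lambda the wrinkle
% wavelength and A the wrinkle amplitude.
```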
Planification de la maintenance d'un parc de turbines-alternateurs par programmation mathematique
NASA Astrophysics Data System (ADS)
Aoudjit, Hakim
A growing number of Hydro-Quebec's hydro generators are at the end of their useful life, and maintenance managers fear facing a number of overhauls exceeding what can be handled. Maintenance crews and budgets are limited, and these withdrawals may take up to a full year and mobilize significant resources, in addition to the loss of electricity production. Moreover, increased export sales forecasts and severe production patterns are expected to speed up wear, which can lead to halting many units at the same time. Currently, expert judgment is at the heart of withdrawal decisions, which rely primarily on periodic inspections and in-situ measurements; the results are sent to the maintenance planning team, who coordinate all the withdrawal decisions. The degradation phenomena at play are random in nature, and the ability to predict wear using only inspections is limited to the short term at best. Long-term planning of major overhauls is sought by managers to justify and rationalize budgets and resources. The maintenance managers are able to provide a huge amount of data, including the hourly production of each unit for several years, the repair history of each part of a unit, and the major withdrawals since the 1950s. In this research, we tackle the problem of long-term maintenance planning for a fleet of 90 hydro generators at Hydro-Quebec over a 50-year planning horizon. We lay out a scientific and rational framework to support withdrawal decisions, using part of the available data and maintenance history while fulfilling a set of technical and economic constraints. We propose a planning approach based on a constrained optimization framework. We begin by decomposing and sorting hydro generator components to highlight the most influential parts. A failure rate model is developed to take into account the technical characteristics and unit utilization. Replacement and repair policies are then evaluated for each of the components, and strategies are derived for the whole unit. Traditional univariate policies, such as the age replacement policy and the minimal repair policy, are calculated. These policies are extended to build an alternative bivariate maintenance policy as well as a repair strategy in which the state of a component after a repair is rejuvenated by a constant coefficient. These templates form the basis for the calculation of the objective function of the scheduling problem. On the one hand, the problem is treated as a nonlinear program whose objective is to minimize the average total maintenance cost per unit of time over an infinite horizon for the fleet, subject to technical and economic constraints. A formulation is also proposed for the case of a finite time horizon. In the event of electricity production variation, and given that the usage profile is known, the influence of production scenarios is reflected on the units' components through their failure rate. In this context, prognoses on possible resource problems are made by studying the characteristics of the generated plans. On the other hand, withdrawals are then subjected to two decision criteria: in addition to minimizing the average total maintenance cost per unit of time over an infinite time horizon, the best achievable reliability of the remaining turbo generators is sought. This problem is treated as a biobjective nonlinear optimization problem.
Finally, a series of problems covering multiple contexts is solved for planning the renovation of the 90 turbo-generator units, considering 3 major components in each unit and 2 types of maintenance policies for each component.
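The abstract cites the classical age replacement policy as one of the univariate building blocks. The sketch below shows how the long-run cost rate of that policy is typically computed and minimized; the Weibull lifetime distribution, the parameter values, and the cost ratio are illustrative assumptions, not figures from the thesis.

```python
import numpy as np
from scipy import integrate
from scipy.optimize import minimize_scalar

def age_replacement_cost_rate(T, beta, eta, c_preventive, c_failure):
    """Long-run expected cost per unit time of the classical age replacement
    policy: replace preventively at age T, or correctively at failure,
    whichever comes first.  Weibull(beta, eta) lifetimes are assumed purely
    for illustration."""
    reliability = lambda t: np.exp(-(t / eta) ** beta)
    expected_cycle_length, _ = integrate.quad(reliability, 0.0, T)
    expected_cycle_cost = (c_preventive * reliability(T)
                           + c_failure * (1.0 - reliability(T)))
    return expected_cycle_cost / expected_cycle_length

if __name__ == "__main__":
    # Illustrative numbers only: wear-out behaviour (beta > 1) and a corrective
    # replacement five times as expensive as a planned one.
    beta, eta, c_p, c_f = 2.5, 30.0, 1.0, 5.0
    res = minimize_scalar(age_replacement_cost_rate, bounds=(1.0, 100.0),
                          method="bounded", args=(beta, eta, c_p, c_f))
    print(f"optimal replacement age ~ {res.x:.1f} time units, "
          f"cost rate ~ {res.fun:.4f} per time unit")
```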
Sverdlov, Serge; Thompson, Elizabeth A.
2013-01-01
In classical quantitative genetics, the correlation between the phenotypes of individuals with unknown genotypes and a known pedigree relationship is expressed in terms of probabilities of IBD states. In existing approaches to the inverse problem where genotypes are observed but pedigree relationships are not, dependence between phenotypes is either modeled as Bayesian uncertainty or mapped to an IBD model via inferred relatedness parameters. Neither approach yields a relationship between genotypic similarity and phenotypic similarity with a probabilistic interpretation corresponding to a generative model. We introduce a generative model for diploid allele effect based on the classic infinite allele mutation process. This approach motivates the concept of IBF (Identity by Function). The phenotypic covariance between two individuals given their diploid genotypes is expressed in terms of functional identity states. The IBF parameters define a genetic architecture for a trait without reference to specific alleles or population. Given full genome sequences, we treat a gene-scale functional region, rather than a SNP, as a QTL, modeling patterns of dominance for multiple alleles. Applications demonstrated by simulation include phenotype and effect prediction and association, and estimation of heritability and classical variance components. A simulation case study of the Missing Heritability problem illustrates a decomposition of heritability under the IBF framework into Explained and Unexplained components. PMID:23851163
Evolutionary theory and teleology.
O'Grady, R T
1984-04-21
The order within and among living systems can be explained rationally by postulating a process of descent with modification, effected by factors which are extrinsic or intrinsic to the organisms. Because at the time Darwin proposed his theory of evolution there was no concept of intrinsic factors which could evolve, he postulated a process of extrinsic effects--natural selection. Biological order was thus seen as an imposed, rather than an emergent, property. Evolutionary change was seen as being determined by the functional efficiency (adaptedness) of the organism in its environment, rather than by spontaneous changes in intrinsically generated organizing factors. The initial incompleteness of Darwin's explanatory model, and the axiomatization of its postulates in neo-Darwinism, have resulted in a theory of functionalism, rather than structuralism. As such, it introduces an unnecessary teleology which confounds evolutionary studies and reduces the usefulness of the theory. This problem cannot be detected from within the neo-Darwinian paradigm because the different levels of end-directed activity--teleomatic, teleonomic, and teleological--are not recognized. They are, in fact, considered to influence one another. The theory of nonequilibrium evolution avoids these problems by returning to the basic principles of biological order and developing a structuralist explanation of intrinsically generated change. Extrinsic factors may affect the resultant evolutionary pattern, but they are neither necessary nor sufficient for evolution to occur.
NASA Astrophysics Data System (ADS)
Obulesu, O.; Rama Mohan Reddy, A., Dr; Mahendra, M.
2017-08-01
Detecting regular and efficient cyclic models is a demanding activity for data analysts due to the unstructured, dynamic and enormous raw information produced from the web. Many existing approaches generate large numbers of candidate patterns in the presence of huge and complex databases. In this work, two novel algorithms are proposed and a comparative examination is performed, considering scalability and performance parameters. The first algorithm, EFPMA (Extended Regular Model Detection Algorithm), finds frequent sequential patterns from spatiotemporal datasets, and the second, ETMA (Enhanced Tree-based Mining Algorithm), detects effective cyclic models with a symbolic database representation. EFPMA grows patterns from both ends (prefixes and suffixes) of detected patterns, which results in faster pattern growth because fewer levels of database projection are required compared to existing approaches such as PrefixSpan and SPADE. ETMA uses distinct notions to store and manage transaction data horizontally, such as segments, sequences and individual symbols. ETMA exploits a partition-and-conquer method to find maximal patterns using symbolic notations. Using this algorithm, cyclic models can be mined in full-series sequential patterns, including subsection series. ETMA reduces memory consumption and makes use of efficient symbolic operations. Furthermore, ETMA records time-series instances dynamically, in terms of character, series and section approaches, respectively. Assessing the extent of the patterns and proving the efficiency of the reduction and retrieval techniques on synthetic and real datasets remains an open and challenging mining problem. These techniques are useful in data streams, traffic risk analysis, medical diagnosis, DNA sequence mining and earthquake prediction applications. Extensive experimental outcomes illustrate that the algorithms outperform the ECLAT, STNR and MAFIA approaches in efficiency and scalability.
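As a point of reference for the pattern-growth idea that EFPMA extends, here is a minimal PrefixSpan-style sketch that grows frequent sequential patterns by prefix projection. It only illustrates the baseline approach the abstract compares against; EFPMA's bidirectional growth and ETMA's symbolic storage are not reproduced, and the toy database is invented.

```python
from collections import defaultdict

def prefixspan(sequences, min_support):
    """Minimal single-item-per-event PrefixSpan-style pattern growth, shown
    only to illustrate prefix projection; not the EFPMA/ETMA algorithms."""
    results = []

    def project(db, prefix):
        # Count, per projected sequence, the items occurring after the prefix.
        counts = defaultdict(int)
        for seq in db:
            for item in set(seq):
                counts[item] += 1
        for item, support in sorted(counts.items()):
            if support < min_support:
                continue
            new_prefix = prefix + [item]
            results.append((new_prefix, support))
            # Projected database: suffix after the first occurrence of item.
            projected = []
            for seq in db:
                if item in seq:
                    suffix = seq[seq.index(item) + 1:]
                    if suffix:
                        projected.append(suffix)
            if projected:
                project(projected, new_prefix)

    project(sequences, [])
    return results

if __name__ == "__main__":
    db = [list("abcd"), list("acbd"), list("abd"), list("bcd")]   # toy data
    for pattern, support in prefixspan(db, min_support=3):
        print("".join(pattern), support)
```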
USDA-ARS?s Scientific Manuscript database
CLIGEN (CLImate GENerator) is a widely used stochastic weather generator to simulate continuous daily precipitation and storm pattern information for hydrological and soil erosion models. Although CLIGEN has been tested in several regions in the world, thorough assessment before applying it to Chi...
Central Pattern Generation and the Motor Infrastructure for Suck, Respiration, and Speech
ERIC Educational Resources Information Center
Barlow, Steven M.; Estep, Meredith
2006-01-01
The objective of the current report is to review experimental findings on centrally patterned movements and sensory and descending modulation of central pattern generators (CPGs) in a variety of animal and human models. Special emphasis is directed toward speech production muscle systems, including the chest wall and orofacial complex during…
Sex-biased phoretic mite load on two seaweed flies: Coelopa frigida and Coelopa pilipes.
Gilburn, Andre S; Stewart, Katie M; Edward, Dominic A
2009-12-01
Two hypotheses explain male-biased parasitism. Physiological costs of male sexually selected characteristics can reduce immunocompetence. Alternatively, ecological differences could generate male-biased parasitism. One method of comparing the importance of the two theories is to investigate patterns of phoresy, which are likely to be generated only by ecological rather than immunological differences between the sexes. Here we studied the pattern of phoresy of the mite Thinoseius fucicola on two species of seaweed fly hosts, Coelopa frigida and Coelopa pilipes. We found a highly male-biased pattern of phoresy of T. fucicola on both species. These are the first reported instances of sex-biased phoresy in a solely phoretic parasite. We also show the first two cases of size-biased phoresy. We suggest that ecological factors, particularly male mate searching, generated the male-biased patterns of phoresy. We highlight the potential importance of studies of phoresy in determining the relative roles of the immunocompetence and ecological theories in generating male-biased parasitism. We suggest that more studies of patterns of phoresy be carried out to allow detailed comparisons with patterns of parasitism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zarepisheh, M; Li, R; Xing, L
Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques, namely column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose are reduced by 10%, 7%, 24% and 12%, respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: The combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the quality of the resultant treatment plans as compared with conventional VMAT or IMRT treatments.
Optimum projection pattern generation for grey-level coded structured light illumination systems
NASA Astrophysics Data System (ADS)
Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben
2017-04-01
Structured light illumination (SLI) systems are well-established optical inspection techniques for noncontact 3D surface measurements. A common technique is multi-frequency sinusoidal SLI, which obtains the phase map at various fringe periods in order to estimate the absolute phase and, hence, the 3D surface information. Nevertheless, multi-frequency SLI systems employ multiple measurement planes (e.g. four phase-shifted frames) to obtain the phase at a given fringe period. It is therefore an age-old challenge to obtain the absolute surface information using fewer measurement frames. Grey-level (GL) coding techniques have been developed as an attempt to reduce the number of planes needed, because a spatio-temporal GL sequence employing p discrete grey levels and m frames has the potential to unwrap up to p^m fringes. Nevertheless, one major disadvantage of GL-based SLI techniques is that there are often errors near the border of each stripe, because an ideal stepwise intensity change cannot be measured. If the step change in intensity is a single discrete grey-level unit, this problem can usually be overcome by applying an appropriate threshold. However, severe errors occur if the intensity change at the border of the stripe exceeds several discrete grey-level units. In this work, an optimum GL-based technique is presented that generates a series of projection patterns with a minimal gradient in the intensity. It is shown that when using this technique, the errors near the border of the stripes can be significantly reduced. This improvement is achieved purely through the choice of generated patterns, and does not involve additional hardware or special post-processing techniques. The performance of the method is validated using both simulations and experiments. The reported technique is generic, works with an arbitrary number of frames, and can employ an arbitrary number of grey levels.
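One simple way to obtain grey-level code sequences in which neighbouring stripes differ in only one frame, and only by a single grey-level unit, is a p-ary reflected Gray code. The sketch below builds such codewords and stacks them into projection frames; it is a hedged illustration of the minimal-gradient idea, not the specific optimum patterns derived in the paper.

```python
import numpy as np

def reflected_gray_codes(p, m):
    """All p**m codewords of an m-frame, p-level reflected Gray code, ordered
    so that adjacent codewords differ in exactly one digit by exactly one
    grey-level unit."""
    if m == 0:
        return [[]]
    shorter = reflected_gray_codes(p, m - 1)
    codes = []
    for level in range(p):
        block = shorter if level % 2 == 0 else shorter[::-1]
        codes.extend([level] + code for code in block)
    return codes

def projection_patterns(p, m, pixels_per_stripe=4):
    """Stack the codewords into m projection frames: frame j assigns stripe k
    the grey level codes[k][j].  Illustrative layout only."""
    codes = np.array(reflected_gray_codes(p, m))            # (p**m, m)
    stripes = np.repeat(codes, pixels_per_stripe, axis=0)   # widen each stripe
    return stripes.T                                        # (m, width) frames

if __name__ == "__main__":
    frames = projection_patterns(p=3, m=2)
    print(frames)   # grey levels 0..2; neighbouring stripes change gently
```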
Simultaneous beam sampling and aperture shape optimization for SPORT.
Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei
2015-02-01
Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head-and-neck case and a prostate case. It significantly improved target conformality and, at the same time, critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head-and-neck case, for example, the average PTV coverage D99% for two PTVs, the cord and brainstem max doses, and the right parotid gland mean dose were improved, respectively, by about 7%, 37%, 12%, and 16%. The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans as compared with conventional IMRT plans.
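Of the three ingredients named here, pattern search is the easiest to show in isolation. The sketch below is a generic compass (coordinate) pattern search on a toy objective, included only to illustrate how such a derivative-free polling step explores regions a (sub)gradient step cannot reach; it is not the SPORT implementation, and the objective and parameters are invented.

```python
import numpy as np

def compass_pattern_search(objective, x0, step=1.0, shrink=0.5,
                           tol=1e-6, max_iter=1000):
    """Minimal compass (coordinate) pattern search: poll the 2n axis
    directions around the incumbent, accept any improving point, otherwise
    shrink the step.  A generic sketch of the pattern-search ingredient,
    not the clinical optimizer described in the abstract."""
    x = np.asarray(x0, dtype=float)
    f = objective(x)
    n = x.size
    for _ in range(max_iter):
        improved = False
        for i in range(n):
            for direction in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += direction * step
                f_trial = objective(trial)
                if f_trial < f:
                    x, f, improved = trial, f_trial, True
        if not improved:
            step *= shrink          # no polling direction helped: refine mesh
            if step < tol:
                break
    return x, f

if __name__ == "__main__":
    rosenbrock = lambda v: (1 - v[0])**2 + 100.0 * (v[1] - v[0]**2)**2
    x_best, f_best = compass_pattern_search(rosenbrock, x0=[-1.2, 1.0])
    print(x_best, f_best)
```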
Efficient generation of holographic news ticker in holographic 3DTV
NASA Astrophysics Data System (ADS)
Kim, Seung-Cheol; Kim, Eun-Soo
2009-08-01
News ticker is used to show breaking news or news headlines in conventional 2-D broadcasting systems. In the case of breaking news, fast creation is needed because the information must be sent quickly. In addition, if a holographic 3-D broadcasting system is launched in the future, the news ticker will remain. Meanwhile, some approaches for generating CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method. However, these methods have drawbacks: they either require much computation time or a huge memory for the look-up table. Recently, a novel LUT (N-LUT) method for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT, without loss of computational speed, was proposed. We therefore propose a method to efficiently generate a holographic news ticker in holographic 3DTV or 3-D movies using the N-LUT method. The proposed method largely consists of five steps: construction of the LUT for each character, extraction of the characters in the news ticker, generation and shifting of the CGH pattern for the news ticker using the per-character LUT, composition of the hologram pattern for the 3-D video with the hologram pattern for the news ticker, and reconstruction of the holographic 3-D video with the news ticker. To confirm the proposed method, a car moving in front of a castle is used as the 3-D video and the words 'HOLOGRAM CAPTION GENERATOR' are used as the news ticker. The simulation results confirm the feasibility of the proposed method for the fast generation of CGH patterns for holographic captions.
Computer Generated Holography with Intensity-Graded Patterns
Conti, Rossella; Assayag, Osnath; de Sars, Vincent; Guillon, Marc; Emiliani, Valentina
2016-01-01
Computer Generated Holography achieves patterned illumination at the sample plane through phase modulation of the laser beam at the objective back aperture. This is obtained by using liquid crystal-based spatial light modulators (LC-SLMs), which modulate the spatial phase of the incident laser beam. A variety of algorithms is employed to calculate the phase modulation masks addressed to the LC-SLM. These algorithms range from simple gratings-and-lenses approaches to generate multiple diffraction-limited spots, to iterative Fourier-transform algorithms capable of generating arbitrary illumination shapes perfectly tailored to the target contour. Applications for holographic light patterning include multi-trap optical tweezers, patterned voltage imaging and optical control of neuronal excitation using uncaging or optogenetics. Past implementations of computer generated holography used binary input profiles to generate binary light distributions at the sample plane. Here we demonstrate that using graded input sources enables the generation of intensity-graded light patterns and extends the range of application of holographic light illumination. First, we use intensity-graded holograms to compensate for the LC-SLM position-dependent diffraction efficiency or for sample fluorescence inhomogeneity. Finally, we show that intensity-graded holography can be used to equalize photo-evoked currents from cells expressing different levels of channelrhodopsin-2 (ChR2), one of the most commonly used optogenetic light-gated channels, taking into account the non-linear dependence of channel opening on incident light. PMID:27799896
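The iterative Fourier-transform algorithms mentioned above are typically Gerchberg-Saxton-type loops. The sketch below is a minimal numpy version that computes an SLM phase mask for a graded-intensity target; the uniform input beam, grid size, and two-spot target are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def gerchberg_saxton(target_intensity, n_iter=100, seed=0):
    """Minimal iterative Fourier-transform (Gerchberg-Saxton-type) phase
    retrieval: find the phase mask at the SLM plane (uniform illumination
    assumed) whose far field approximates a graded-intensity target."""
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(np.asarray(target_intensity, dtype=float))
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        slm_field = np.exp(1j * phase)                 # unit-amplitude input beam
        far_field = np.fft.fft2(slm_field)
        # Keep the propagated phase, impose the graded target amplitude.
        constrained = target_amp * np.exp(1j * np.angle(far_field))
        back = np.fft.ifft2(constrained)
        phase = np.angle(back)                         # new SLM phase estimate
    recon = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
    return phase, recon

if __name__ == "__main__":
    yy, xx = np.mgrid[0:128, 0:128]
    # Graded target: two spots with different requested intensities.
    target = (np.exp(-((xx - 40)**2 + (yy - 64)**2) / 20.0)
              + 0.4 * np.exp(-((xx - 90)**2 + (yy - 64)**2) / 20.0))
    phase, recon = gerchberg_saxton(target)
    print(recon.max(), recon.sum())
```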
Automated problem list generation and physicians perspective from a pilot study.
Devarakonda, Murthy V; Mehta, Neil; Tsou, Ching-Huei; Liang, Jennifer J; Nowacki, Amy S; Jelovsek, John Eric
2017-09-01
An accurate, comprehensive and up-to-date problem list can help clinicians provide patient-centered care. Unfortunately, problem lists created and maintained in electronic health records by providers tend to be inaccurate, duplicative and out of date. With advances in machine learning and natural language processing, it is possible to automatically generate a problem list from the data in the EHR and keep it current. In this paper, we describe an automated problem list generation method and report insights from a pilot study of physicians' assessment of the generated problem lists compared to existing provider-curated problem lists in an institution's EHR system. The natural language processing and machine learning-based Watson method models clinical thinking in identifying a patient's problem list using clinical notes and structured data. This pilot study assessed the Watson method and included 15 randomly selected, de-identified patient records from a large healthcare system, each of which was planned to be reviewed by at least two internal medicine physicians. The physicians created their own problem lists, and then evaluated the overall usefulness of their own problem lists (P), the Watson-generated problem lists (W), and the existing EHR problem lists (E) on a 10-point scale. The primary outcome was pairwise comparisons of P, W, and E. Six of the 10 invited physicians completed 27 assessments of P, W, and E, and in the process evaluated 732 Watson-generated problems and 444 problems in the EHR system. As expected, physicians rated their own lists, P, highest. However, W was rated higher than E. In 89% of assessments, Watson identified at least one important problem that physicians missed. Cognitive computing systems like this Watson system hold the potential for accurate, problem-list-centered summarization of patient records, potentially leading to increased efficiency, better clinical decision support, and improved quality of patient care. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Oshri, Assaf; Himelboim, Itai; Kwon, Josephine A.; Sutton, Tara E.; Mackillop, James
2015-01-01
Objective: The aim of the present study was to examine the links between severities of child abuse (physical vs. sexual), and alcohol use versus problems via social media (Facebook) peer connection structures. Method: A total of 318 undergraduate female students at a public university in the United States reported severity of child abuse experiences and current alcohol use and problems. Social network data were obtained directly from the individuals’ Facebook network. Results: Severity of childhood physical abuse was positively linked to alcohol use and problems via eigenvector centrality, whereas severity of childhood sexual abuse was negatively linked to alcohol use and problems via clustering coefficient. Conclusions: Childhood physical and sexual abuse were linked positively and negatively, respectively, to online social network patterns associated with alcohol use and problems. The study suggests the potential utility of these online network patterns as risk indices and ultimately using social media as a platform for targeted preventive interventions. PMID:26562592
A scheme of pedagogical problems solving in kinematic to observe toulmin argumentation feasibility
NASA Astrophysics Data System (ADS)
Manurung, Sondang R.; Rustaman, Nuryani Y.; Siregar, Nelson
2013-09-01
The purpose of this study is to determine students' ability to map out problem solving. This paper presents a schematic template map used to analyze students' work in pedagogical problem solving. The scheme for mapping students' problem solving was based on the Toulmin Argumentation Pattern (TAP) of argumentative discourse. The samples of this study were three worksheets from physics education students representing the upper, middle and lower levels of a class at one LPTK in Medan. The instrument of this study was an essay test on the topic of kinematics. The data were analyzed with the schematic template map in order to assess the students' ability to map out problem solving. The results showed that the upper-level student followed the appropriate pattern, while the two other students could not follow the pattern exactly.
Evolution of general surgical problems in patients with left ventricular assist devices.
McKellar, Stephen H; Morris, David S; Mauermann, William J; Park, Soon J; Zietlow, Scott P
2012-11-01
Left ventricular assist devices (LVADs) are increasingly used to treat patients with end-stage heart failure. These patients may develop acute noncardiac surgical problems around the time of LVAD implantation or, as survival continues to improve, chronic surgical problems as ambulatory patients remote from the LVAD implant. Previous reports of noncardiac surgical problems in LVAD patients included patients with older, first-generation devices and did not address newer, second-generation devices. We describe the frequency and management of noncardiac surgical problems encountered during LVAD support with these newer-generation devices to assist noncardiac surgeons involved in the care of patients with LVADs. We retrospectively reviewed the medical records of consecutive patients receiving LVADs at our institution. We collected data for any consultation by noncardiac surgeons within the scope of general surgery during LVAD support and the subsequent treatment. Ninety-nine patients received implantable LVADs between 2003 and 2009 (first-generation, n = 19; second-generation, n = 80). Excluding intestinal hemorrhage, general surgical opinions were rendered for 34 patients with 49 problems, mostly in the acute recovery phase after LVAD implantation. Of those, 27 patients underwent 28 operations. Respiratory failure and intra-abdominal pathologies were the most common problems addressed, and the LVAD rarely precluded operation. Patients with second-generation LVADs were more likely to survive hospitalization (P = .04) and to develop chronic, rather than emergent, surgical problems. Patients with LVADs frequently require consultation from noncardiac surgeons within the scope of general surgery and often require operation. Patients with second-generation LVADs are more likely to become outpatients and to develop more elective surgical problems. Noncardiac surgeons will be increasingly involved in caring for patients with LVADs and should anticipate the problems unique to this patient population. Copyright © 2012 Mosby, Inc. All rights reserved.
Generation of isolated asymmetric umbilics in light's polarization
NASA Astrophysics Data System (ADS)
Galvez, Enrique J.; Rojec, Brett L.; Kumar, Vijay; Viswanathan, Nirmal K.
2014-03-01
Polarization-singularity C points, a form of line singularities, are the vectorial counterparts of the optical vortices of spatial modes and fundamental optical features of polarization-spatial modes. Their generation in tailored beams has been limited to so-called "lemon" and "star" C points that contain symmetric dislocations in state-of-polarization patterns. In this Rapid Communication we present the theory and laboratory measurements of two complementary methods to generate isolated asymmetric C points in tailored beams, of which symmetric lemon and star patterns are limiting cases; and we report on the generation of so-called "monstar" patterns, an asymmetric C point with characteristics of both lemons and stars.
Parallel Algorithms and Patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robey, Robert W.
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
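To make one of the listed patterns concrete, here is a prefix scan written as a sequential Python emulation of the work-efficient (Blelloch) upsweep/downsweep schedule; each inner loop touches independent index pairs, and that is the part that would run concurrently. This is an illustrative sketch, not code from the presentation.

```python
def blelloch_exclusive_scan(values):
    """Work-efficient exclusive prefix scan (Blelloch upsweep/downsweep).
    Written sequentially, but each inner loop iterates over independent
    index pairs and could run concurrently -- the 'prefix scan' pattern."""
    a = list(values)
    n = len(a)
    assert n and (n & (n - 1)) == 0, "length must be a power of two"

    # Upsweep (reduce) phase: build partial sums in a binary tree.
    stride = 1
    while stride < n:
        for i in range(0, n, 2 * stride):
            a[i + 2 * stride - 1] += a[i + stride - 1]
        stride *= 2

    # Downsweep phase: push prefix totals back down the tree.
    a[n - 1] = 0
    stride = n // 2
    while stride >= 1:
        for i in range(0, n, 2 * stride):
            left = a[i + stride - 1]
            a[i + stride - 1] = a[i + 2 * stride - 1]
            a[i + 2 * stride - 1] += left
        stride //= 2
    return a

if __name__ == "__main__":
    print(blelloch_exclusive_scan([1, 2, 3, 4, 5, 6, 7, 8]))
    # -> [0, 1, 3, 6, 10, 15, 21, 28]
```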
Primer on clinical acid-base problem solving.
Whittier, William L; Rutecki, Gregory W
2004-03-01
Acid-base problem solving has been an integral part of medical practice in recent generations. Diseases discovered in the last 30-plus years, for example, Bartter syndrome and Gitelman syndrome, D-lactic acidosis, and bulimia nervosa, can be diagnosed according to characteristic acid-base findings. Accuracy in acid-base problem solving is a direct result of a reproducible, systematic approach to arterial pH, partial pressure of carbon dioxide, bicarbonate concentration, and electrolytes. The 'Rules of Five' is one tool that enables clinicians to determine the cause of simple and complex disorders, even triple acid-base disturbances, with consistency. In addition, other electrolyte abnormalities that accompany acid-base disorders, such as hypokalemia, can be incorporated into algorithms that complement the Rules and contribute to efficient problem solving in a wide variety of diseases. Recently, urine electrolytes have also assisted clinicians in further characterizing select disturbances. Acid-base patterns, in many ways, can serve as a 'common diagnostic pathway' shared by all subspecialties in medicine. From infectious disease (eg, lactic acidemia with highly active antiretroviral therapy) through endocrinology (eg, Conn's syndrome, high urine chloride alkalemia) to the interface between primary care and psychiatry (eg, bulimia nervosa with multiple potential acid-base disturbances), acid-base problem solving is the key to unlocking otherwise unrelated diagnoses. Inasmuch as the Rules are clinical tools, they are applied throughout this monograph to diverse pathologic conditions typical in contemporary practice.
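The abstract does not spell out the Rules of Five, so the sketch below only illustrates two standard bedside calculations that systematic acid-base workups of this kind rely on: the serum anion gap and Winter's formula for expected respiratory compensation in metabolic acidosis. The function names, example values, and the reference range quoted in the comments are illustrative assumptions, not the article's specific rules.

```python
def anion_gap(na_meq_l, cl_meq_l, hco3_meq_l):
    """Serum anion gap = Na - (Cl + HCO3), in mEq/L."""
    return na_meq_l - (cl_meq_l + hco3_meq_l)

def winters_expected_paco2(hco3_meq_l):
    """Winter's formula: expected PaCO2 (mmHg) for a simple metabolic
    acidosis is roughly 1.5 * HCO3 + 8, plus or minus 2."""
    center = 1.5 * hco3_meq_l + 8.0
    return center - 2.0, center + 2.0

if __name__ == "__main__":
    # Illustrative laboratory values only.
    gap = anion_gap(na_meq_l=140, cl_meq_l=100, hco3_meq_l=12)
    low, high = winters_expected_paco2(hco3_meq_l=12)
    print(f"anion gap = {gap} mEq/L (typical reference ~8-12)")
    print(f"expected PaCO2 if compensation is appropriate: {low:.0f}-{high:.0f} mmHg")
    # A measured PaCO2 outside this range would suggest a superimposed
    # respiratory disturbance -- the kind of stepwise check a systematic
    # approach such as the 'Rules of Five' formalizes.
```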
A Study of the Female Life Course.
ERIC Educational Resources Information Center
Sugaya, Yoshiko
1985-01-01
A study examined the historical changes that have occurred in the female life course pattern in rural communities in Japan. The family and career patterns of two generations of women were studied from a generation-lineage and age cohort perspective. The older-generation sample consisted of women between the ages of 50 and 79, and the…
Interplay of cell dynamics and epithelial tension during morphogenesis of the Drosophila pupal wing
Etournay, Raphaël; Popović, Marko; Merkel, Matthias; Nandi, Amitabha; Blasse, Corinna; Aigouy, Benoît; Brandl, Holger; Myers, Gene; Salbreux, Guillaume; Jülicher, Frank; Eaton, Suzanne
2015-01-01
How tissue shape emerges from the collective mechanical properties and behavior of individual cells is not understood. We combine experiment and theory to study this problem in the developing wing epithelium of Drosophila. At pupal stages, the wing-hinge contraction contributes to anisotropic tissue flows that reshape the wing blade. Here, we quantitatively account for this wing-blade shape change on the basis of cell divisions, cell rearrangements and cell shape changes. We show that cells both generate and respond to epithelial stresses during this process, and that the nature of this interplay specifies the pattern of junctional network remodeling that changes wing shape. We show that patterned constraints exerted on the tissue by the extracellular matrix are key to forcing the tissue into the right shape. We present a continuum mechanical model that quantitatively describes the relationship between epithelial stresses and cell dynamics, and how their interplay reshapes the wing. DOI: http://dx.doi.org/10.7554/eLife.07090.001 PMID:26102528
NASA Astrophysics Data System (ADS)
Yashvantrai Vyas, Bhargav; Maheshwari, Rudra Prakash; Das, Biswarup
2016-06-01
The application of series compensation in extra high voltage (EHV) transmission lines makes the protection job difficult for engineers, due to alterations in system parameters and measurements. The problem is amplified by the inclusion of electronically controlled compensation such as thyristor controlled series compensation (TCSC), as it produces harmonics and rapid changes in system parameters during faults associated with TCSC control. This paper presents a pattern recognition based fault type identification approach using a support vector machine. The scheme uses only a half cycle of post-fault three-phase current data to accomplish the task. The change in current signal features during the fault is used as the discriminatory measure. The developed scheme is tested over a large set of fault data with variations in system and fault parameters. These fault cases have been generated with PSCAD/EMTDC on a 400 kV, 300 km transmission line model. The developed algorithm has proved well suited for implementation on TCSC compensated lines, with improved accuracy and speed.
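For readers unfamiliar with the classifier side, the sketch below shows a support-vector-machine fault-type classifier in scikit-learn, trained on synthetic feature vectors that stand in for half-cycle post-fault three-phase current features. The class labels, the four-dimensional features, and the cluster structure are invented for illustration; the paper's features come from PSCAD/EMTDC simulations.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for half-cycle post-fault current features (e.g. per-phase
# RMS change plus a zero-sequence measure).  Real features would come from the
# PSCAD/EMTDC simulations described in the paper.
rng = np.random.default_rng(1)
fault_types = ["AG", "BC", "BCG", "ABC"]         # illustrative class labels
X, y = [], []
for label in fault_types:
    center = rng.normal(size=4) * 3.0            # one feature cluster per class
    X.append(center + 0.5 * rng.normal(size=(200, 4)))
    y += [label] * 200
X = np.vstack(X)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))
```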
Brain-Computer Interface Based on Generation of Visual Images
Bobrov, Pavel; Frolov, Alexander; Cantor, Charles; Fedulova, Irina; Bakhnyan, Mikhail; Zhavoronkov, Alexander
2011-01-01
This paper examines the task of recognizing EEG patterns that correspond to performing three mental tasks: relaxation and imagining two types of pictures, faces and houses. The experiments were performed using two EEG headsets: BrainProducts ActiCap and Emotiv EPOC. The Emotiv headset is becoming widely used in consumer BCI applications, allowing for large-scale EEG experiments in the future. Since classification accuracy significantly exceeded the level of random classification during the first three days of the experiment with the EPOC headset, a control experiment was performed on the fourth day using the ActiCap. The control experiment showed that utilization of high-quality research equipment can enhance classification accuracy (up to 68% in some subjects) and that the accuracy is independent of the presence of EEG artifacts related to blinking and eye movement. This study also shows that a computationally inexpensive Bayesian classifier based on covariance matrix analysis yields classification accuracy in this problem similar to that of a more sophisticated multi-class common spatial patterns (MCSP) classifier. PMID:21695206
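A covariance-based Bayesian classifier of the general kind mentioned here can be sketched in a few lines: model each mental state as a zero-mean multichannel Gaussian with its own spatial covariance, and assign a trial to the state with the highest log-likelihood. This is a simplified, hedged illustration with simulated data, not the authors' exact classifier or their EEG recordings.

```python
import numpy as np

def class_covariances(trials_by_class, ridge=1e-6):
    """Average spatial covariance per class from trials of shape
    (channels, samples); a small ridge keeps the matrices invertible."""
    covs = {}
    for label, trials in trials_by_class.items():
        cov = np.mean([np.cov(trial) for trial in trials], axis=0)
        covs[label] = cov + ridge * np.eye(cov.shape[0])
    return covs

def classify_trial(trial, covs):
    """Assign the trial to the class whose zero-mean Gaussian model (with the
    class spatial covariance) gives the highest log-likelihood."""
    n_samples = trial.shape[1]
    scores = {}
    for label, cov in covs.items():
        _, logdet = np.linalg.slogdet(cov)
        quad = np.trace(np.linalg.solve(cov, trial @ trial.T))
        scores[label] = -0.5 * (n_samples * logdet + quad)
    return max(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_ch, n_s = 8, 256
    # Toy data: two "mental states" with different spatial covariance.
    mix_a, mix_b = rng.normal(size=(n_ch, n_ch)), rng.normal(size=(n_ch, n_ch))
    make = lambda mix: [mix @ rng.normal(size=(n_ch, n_s)) for _ in range(30)]
    covs = class_covariances({"faces": make(mix_a), "houses": make(mix_b)})
    test_trial = mix_a @ rng.normal(size=(n_ch, n_s))
    print(classify_trial(test_trial, covs))   # expected: "faces"
```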
Eriksson, Anders; Manica, Andrea
2012-08-28
Recent comparisons between anatomically modern humans and ancient genomes of other hominins have raised the tantalizing, and hotly debated, possibility of hybridization. Although several tests of hybridization have been devised, they all rely on the degree to which different modern populations share genetic polymorphisms with the ancient genomes of other hominins. However, spatial population structure is expected to generate genetic patterns similar to those that might be attributed to hybridization. To investigate this problem, we take Neanderthals as a case study, and build a spatially explicit model of the shared history of anatomically modern humans and this hominin. We show that the excess polymorphism shared between Eurasians and Neanderthals is compatible with scenarios in which no hybridization occurred, and is strongly linked to the strength of population structure in ancient populations. Thus, we recommend caution in inferring admixture from geographic patterns of shared polymorphisms, and argue that future attempts to investigate ancient hybridization between humans and other hominins should explicitly account for population structure.
4K x 2K pixel color video pickup system
NASA Astrophysics Data System (ADS)
Sugawara, Masayuki; Mitani, Kohji; Shimamoto, Hiroshi; Fujita, Yoshihiro; Yuyama, Ichiro; Itakura, Keijirou
1998-12-01
This paper describes the development of an experimental super-high-definition color video camera system. During the past several years there has been much interest in super-high-definition images as the next-generation image medium. One of the difficulties in implementing a super-high-definition motion imaging system is constructing the image-capturing section (camera). Even state-of-the-art semiconductor technology cannot realize an image sensor with enough pixels and a high enough output data rate for super-high-definition images. The present study is an attempt to fill this gap. The authors solve the problem by using a new imaging method in which four HDTV sensors are attached to new color-separation optics so that their pixel sampling patterns form a checkerboard pattern. A series of imaging experiments demonstrates that this technique is an effective approach to capturing super-high-definition moving images in the present situation, where no image sensors exist for such images.
Universal resilience patterns in cascading load model: More capacity is not always better
NASA Astrophysics Data System (ADS)
Wang, Jianwei; Wang, Xue; Cai, Lin; Ni, Chengzhang; Xie, Wei; Xu, Bo
We study the problem of universal resilience patterns in complex networks against cascading failures. We revise the classical betweenness method and overcome its limitation in quantifying the load in a cascading model. Considering that the load generated by all nodes should equal the load transported by all edges in the whole network, we propose a new method to quantify the load on an edge and construct a simple cascading model. By attacking the edge with the highest load, we show that, if the flow between two nodes is transported along the shortest paths between them, then the resilience of some networks against cascading failures actually decreases as the capacity of every edge is enhanced, i.e., more capacity is not always better. We also observe an abnormal fluctuation of the additional load that exceeds the capacity of each edge. Using a simple graph, we analyze the propagation of cascading failures step by step and give a reasonable explanation of the abnormal fluctuation of the cascading dynamics.
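A baseline version of the cascade experiment described here can be reproduced with networkx. In the sketch below the edge load is taken to be shortest-path edge betweenness (the classical measure the paper revises), each capacity is (1 + alpha) times the initial load, and the cascade is triggered by removing the highest-load edge. The graph model and alpha values are illustrative, and the paper's own load definition differs from the one used here.

```python
import networkx as nx

def _key(edge):
    return tuple(sorted(edge))

def cascade_after_attack(graph, alpha=0.3):
    """Attack the edge with the highest initial load and propagate failures:
    an edge fails whenever its recomputed load exceeds its capacity,
    (1 + alpha) times its initial load.  Load is shortest-path edge
    betweenness, i.e. the classical measure, used only as a baseline."""
    g = graph.copy()
    load0 = {_key(e): l for e, l in
             nx.edge_betweenness_centrality(g).items()}
    capacity = {e: (1.0 + alpha) * l for e, l in load0.items()}

    attacked = max(load0, key=load0.get)
    g.remove_edge(*attacked)
    failed = [attacked]

    while True:
        load = {_key(e): l for e, l in
                nx.edge_betweenness_centrality(g).items()}
        overloaded = [e for e, l in load.items() if l > capacity[e]]
        if not overloaded:
            break
        g.remove_edges_from(overloaded)
        failed.extend(overloaded)
    return failed

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(200, 2, seed=1)
    for a in (0.1, 0.5, 1.0):
        print(f"alpha={a}: {len(cascade_after_attack(G, a))} edges failed")
```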
4D Origami by Smart Embroidery.
Stoychev, Georgi; Razavi, Mir Jalil; Wang, Xianqiao; Ionov, Leonid
2017-09-01
There exist many methods for processing materials: extrusion, injection molding, fiber spinning and 3D printing, to name a few. In most cases, materials with a static, fixed shape are produced. However, numerous advanced applications require customized elements with reconfigurable shape. The few available techniques capable of overcoming this problem are expensive and/or time-consuming. Here, the use of one of the most ancient structuring technologies, embroidery, is proposed to generate sophisticated patterns of active materials and, in this way, to achieve complex actuation. By combining experiments and computational modeling, the fundamental rules that can predict the folding behavior of sheets with a variety of stitch patterns are elucidated. It is demonstrated that theoretical mechanics analysis is only suitable for predicting the behavior of the simplest experimental setups, whereas computer modeling gives better predictions for more complex cases. Finally, the applicability of the rules is shown by designing basic origami structures and wrinkling substrates with controlled thermal insulation properties. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Speech rhythm in Kannada speaking adults who stutter.
Maruthy, Santosh; Venugopal, Sahana; Parakh, Priyanka
2017-10-01
A longstanding hypothesis about the underlying mechanisms of stuttering suggests that speech disfluencies may be associated with problems in timing and temporal patterning of speech events. Fifteen adults who do and do not stutter read five sentences, and from these, the vocalic and consonantal durations were measured. Using these, pairwise variability index (raw PVI for consonantal intervals and normalised PVI for vocalic intervals) and interval based rhythm metrics (PercV, DeltaC, DeltaV, VarcoC and VarcoV) were calculated for all the participants. Findings suggested higher mean values in adults who stutter when compared to adults who do not stutter for all the rhythm metrics except for VarcoV. Further, statistically significant difference between the two groups was found for all the rhythm metrics except for VarcoV. Combining the present results with consistent prior findings based on rhythm deficits in children and adults who stutter, there appears to be strong empirical support for the hypothesis that individuals who stutter may have deficits in generation of rhythmic speech patterns.
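The rhythm metrics named in this abstract have standard definitions, and the sketch below computes them from lists of vocalic and consonantal interval durations: raw PVI for consonantal intervals, normalised PVI for vocalic intervals, and the interval measures PercV, DeltaC, DeltaV, VarcoC and VarcoV. The example durations are invented; only the formulas are standard.

```python
import numpy as np

def raw_pvi(intervals):
    """Raw pairwise variability index: mean absolute difference between
    successive interval durations (used for consonantal intervals)."""
    d = np.asarray(intervals, dtype=float)
    return float(np.mean(np.abs(np.diff(d))))

def normalised_pvi(intervals):
    """Normalised PVI: successive differences divided by the local mean
    duration, times 100 (used for vocalic intervals)."""
    d = np.asarray(intervals, dtype=float)
    return float(100.0 * np.mean(np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2.0)))

def interval_metrics(vocalic, consonantal):
    """PercV, DeltaV, DeltaC, VarcoV and VarcoC as usually defined."""
    v, c = np.asarray(vocalic, float), np.asarray(consonantal, float)
    perc_v = 100.0 * v.sum() / (v.sum() + c.sum())
    delta_v, delta_c = v.std(ddof=0), c.std(ddof=0)
    return {"PercV": perc_v,
            "DeltaV": delta_v, "DeltaC": delta_c,
            "VarcoV": 100.0 * delta_v / v.mean(),
            "VarcoC": 100.0 * delta_c / c.mean()}

if __name__ == "__main__":
    vocalic = [0.12, 0.08, 0.15, 0.10, 0.09]        # seconds, illustrative
    consonantal = [0.07, 0.11, 0.06, 0.09, 0.13]
    print("nPVI(V):", round(normalised_pvi(vocalic), 1))
    print("rPVI(C):", round(raw_pvi(consonantal), 3))
    print(interval_metrics(vocalic, consonantal))
```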
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.
Neuromimetic machine vision and pattern recognition algorithms are of great interest for landscape characterization and change detection in satellite imagery in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methods to the environmental sciences, using adaptive sparse signal processing combined with machine learning. A Hebbian learning rule is used to build multispectral, multiresolution dictionaries from regional satellite normalized band difference index data. Land cover labels are automatically generated via our CoSA algorithm: Clustering of Sparse Approximations, using a clustering distance metric that combines spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. We demonstrate our method on example Worldview-2 satellite images of an Arctic region, and use CoSA labels to detect seasonal surface changes. In conclusion, our results suggest that neuroscience-based models are a promising approach to practical pattern recognition and change detection problems in remote sensing.
A two-in-one process for reliable graphene transistors processed with photo-lithography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahlberg, P.; Hinnemo, M.; Song, M.
2015-11-16
Research on graphene field-effect transistors (GFETs) has mainly relied on devices fabricated using electron-beam lithography for pattern generation, a method that has known problems with polymer contaminants. GFETs fabricated via photo-lithography suffer even worse from other chemical contaminations, which may lead to strong unintentional doping of the graphene. In this letter, we report on a scalable fabrication process for reliable GFETs based on ordinary photo-lithography by eliminating the aforementioned issues. The key to making this GFET processing compatible with silicon technology lies in a two-in-one process where a gate dielectric is deposited by means of atomic layer deposition. During this deposition step, contaminants, likely unintentionally introduced during the graphene transfer and patterning, are effectively removed. The resulting GFETs exhibit current-voltage characteristics representative of intrinsic non-doped graphene. Fundamental aspects pertaining to the surface engineering employed in this work are investigated in the light of chemical analysis in combination with electrical characterization.
Global energy, sustainability, and the conventional development paradigm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raskin, P.D.; Margolis, R.M.
1998-05-01
The conventional development paradigm assumes that the values, consumption patterns, and dynamics of the western industrial system will be progressively played out on a global scale. In this inquiry, the authors explore the implications of the conventional paradigm for the evolution of global energy patterns, and the compatibility with notions of sustainability. They present a global long-range conventional development scenario to the year 2050, and identify major environmental, resource, and social pressures and uncertainties. These include the economic and geopolitical consequences of fossil fuel depletion, the environmental and security implications of increased nuclear generation, the risk of significant climatic change, and the threats to social cohesion of distributional inequities. Such potential problems could negate the basic scenario assumption of steady economic and social development. By clarifying the stress points in a conventional picture of energy development, the scenario provides a useful point of departure for examining alternative long-range scenarios for sustainable energy development.
Efficient use of bit planes in the generation of motion stimuli
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Stone, Leland S.
1988-01-01
The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
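The quadrature trick described here is easy to demonstrate in software. The sketch below halftones the sine- and cosine-phase components of a grating to one bit each and then combines them with per-frame weights, the role the hardware color lookup table plays in the paper; the random-dither halftone and all parameter values are illustrative stand-ins, and no display hardware is involved.

```python
import numpy as np

def drifting_grating_frames(size=256, cycles=8, n_frames=16, temporal_freq=1):
    """Build a drifting sine grating from two 1-bit (halftoned) spatial
    components combined with per-frame quadrature weights -- a software
    stand-in for the lookup-table animation described in the paper."""
    x = np.arange(size) * 2 * np.pi * cycles / size
    sine_comp = np.sin(x)[None, :].repeat(size, axis=0)
    cosine_comp = np.cos(x)[None, :].repeat(size, axis=0)

    # Crude 1-bit halftone by thresholding against random dither (the paper
    # uses a proper digital halftoning technique; this is only illustrative).
    rng = np.random.default_rng(0)
    sine_bits = (sine_comp > rng.uniform(-1, 1, sine_comp.shape)).astype(float)
    cosine_bits = (cosine_comp > rng.uniform(-1, 1, cosine_comp.shape)).astype(float)

    frames = []
    for t in range(n_frames):
        phase = 2 * np.pi * temporal_freq * t / n_frames
        # Weighted sum of the two bit planes, as a lookup table would apply:
        # cos(phase)*sin(x) + sin(phase)*cos(x) = sin(x + phase), i.e. a drift.
        frame = (np.cos(phase) * (2 * sine_bits - 1)
                 + np.sin(phase) * (2 * cosine_bits - 1))
        frames.append(frame)
    return np.stack(frames)

if __name__ == "__main__":
    movie = drifting_grating_frames()
    print(movie.shape)   # (16, 256, 256): successive frames of a drifting grating
```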
2014-01-01
Background The utilisation of good design practices in the development of complex health services is essential to improving quality. Healthcare organisations, however, are often seriously out of step with modern design thinking and practice. As a starting point to encourage the uptake of good design practices, it is important to understand the context of their intended use. This study aims to do that by articulating current health service development practices. Methods Eleven service development projects carried out in a large mental health service were investigated through in-depth interviews with six operation managers. The critical decision method in conjunction with diagrammatic elicitation was used to capture descriptions of these projects. Stage-gate design models were then formed to visually articulate, classify and characterise different service development practices. Results Projects were grouped into three categories according to design process patterns: new service introduction and service integration; service improvement; service closure. Three common design stages: problem exploration, idea generation and solution evaluation - were then compared across the design process patterns. Consistent across projects were a top-down, policy-driven approach to exploration, underexploited idea generation and implementation-based evaluation. Conclusions This study provides insight into where and how good design practices can contribute to the improvement of current service development practices. Specifically, the following suggestions for future service development practices are made: genuine user needs analysis for exploration; divergent thinking and innovative culture for idea generation; and fail-safe evaluation prior to implementation. Better training for managers through partnership working with design experts and researchers could be beneficial. PMID:24438471
A cluster merging method for time series microarray with production values.
Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio
2014-09-01
A challenging task in time-course microarray data analysis is to cluster genes meaningfully by combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups of highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created for the individual temporal series (representing different biological replicates measured at the same time points) and to merge them by taking into account the frequency with which two genes are assembled together across the clusterings. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time-series microarray task with the aim of finding co-expressed genes related to the production and growth of a certain bacterium. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevance measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of the time series and the same shape-based algorithm.
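The core merging step, counting how often pairs of genes end up in the same cluster across the per-replicate clusterings and grouping pairs that frequently co-occur, can be sketched as consensus clustering on a co-association matrix. The code below illustrates that idea with invented labelings; the paper's method additionally uses shape-based clustering of the individual series and production/growth matching, which are not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def coassociation_matrix(labelings):
    """Fraction of input clusterings (one per biological replicate) in which
    each pair of genes is placed in the same cluster."""
    labelings = np.asarray(labelings)            # shape (n_replicates, n_genes)
    n_genes = labelings.shape[1]
    co = np.zeros((n_genes, n_genes))
    for labels in labelings:
        co += (labels[:, None] == labels[None, :]).astype(float)
    return co / labelings.shape[0]

def merge_clusterings(labelings, n_groups):
    """Merge per-replicate clusterings by hierarchical clustering of the
    co-association dissimilarity (1 - agreement frequency)."""
    co = coassociation_matrix(labelings)
    dissimilarity = 1.0 - co
    np.fill_diagonal(dissimilarity, 0.0)
    condensed = squareform(dissimilarity, checks=False)
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=n_groups, criterion="maxclust")

if __name__ == "__main__":
    # Three replicates clustering six genes; genes 0-2 and 3-5 mostly co-occur.
    replicate_labels = [[0, 0, 0, 1, 1, 1],
                        [0, 0, 1, 1, 1, 1],
                        [2, 2, 2, 3, 3, 3]]
    print(merge_clusterings(replicate_labels, n_groups=2))
```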
Consistency functional map propagation for repetitive patterns
NASA Astrophysics Data System (ADS)
Wang, Hao
2017-09-01
Repetitive patterns appear frequently in both man-made and natural environments. Automatically and robustly detecting such patterns in an image is a challenging problem. We study repetitive pattern alignment by embedding a segmentation cue into a functional map model. However, this model cannot handle repetitive patterns directly because of their large photometric and geometric variations. Thus, a consistency functional map propagation (CFMP) algorithm that extends the functional map with dynamic propagation is proposed to address this issue. The propagation model operates in two steps. The first aligns the patterns within a local region, transferring segmentation functions among patterns; this can be cast as an L norm optimization problem. The second updates the template segmentation for the next round of pattern discovery by merging the transferred segmentation functions. Extensive experiments and comparative analyses demonstrate the encouraging performance of the proposed algorithm in the detection and segmentation of repetitive patterns.
Linear Programming and Its Application to Pattern Recognition Problems
NASA Technical Reports Server (NTRS)
Omalley, M. J.
1973-01-01
Linear programming and linear programming like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.
Minimal perceptrons for memorizing complex patterns
NASA Astrophysics Data System (ADS)
Pastor, Marissa; Song, Juyong; Hoang, Danh-Tai; Jo, Junghyo
2016-11-01
Feedforward neural networks have been investigated to understand learning and memory, as well as applied to numerous practical problems in pattern classification. It is a rule of thumb that more complex tasks require larger networks. However, the design of optimal network architectures for specific tasks is still an unsolved fundamental problem. In this study, we consider three-layered neural networks for memorizing binary patterns. We developed a new complexity measure of binary patterns, and estimated the minimal network size for memorizing them as a function of their complexity. We formulated the minimal network size for regular, random, and complex patterns. In particular, the minimal size for complex patterns, which are neither ordered nor disordered, was predicted by measuring their Hamming distances from known ordered patterns. Our predictions agree with simulations based on the back-propagation algorithm.
Cost-sensitive AdaBoost algorithm for ordinal regression based on extreme learning machine.
Riccardi, Annalisa; Fernández-Navarro, Francisco; Carloni, Sante
2014-10-01
In this paper, the well known stagewise additive modeling using a multiclass exponential (SAMME) boosting algorithm is extended to address problems where there exists a natural order in the targets using a cost-sensitive approach. The proposed ensemble model uses an extreme learning machine (ELM) model as a base classifier (with the Gaussian kernel and the additional regularization parameter). The closed form of the derived weighted least squares problem is provided, and it is employed to estimate analytically the parameters connecting the hidden layer to the output layer at each iteration of the boosting algorithm. Compared to the state-of-the-art boosting algorithms, in particular those using ELM as base classifier, the suggested technique does not require the generation of a new training dataset at each iteration. The adoption of the weighted least squares formulation of the problem has been presented as an unbiased and alternative approach to the already existing ELM boosting techniques. Moreover, the addition of a cost model for weighting the patterns, according to the order of the targets, enables the classifier to tackle ordinal regression problems further. The proposed method has been validated by an experimental study by comparing it with already existing ensemble methods and ELM techniques for ordinal regression, showing competitive results.
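The abstract above centres on a closed-form weighted, regularized least-squares solve for the output weights at each boosting iteration. The sketch below illustrates that algebraic step on a random sigmoid hidden layer; the hidden-layer construction and all parameter names are assumptions and do not reproduce the paper's Gaussian-kernel formulation.

```python
# Sketch of the weighted regularized least-squares solve used to fit the
# output layer of an ELM-style base learner inside a boosting loop
# (illustrative; not the paper's exact Gaussian-kernel model).
import numpy as np

def elm_weighted_fit(X, T, sample_weights, n_hidden=50, C=1.0, seed=0):
    """X: (n, d) inputs, T: (n, k) one-hot targets, sample_weights: (n,)."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                    # random biases
    H = np.tanh(X @ W_in + b)                        # hidden activations
    W = np.diag(sample_weights)                      # boosting pattern weights
    # beta = (H' W H + I/C)^-1 H' W T : closed-form weighted ridge solution
    A = H.T @ W @ H + np.eye(n_hidden) / C
    beta = np.linalg.solve(A, H.T @ W @ T)
    return W_in, b, beta

def elm_predict(X, W_in, b, beta):
    return np.tanh(X @ W_in + b) @ beta
```

Because the hidden layer is fixed and random, only the linear solve is repeated as the boosting weights change, which is what makes the per-iteration fit cheap.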
From Central Pattern Generator to Sensory Template in the Evolution of Birdsong
ERIC Educational Resources Information Center
Konishi, Masakazu
2010-01-01
Central nervous networks, be they a part of the human brain or a group of neurons in a snail, may be designed to produce distinct patterns of movement. Central pattern generators can account for the development and production of normal vocal signals without auditory feedback in non-songbirds. Songbirds need auditory feedback to develop and…
The Creativity of Reflective and Impulsive Selected Students in Solving Geometric Problems
NASA Astrophysics Data System (ADS)
Shoimah, R. N.; Lukito, A.; Siswono, T. Y. E.
2018-01-01
This research purposed to describe the elementary students’ creativity with reflective and impulsive cognitive style in solving geometric problems. This research used qualitative research methods. The data was collected by written tests and task-based interviews. The subjects consisted of two 5th grade students that were measured by MFFT (Matching Familiar Figures Test). The data were analyzed based on the three main components of creativity; that is fluency, flexibility, and novelty. This results showed that subject with reflective cognitive style in solving geometric problems met all components of creativity (fluency; subject generated more than three different right-ideas in solving problems, flexibility; subject generated more than two different ways to get problem solved, and novelty; subject generated new ideas and new ways that original and has never been used before). While subject with impulsive cognitive style in solving geometric problems met two components of creativity (fluency; subject generated more than three different right-ideas in solving problems, flexibility; subject generated two different ways to get problem solved). Thus, it could be concluded that reflective students are more creative in solving geometric problems. The results of this research can also be used as a guideline in the future assessment of creativity based on cognitive style.
Bi-criteria travelling salesman subtour problem with time threshold
NASA Astrophysics Data System (ADS)
Kumar Thenepalle, Jayanth; Singamsetty, Purusotham
2018-03-01
This paper deals with the bi-criteria travelling salesman subtour problem with time threshold (BTSSP-T), which comes from the family of the travelling salesman problem (TSP) and is NP-hard in the strong sense. The problem arises in several application domains, mainly in routing and scheduling contexts. Here, the model focuses on two criteria: total travel distance and gains attained. The BTSSP-T aims to determine a subtour that starts and ends at the same city and visits a subset of cities at a minimum travel distance with maximum gains, such that the time spent on the tour does not exceed the predefined time threshold. A zero-one integer-programming formulation is adopted to model this problem with all practical constraints, and it admits a finite set of feasible solutions (one for each tour). Two algorithms, namely the Lexi-Search Algorithm (LSA) and the Tabu Search (TS) algorithm, have been developed to solve the BTSSP-T. The proposed LSA implicitly enumerates the feasible patterns and provides an efficient solution with backtracking, whereas the TS, a metaheuristic, yields a good approximate solution. A numerical example is presented to illustrate the search mechanism of the LSA. Numerical experiments are carried out in the MATLAB environment on benchmark instances available in the TSPLIB library as well as on randomly generated test instances. The experimental results show that the proposed LSA works better than the TS algorithm in terms of solution quality and, computationally, both LSA and TS are competitive.
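As a rough illustration of the underlying model (collect gains by visiting a subset of cities on a closed tour without exceeding a travel-time budget), here is a toy insertion heuristic. It is neither the paper's Lexi-Search nor its Tabu Search, and every name and parameter is an illustrative assumption.

```python
# Toy insertion heuristic for a gain-collecting subtour under a time budget
# (illustrative only; not the paper's Lexi-Search or Tabu Search).

def tour_length(tour, dist):
    # closed-tour length: return to the starting city at the end
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def greedy_subtour(dist, gain, depot, time_limit, speed=1.0):
    """dist: matrix of travel distances, gain: per-city gains,
    depot: start/end city index, time_limit: threshold on travel time."""
    tour = [depot]
    remaining = set(range(len(gain))) - {depot}
    while remaining:
        best = None
        for city in remaining:
            for pos in range(1, len(tour) + 1):
                cand = tour[:pos] + [city] + tour[pos:]
                if tour_length(cand, dist) / speed <= time_limit:
                    extra = tour_length(cand, dist) - tour_length(tour, dist)
                    score = gain[city] / (extra + 1e-9)  # gain per added distance
                    if best is None or score > best[0]:
                        best = (score, city, cand)
        if best is None:
            break                      # no city fits within the time threshold
        _, city, tour = best
        remaining.discard(city)
    return tour, tour_length(tour, dist), sum(gain[c] for c in tour)
```

A heuristic like this only gives a feasible starting point; exact or metaheuristic search, as in the paper, is needed to trade the two criteria off properly.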
Hoppmann, Christiane A; Coats, Abby Heckman; Blanchard-Fields, Fredda
2008-07-01
Qualitative interviews on family and financial problems from 332 adolescents, young, middle-aged, and older adults, demonstrated that developmentally relevant goals predicted problem-solving strategy use over and above problem domain. Four focal goals concerned autonomy, generativity, maintaining good relationships with others, and changing another person. We examined both self- and other-focused problem-solving strategies. Autonomy goals were associated with self-focused instrumental problem solving and generative goals were related to other-focused instrumental problem solving in family and financial problems. Goals of changing another person were related to other-focused instrumental problem solving in the family domain only. The match between goals and strategies, an indicator of problem-solving adaptiveness, showed that young individuals displayed the greatest match between autonomy goals and self-focused problem solving, whereas older adults showed a greater match between generative goals and other-focused problem solving. Findings speak to the importance of considering goals in investigations of age-related differences in everyday problem solving.
ERIC Educational Resources Information Center
de Mestre, Neville
2008-01-01
This article presents a hands-on experiment that covers many areas of high school mathematics. Included are the notions of patterns, proof, triangular numbers and various aspects of problem solving. The problem involves the arrangements of a school of fish using split peas or buttons to represent the fish. (Contains 4 figures.)
New approach for pattern collapse problem by increasing contact area at sub-100nm patterning
NASA Astrophysics Data System (ADS)
Lee, Sung-Koo; Jung, Jae Chang; Lee, Min Suk; Lee, Sung K.; Kim, Sam Young; Hwang, Young-Sun; Bok, Cheol K.; Moon, Seung-Chan; Shin, Ki S.; Kim, Sang-Jung
2003-06-01
To shrink feature sizes below 100 nm, new light sources for photolithography are emerging, such as ArF (193 nm), F2 (157 nm) and EUV (13 nm). However, as pattern sizes decrease below 100 nm, a new obstacle, the pattern collapse problem, becomes the most serious bottleneck on the road to sub-100 nm lithography. The main cause of pattern collapse is the capillary force, which increases as the pattern size decreases. Consequently, some attempts have been made to reduce this capillary force by switching to developer or rinse materials with low surface tension, while other efforts have aimed to increase the adhesion between the resist and the underlying material (organic BARC). In this study, we propose a novel approach to the pattern collapse problem that increases the contact area between the underlying material (organic BARC) and the resist pattern. The basic concept is that creating nano-scale topology on the underlying material increases the contact area between it and the resist. The process scheme is as follows: after coating and baking the organic BARC, nano-scale topology (3~10 nm) is formed in the BARC by etching; the resist is then coated and exposed on this topology; finally, after development, the contact area between the organic BARC and the resist is increased. Although the nano-scale topology was produced by etching, the resulting 20 nm topology variation induced a large substrate reflectivity of 4.2%, so the pattern fidelity was poor for a 100 nm 1:1 island pattern. A new method was therefore needed to improve pattern fidelity, and this was achieved by introducing a sacrificial BARC layer. In this scheme, an organic BARC with a k value of about 0.64 is coated first, and a sacrificial BARC layer with a k value of about 0.18 is coated on top of it. Nano-scale topology (1~4 nm) is formed by etching this sacrificial BARC layer, and the contact area between the sacrificial layer and the resist is increased in the same way as above. With the sacrificial layer, the substrate reflectivity was reduced dramatically to 0.2% despite the 20 nm topology variation of the sacrificial BARC layer, and a 100 nm 1:1 L/S pattern could be obtained. With the conventional process, the minimum CD at which no collapse occurred was 96.5 nm; by applying the sacrificial BARC layer, the minimum CD at which no collapse occurred was 65.7 nm. In conclusion, nano-scale topology combined with a sacrificial BARC layer yields very small patterns that are robust against pattern collapse.
NASA Astrophysics Data System (ADS)
Gerik, A.; Kruhl, J. H.
2006-12-01
The quantitative analysis of patterns - as geometric arrangements of material domains with specific geometric or crystallographic properties such as shape, size or crystallographic orientation - has been shown to be a valuable tool with a wide field of applications in the geo- and material sciences. Pattern quantification allows an unbiased comparison of experimentally generated or theoretical patterns with patterns of natural origin. In addition, the application of different methods can provide information about different pattern-forming processes. This information includes the distribution of crystals in a matrix - used, for example, to analyse the nature and orientation of flow within a melt, or the shear strain regime governing at the time the pattern was formed - as well as the nature of fracture patterns at different scales, all of which are of great interest not only in structural and engineering geology but also in material sciences. Different approaches to this problem have been discussed over the past fifteen years, yet only a few of the methods have been applied successfully, at least to single examples (e.g. Velde et al., 1990; Harris et al., 1991; Peternell et al., 2003; Volland & Kruhl, 2004). One of the reasons for this has been the large amount of time needed to prepare and analyse the samples. To overcome this problem, a first selection of promising methods has been implemented in a growing collection of software tools: (1) the modifications that Harris et al. (1991) suggested for the Cantor's dust method (Velde et al., 1990), which Volland & Kruhl (2004) applied to show the anisotropy in a breccia sample; (2) a map-counting method that uses local box-counting dimensions to map the inhomogeneity of a crystal distribution pattern, used by Peternell et al. (2003) to analyse the distribution of phenocrysts in a porphyric granite; and (3) a modified perimeter method that relates the directional dependence of the perimeter of grain boundaries to the anisotropy of the pattern (Peternell et al., 2003). We have used the resulting new possibilities to analyse numerous patterns of natural, experimental and mathematical origin in order to determine the scope of applicability of the different methods, and we present these results along with an evaluation of their individual sensitivities and limitations. References: Harris, C., Franssen, R. & Loosveld, R. (1991): Fractal analysis of fractures in rocks: the Cantor's Dust method - comment. Tectonophysics 198: 107-111. Peternell, M., Andries, F. & Kruhl, J.H. (2003): Magmatic flow-pattern anisotropies - analyzed on the basis of a new 'map-mounting' fractal geometry method. DRT Tectonics conference, St. Malo, Book of Abstracts. Velde, B., Dubois, J., Touchard, G. & Badri, A. (1990): Fractal analysis of fractures in rocks: the Cantor's Dust method. Tectonophysics (179): 345-352. Volland, S. & Kruhl, J.H. (2004): Anisotropy quantification: the application of fractal geometry methods on tectonic fracture patterns of a Hercynian fault zone in NW-Sardinia. Journal of Structural Geology 26: 1499-1510.
Breathing simulator of workers for respirator performance test
YUASA, Hisashi; KUMITA, Mikio; HONDA, Takeshi; KIMURA, Kazushi; NOZAKI, Kosuke; EMI, Hitoshi; OTANI, Yoshio
2014-01-01
Breathing machines are widely used to evaluate respirator performance but they are capable of generating only limited air flow patterns, such as, sine, triangular and square waves. In order to evaluate the respirator performance in practical use, it is desirable to test the respirator using the actual breathing patterns of wearers. However, it has been a difficult task for a breathing machine to generate such complicated flow patterns, since the human respiratory volume changes depending on the human activities and workload. In this study, we have developed an electromechanical breathing simulator and a respiration sampling device to record and reproduce worker’s respiration. It is capable of generating various flow patterns by inputting breathing pattern signals recorded by a computer, as well as the fixed air flow patterns. The device is equipped with a self-control program to compensate the difference in inhalation and exhalation volume and the measurement errors on the breathing flow rate. The system was successfully applied to record the breathing patterns of workers engaging in welding and reproduced the breathing patterns. PMID:25382381
NASA Astrophysics Data System (ADS)
Van Den Broeke, Douglas J.; Laidig, Thomas L.; Chen, J. Fung; Wampler, Kurt E.; Hsu, Stephen D.; Shi, Xuelong; Socha, Robert J.; Dusa, Mircea V.; Corcoran, Noel P.
2004-08-01
Imaging contact and via layers continues to be one of the major challenges to be overcome for 65nm node lithography. Initial results of using ASML MaskTools' CPL Technology to print contact arrays through pitch have demonstrated the potential to further extend contact imaging to a k1 near 0.30. While there are advantages and disadvantages for any potential RET, the benefits of not having to solve the phase assignment problem (which can lead to unresolvable phase conflicts), of being a single-reticle, single-exposure technique, and of its application to multiple layers within a device (clear field and dark field) make CPL an attractive, cost-effective solution to low-k1 imaging. However, real semiconductor circuit designs consist of much more than regular arrays of contact holes, and a method to define the CPL reticle design for a full-chip circuit pattern is required in order for this technique to be feasible in volume manufacturing. Interference Mapping Lithography (IML) is a novel approach for defining optimum reticle patterns based on the imaging conditions that will be used when the wafer is exposed. Figure 1 shows an interference map for an isolated contact simulated using ASML /1150 settings of 0.75NA and 0.92/0.72/30deg Quasar illumination. This technique provides a model-based approach for placing all types of features (scattering bars, anti-scattering bars, non-printing assist features, phase shifted and non-phase shifted) for the purpose of enhancing the resolution of the target pattern, and it can be applied to any reticle type including binary (COG), attenuated phase shifting mask (attPSM), alternating aperture phase shifting mask (altPSM), and CPL. In this work, we investigate the application of IML to generate CPL reticle designs for random contact patterns that are typical for 65nm node logic devices. We examine the critical issues related to using CPL with Interference Mapping Lithography, including controlling side lobe printing, contact patterns with odd symmetry, forbidden pitch regions, and reticle manufacturing constraints. Multiple methods for deriving the interference map used to define reticle patterns for various RETs will be discussed. CPL reticle designs that were created by implementing automated algorithms for contact pattern decomposition using MaskWeaver will also be presented.
Advecting Procedural Textures for 2D Flow Animation
NASA Technical Reports Server (NTRS)
Kao, David; Pang, Alex; Moran, Pat (Technical Monitor)
2001-01-01
This paper proposes the use of specially generated 3D procedural textures for visualizing steady state 2D flow fields. We use the flow field to advect and animate the texture over time. However, using standard texture advection techniques and arbitrary textures will introduce some undesirable effects such as: (a) expanding texture from a critical source point, (b) streaking pattern from the boundary of the flowfield, (c) crowding of advected textures near an attracting spiral or sink, and (d) absent or lack of textures in some regions of the flow. This paper proposes a number of strategies to solve these problems. We demonstrate how the technique works using both synthetic data and computational fluid dynamics data.
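The core operation described above is advecting a texture through a steady flow field. The sketch below shows plain backward (semi-Lagrangian) advection of a simple procedural texture; the circulating flow field, the sinusoidal texture and all parameter values are placeholders, and the paper's corrective strategies for sources, sinks and inflow boundaries are not included.

```python
# Minimal backward texture advection for a steady 2D flow: each output pixel
# traces backwards along the flow and samples a simple procedural texture.
import numpy as np

def velocity(x, y):
    """Steady circulating flow (placeholder field)."""
    return -y, x

def procedural_texture(x, y):
    """Cheap sinusoidal 'noise' texture (placeholder)."""
    return 0.5 + 0.5 * np.sin(12 * x) * np.sin(12 * y)

def advect_texture(n=256, t=0.6, substeps=20):
    xs = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(xs, xs)
    dt = t / substeps
    px, py = X.copy(), Y.copy()
    for _ in range(substeps):          # trace backwards along the flow
        u, v = velocity(px, py)
        px -= dt * u
        py -= dt * v
    return procedural_texture(px, py)  # sample the texture at traced positions

frame = advect_texture()
```

Animating `t` over successive frames produces the moving-texture effect; the artifacts listed in the abstract (stretching near sources, crowding near sinks, empty inflow regions) are exactly what this naive version exhibits.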
El Bilbeisi, Abdel Hamid; Hosseini, Saeed; Djafarian, Kurosh
2017-11-15
The prevalence of diabetes mellitus is rising worldwide. When diabetes is uncontrolled, it has dire consequences for health and well-being. However, the role of diet in the origin of diabetes complications is not well understood. This study identifies major dietary patterns among type 2 diabetes patients and their association with diabetes complications in Gaza Strip, Palestine. This cross-sectional study was conducted among 1200 patients previously diagnosed with type 2 diabetes mellitus (both genders, aged 20-64 years) receiving care in primary healthcare centers in Gaza Strip, Palestine. Dietary patterns were evaluated using a validated semi-quantitative food frequency questionnaire. Additional information regarding demographic and medical history variables was obtained with an interview-based questionnaire. Statistical analysis was performed using SPSS version 20. Two major dietary patterns were identified by factor analysis: an Asian-like pattern and a sweet-soft drinks-snacks pattern. After adjustment for confounding variables, patients in the lowest tertile of the Asian-like pattern, characterized by a high intake of whole grains, potatoes, beans, legumes, vegetables, tomatoes and fruit, had lower odds of high blood pressure, kidney problems, heart problems, extremities problems and neurological problems (OR 0.710, 95% CI 0.506-0.997; OR 0.834, 95% CI 0.700-0.994; OR 0.730, 95% CI 0.596-0.895; OR 0.763, 95% CI 0.667-0.871; and OR 0.773, 95% CI 0.602-0.991, respectively; P value <0.05 for all). No significant association was found between the sweet-soft drinks-snacks pattern and diabetes complications. The Asian-like pattern may be associated with a lower prevalence of diabetes complications among type 2 diabetes patients.
Geostatistics and spatial analysis in biological anthropology.
Relethford, John H
2008-05-01
A variety of methods have been used to make evolutionary inferences based on the spatial distribution of biological data, including reconstructing population history and detection of the geographic pattern of natural selection. This article provides an examination of geostatistical analysis, a method used widely in geology but which has not often been applied in biological anthropology. Geostatistical analysis begins with the examination of a variogram, a plot showing the relationship between a biological distance measure and the geographic distance between data points and which provides information on the extent and pattern of spatial correlation. The results of variogram analysis are used for interpolating values of unknown data points in order to construct a contour map, a process known as kriging. The methods of geostatistical analysis and discussion of potential problems are applied to a large data set of anthropometric measures for 197 populations in Ireland. The geostatistical analysis reveals two major sources of spatial variation. One pattern, seen for overall body and craniofacial size, shows an east-west cline most likely reflecting the combined effects of past population dispersal and settlement. The second pattern is seen for craniofacial height and shows an isolation by distance pattern reflecting rapid spatial changes in the midlands region of Ireland, perhaps attributable to the genetic impact of the Vikings. The correspondence of these results with other analyses of these data and the additional insights generated from variogram analysis and kriging illustrate the potential utility of geostatistical analysis in biological anthropology. (c) 2008 Wiley-Liss, Inc.
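The first step of the geostatistical workflow described above is the empirical variogram. The sketch below computes a binned empirical semivariogram for a trait measured at geographic points; the bin edges, the Euclidean distance and the toy inputs are assumptions, and kriging itself is not shown.

```python
# Minimal empirical semivariogram sketch: bin pairwise geographic distances
# and average half the squared trait differences within each lag bin.
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """coords: (n, 2) geographic coordinates, values: (n,) trait values,
    bin_edges: increasing sequence of lag-distance bin boundaries."""
    diff_geo = coords[:, None, :] - coords[None, :, :]
    lags = np.sqrt((diff_geo ** 2).sum(-1))            # pairwise distances
    semivar = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)             # unique pairs only
    lags, semivar = lags[iu], semivar[iu]
    gamma, counts = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (lags >= lo) & (lags < hi)
        gamma.append(semivar[mask].mean() if mask.any() else np.nan)
        counts.append(int(mask.sum()))
    return np.asarray(gamma), np.asarray(counts)
```

The shape of the resulting gamma-versus-lag curve (nugget, range, sill) is what the article uses to characterize spatial correlation before interpolating unknown locations by kriging.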
Predictive Simulations of Neuromuscular Coordination and Joint-Contact Loading in Human Gait.
Lin, Yi-Chung; Walter, Jonathan P; Pandy, Marcus G
2018-04-18
We implemented direct collocation on a full-body neuromusculoskeletal model to calculate muscle forces, ground reaction forces and knee contact loading simultaneously for one cycle of human gait. A data-tracking collocation problem was solved for walking at the normal speed to establish the practicality of incorporating a 3D model of articular contact and a model of foot-ground interaction explicitly in a dynamic optimization simulation. The data-tracking solution then was used as an initial guess to solve predictive collocation problems, where novel patterns of movement were generated for walking at slow and fast speeds, independent of experimental data. The data-tracking solutions accurately reproduced joint motion, ground forces and knee contact loads measured for two total knee arthroplasty patients walking at their preferred speeds. RMS errors in joint kinematics were < 2.0° for rotations and < 0.3 cm for translations while errors in the model-computed ground-reaction and knee-contact forces were < 0.07 BW and < 0.4 BW, respectively. The predictive solutions were also consistent with joint kinematics, ground forces, knee contact loads and muscle activation patterns measured for slow and fast walking. The results demonstrate the feasibility of performing computationally-efficient, predictive, dynamic optimization simulations of movement using full-body, muscle-actuated models with realistic representations of joint function.
Learning multimodal dictionaries.
Monaci, Gianluca; Jost, Philippe; Vandergheynst, Pierre; Mailhé, Boris; Lesage, Sylvain; Gribonval, Rémi
2007-09-01
Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans are used to integrating, at each instant, perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can also be extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when considering the signals independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm is also proposed for iteratively learning multimodal generating functions that can be shifted to any position in the signal. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences and is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips allows the sound source to be localized effectively in the video in the presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.
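The numerical workhorse mentioned above is a generalized eigenvector problem. The fragment below shows only that linear-algebra step on placeholder matrices; it does not reproduce the paper's multimodal dictionary update, and the matrices A and B are illustrative assumptions.

```python
# Core linear-algebra step only: leading generalized eigenvector of
# A v = lambda B v (placeholder matrices, not the paper's actual update).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))                  # toy feature matrix
A = X.T @ X / len(X)                           # symmetric objective matrix
B = np.eye(8) + 0.1 * np.diag(rng.random(8))   # symmetric positive-definite
w, V = eigh(A, B)                              # generalized eigendecomposition
leading = V[:, -1]                             # eigenvector of the largest eigenvalue
```

In the dictionary-learning setting, A and B would be built from the aligned multimodal training snippets, and the leading eigenvector plays the role of the new generating function.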
Laser illuminator and optical system for disk patterning
Hackel, Lloyd A.; Dane, C. Brent; Dixit, Shamasundar N.; Everett, Mathew; Honig, John
2000-01-01
Magnetic recording media are textured over areas designated for contact in order to minimize friction with data transducing heads. In fabricating a hard disk, an aluminum nickel-phosphorous substrate is polished to a specular finish. A mechanical means is then used to roughen an annular area intended to be the head contact band. An optical and mechanical system allows thousands of spots to be generated with each laser pulse, allowing the textured pattern to be rapidly generated with a low repetition rate laser and an uncomplicated mechanical system. The system uses a low power laser, a beam expander, a specially designed phase plate, a prism to deflect the beam, a lens to transmit the diffraction pattern to the far field, a mechanical means to rotate the pattern and a trigger system to fire the laser when sections of the pattern are precisely aligned. The system generates an annular segment of the desired pattern with which the total pattern is generated by rotating the optical system about its optic axis, sensing the rotational position and firing the laser as the annular segment rotates into the next appropriate position. This marking system can be integrated into a disk sputtering system for manufacturing magnetic disks, allowing for a very streamlined manufacturing process.
Training Spiking Neural Models Using Artificial Bee Colony
Vazquez, Roberto A.; Garro, Beatriz A.
2015-01-01
Spiking neurons are models designed to simulate, in a realistic manner, the behavior of biological neurons. Recently, it has been shown that this type of neuron can be applied to solve pattern recognition problems with great efficiency. However, the lack of learning strategies for training these models prevents their use in many pattern recognition problems. On the other hand, several bio-inspired algorithms have been proposed in recent years for solving a broad range of optimization problems, including those related to the field of artificial neural networks (ANNs). The artificial bee colony (ABC) algorithm is a novel algorithm based on the behavior of bees exploring their environment to find a food source. In this paper, we describe how the ABC algorithm can be used as a learning strategy to train a spiking neuron aiming to solve pattern recognition problems. Finally, the proposed approach is tested on several pattern recognition problems. It is important to note that, to demonstrate the power of this type of model, only one neuron is used. In addition, we analyze how the performance of these models improves with this kind of learning strategy. PMID:25709644
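As a rough illustration of using ABC as a derivative-free trainer for a single neuron, here is a minimal sketch. The neuron is a plain threshold (perceptron-style) unit rather than the spiking model of the paper, and the colony size, limit, and toy data are all illustrative assumptions.

```python
# Minimal artificial bee colony (ABC) loop tuning the weights of a single
# threshold neuron on a toy two-class problem (not the paper's spiking model).
import numpy as np

rng = np.random.default_rng(0)
# toy data: two Gaussian blobs, labels -1 and +1
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

def error(params):
    w, b = params[:2], params[2]
    return np.mean(np.sign(X @ w + b) != y)   # classification error

def abc_optimize(n_sources=20, dim=3, limit=10, iters=100, bound=2.0):
    sources = rng.uniform(-bound, bound, (n_sources, dim))
    costs = np.array([error(s) for s in sources])
    trials = np.zeros(n_sources, dtype=int)

    def try_neighbour(i):
        k = rng.integers(n_sources)
        while k == i:
            k = rng.integers(n_sources)
        j = rng.integers(dim)
        cand = sources[i].copy()
        cand[j] += rng.uniform(-1, 1) * (sources[i, j] - sources[k, j])
        c = error(cand)
        if c < costs[i]:
            sources[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):              # employed bees
            try_neighbour(i)
        fitness = 1.0 / (1.0 + costs)           # onlooker bees
        probs = fitness / fitness.sum()
        for i in rng.choice(n_sources, size=n_sources, p=probs):
            try_neighbour(i)
        for i in range(n_sources):              # scout bees
            if trials[i] > limit:
                sources[i] = rng.uniform(-bound, bound, dim)
                costs[i], trials[i] = error(sources[i]), 0
    best = np.argmin(costs)
    return sources[best], costs[best]

best_params, best_err = abc_optimize()
print("training error:", best_err)
```

The same loop applies to a spiking neuron by replacing `error` with a fitness based on firing-rate or spike-train separation between classes, which is the substitution the paper makes.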
The analysis method of the DRAM cell pattern hotspot
NASA Astrophysics Data System (ADS)
Lee, Kyusun; Lee, Kweonjae; Chang, Jinman; Kim, Taeheon; Han, Daehan; Hong, Aeran; Kim, Yonghyeon; Kang, Jinyoung; Choi, Bumjin; Lee, Joosung; Lee, Jooyoung; Hong, Hyeongsun; Lee, Kyupil; Jin, Gyoyoung
2015-03-01
It is increasingly difficult to determine the degree of completion of the patterning and the distribution of DRAM cell patterns. In studying DRAM device cell patterns, three major problems currently arise. First, because of etch loading, potential defects are difficult to predict. Second, because of under-layer topology, the influence of a hotspot cannot be demonstrated. Finally, it is extremely difficult to predict the final ACI pattern by photo simulation, because the current patterning process uses double patterning technology, which means the photo pattern is completely different from the final etch pattern. Consequently, if a hotspot occurs on a wafer, it is very difficult to find. CD-SEM is the most common pattern measurement tool in semiconductor fabrication sites, but it is primarily used to accurately measure small regions of a wafer pattern, so it cannot find the places where unpredictable defects occur. Although current defect detectors can measure a wide area, they cannot detect critical hotspots when every chip has the same pattern issue: because the defect-detection algorithm of a bright-field machine is based on image processing, the machine cannot identify problems that occur on both the reference chip and the chip being compared. Moreover, such instruments cannot distinguish distribution differences of about 1 nm~3 nm, so a defect detector has difficulty handling data for potential weak points far below the target CD. To solve these problems, another method is needed. In this paper, we introduce an analysis method for DRAM cell pattern hotspots.
A fast process development flow by applying design technology co-optimization
NASA Astrophysics Data System (ADS)
Chen, Yi-Chieh; Yeh, Shin-Shing; Ou, Tsong-Hua; Lin, Hung-Yu; Mai, Yung-Ching; Lin, Lawrence; Lai, Jun-Cheng; Lai, Ya Chieh; Xu, Wei; Hurat, Philippe
2017-03-01
Beyond the 40 nm technology node, pattern weak points and hotspot types increase dramatically. Using typical patterns for lithography verification incurs a huge turn-around time (TAT) to handle the design complexity. Therefore, to speed up process development and increase pattern variety, accurate design guidelines and realistic design combinations are required. This paper presents a flow for creating a cell-based layout, a lite realistic design, to identify early the problematic patterns that will negatively affect yield. A new random layout generation method, the Design Technology Co-Optimization Pattern Generator (DTCO-PG), is reported in this paper to create cell-based designs. DTCO-PG also characterizes randomness and fuzziness, so that it can build a machine learning scheme whose model can be trained on previous results and then generate patterns never seen in a lite design. This methodology not only increases pattern diversity but also finds potential hotspots at a preliminary stage. This paper also demonstrates an integrated flow from DTCO pattern generation to layout modification. Optical proximity correction (OPC) and lithographic simulation are then applied to the DTCO-PG design database to detect hotspots, and hotspots or weak points can then be fixed automatically through the procedure or handled manually. This flow gives process development a faster cycle time, more complex pattern designs, a higher probability of finding potential hotspots at an early stage, and a more holistic yield-ramping operation.
Perceptual support promotes strategy generation: Evidence from equation solving.
Alibali, Martha W; Crooks, Noelle M; McNeil, Nicole M
2017-08-30
Over time, children shift from using less optimal strategies for solving mathematics problems to using better ones. But why do children generate new strategies? We argue that they do so when they begin to encode problems more accurately; therefore, we hypothesized that perceptual support for correct encoding would foster strategy generation. Fourth-grade students solved mathematical equivalence problems (e.g., 3 + 4 + 5 = 3 + __) in a pre-test. They were then randomly assigned to one of three perceptual support conditions or to a Control condition. Participants in all conditions completed three mathematical equivalence problems with feedback about correctness. Participants in the experimental conditions received perceptual support (i.e., highlighting in red ink) for accurately encoding the equal sign, the right side of the equation, or the numbers that could be added to obtain the correct solution. Following this intervention, participants completed a problem-solving post-test. Among participants who solved the problems incorrectly at pre-test, those who received perceptual support for correctly encoding the equal sign were more likely to generate new, correct strategies for solving the problems than were those who received feedback only. Thus, perceptual support for accurate encoding of a key problem feature promoted generation of new, correct strategies. Statement of Contribution What is already known on this subject? With age and experience, children shift to using more effective strategies for solving math problems. Problem encoding also improves with age and experience. What the present study adds? Support for encoding the equal sign led children to generate correct strategies for solving equations. Improvements in problem encoding are one source of new strategies. © 2017 The British Psychological Society.
Patterns for Effectively Documenting Frameworks
NASA Astrophysics Data System (ADS)
Aguiar, Ademar; David, Gabriel
Good design and implementation are necessary but not sufficient pre-requisites for successfully reusing object-oriented frameworks. Although not always recognized, good documentation is crucial for effective framework reuse, and often hard, costly, and tiresome, coming with many issues, especially when we are not aware of the key problems and respective ways of addressing them. Based on existing literature, case studies and lessons learned, the authors have been mining proven solutions to recurrent problems of documenting object-oriented frameworks, and writing them in pattern form, as patterns are a very effective way of communicating expertise and best practices. This paper presents a small set of patterns addressing problems related to the framework documentation itself, here seen as an autonomous and tangible product independent of the process used to create it. The patterns aim at helping non-experts on cost-effectively documenting object-oriented frameworks. In concrete, these patterns provide guidance on choosing the kinds of documents to produce, how to relate them, and which contents to include. Although the focus is more on the documents themselves, rather than on the process and tools to produce them, some guidelines are also presented in the paper to help on applying the patterns to a specific framework.
Solution of the Inverse Problem for Thin Film Patterning by Electrohydrodynamic Forces
NASA Astrophysics Data System (ADS)
Zhou, Chengzhe; Troian, Sandra
2017-11-01
Micro- and nanopatterning techniques for applications ranging from optoelectronics to biofluidics have multiplied in number over the past decade to include adaptations of mature technologies as well as novel lithographic techniques based on periodic spatial modulation of surface stresses. We focus here on one such technique which relies on shape changes in nanofilms responding to a patterned counter-electrode. The interaction of a patterned electric field with the polarization charges at the liquid interface causes a patterned electrostatic pressure counterbalanced by capillary pressure which leads to 3D protrusions whose shape and evolution can be terminated as needed. All studies to date, however, have investigated the evolution of the liquid film in response to a preset counter-electrode pattern. In this talk, we present solution of the inverse problem for the thin film equation governing the electrohydrodynamic response by treating the system as a transient control problem. Optimality conditions are derived and an efficient corresponding solution algorithm is presented. We demonstrate such implementation of film control to achieve periodic, free surface shapes ranging from simple circular cap arrays to more complex square and sawtooth patterns.
Prevalence of alcohol-related problems among the Slavs and Arabs in Belarus: a university survey.
Welcome, Menizibeya O; Razvodovsky, Yury E; Pereverzev, Vladimir A
2011-05-01
Alcohol abuse is a major problem among students in Belarus. Alcohol-related problems might vary among students of different cultural backgrounds. To examine the different patterns in alcohol use and related problems among students of different cultural groups--the Slavs and Arabs, in major Belarusian universities. 1465 university students (1345 Slavs and 120 Arabs) from three major universities in Minsk, Belarus, were administered the Alcohol Use Disorders Identification Test, the Cut, Annoyed, Guilty and Eye questionnaire, and the Michigan Alcohol Screening Test, including other alcohol-related questions. Overall, 91.08% (n = 1225) Slavs and 60.83% (n = 73) Arabs were alcohol users. A total of 16.28% (n = 219) Slavs and 32.50% (n = 39) Arabs were identified as problem drinkers. Different patterns of alcohol use and related problems were characterized for the Slavs and Arabs. The level of alcohol-related problems was higher among the Arabs, compared to the Slavs. Significant differences in the pattern of alcohol use and related problems exist among the students of various cultural groups--the Slavs and Arabs in Minsk, Belarus. This is the first empirical study to investigate the prevalence of alcohol use and related problems among the Arab and Slav students in Belarus.
Summary of 1971 pattern recognition program development
NASA Technical Reports Server (NTRS)
Whitley, S. L.
1972-01-01
Eight areas related to pattern recognition analysis at the Earth Resources Laboratory are discussed: (1) background; (2) Earth Resources Laboratory goals; (3) software problems/limitations; (4) operational problems/limitations; (5) immediate future capabilities; (6) Earth Resources Laboratory data analysis system; (7) general program needs and recommendations; and (8) schedule and milestones.
Drinking Patterns, Drinking Expectancies, and Coping after Spinal Cord Injury.
ERIC Educational Resources Information Center
Heinemann, Allen W.; And Others
1994-01-01
Drinking patterns, alcohol expectancies, and coping strategies were assessed for 121 persons with recent spinal cord injuries during hospitalization, 3 months after surgery, and 12 months after surgery. Although the rate of heavy drinking decreased, preinjury problem drinkers still had the lowest rate of positive reappraisal, problem solving, and…
Patterns of Obesity among Children and Adolescents with Intellectual Disabilities in Taiwan
ERIC Educational Resources Information Center
Lin, Jin-Ding; Yen, Chia-Feng; Li, Chi-Wei; Wu, Jia-Ling
2005-01-01
Background: Obesity and the health problems associated with it have substantial economic consequences for health care systems. Little information is available concerning obesity-related problems among people with intellectual disabilities. The aims of this study were to analyse patterns of obesity among children and adolescents with intellectual…
Adjustment Problems of Sibling and Nonsibling Pairs Referred to a School Mental Health Program
ERIC Educational Resources Information Center
Gallagher, Richard; Cowen, Emory L.
1976-01-01
In this study, siblings who developed school adjustment problems had more similar referral patterns than demographically matched, unrelated referral pairs. This effect was strongest among like sex pairs. Common environmental characteristics leading to similar coping patterns were seen to explain the results. (NG)
Patterns and Correlates of Research Productivity in Population Scientists.
ERIC Educational Resources Information Center
Richards, James M., Jr.
Although a concern with population issues has gone out of fashion, the problems underlying that concern have not disappeared. Solving these problems would be facilitated by increased knowledge produced by scientists working directly on population issues. A study was conducted to explore patterns and correlates of research productivity of members…
Why the leopard got its spots: relating pattern development to ecology in felids
Allen, William L.; Cuthill, Innes C.; Scott-Samuel, Nicholas E.; Baddeley, Roland
2011-01-01
A complete explanation of the diversity of animal colour patterns requires an understanding of both the developmental mechanisms generating them and their adaptive value. However, only two previous studies, which involved computer-generated evolving prey, have attempted to make this link. This study examines variation in the camouflage patterns displayed on the flanks of many felids. After controlling for the effects of shared ancestry using a fully resolved molecular phylogeny, this study shows how phenotypes from plausible felid coat pattern generation mechanisms relate to ecology. We found that likelihood of patterning and pattern attributes, such as complexity and irregularity, were related to felids' habitats, arboreality and nocturnality. Our analysis also indicates that disruptive selection is a likely explanation for the prevalence of melanistic forms in Felidae. Furthermore, we show that there is little phylogenetic signal in the visual appearance of felid patterning, indicating that camouflage adapts to ecology over relatively short time scales. Our method could be applied to any taxon with colour patterns that can reasonably be matched to reaction–diffusion and similar models, where the kinetics of the reaction between two or more initially randomly dispersed morphogens determines the outcome of pattern development. PMID:20961899
Median Hetero-Associative Memories Applied to the Categorization of True-Color Patterns
NASA Astrophysics Data System (ADS)
Vázquez, Roberto A.; Sossa, Humberto
Median associative memories (MED-AMs) are a special type of associative memory based on the median operator. This type of associative model has been applied to the restoration of gray-scale images and provides better performance than other models, such as morphological associative memories, when the patterns are altered with mixed noise. Despite their power, MED-AMs have not been applied to problems involving true-color patterns. In this paper we describe how a median hetero-associative memory (MED-HAM) can be applied to problems that involve true-color patterns. A complete study of the behavior of this associative model in the restoration of true-color images is performed using a benchmark of 14400 images altered by different types of noise. Furthermore, we describe how this model can be applied to an image categorization problem.
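For orientation, the sketch below follows one common formulation of a median hetero-associative memory (memory entries are medians of pairwise differences; recall takes a median over the memory row shifted by the input). It is a hedged reconstruction for integer-valued vectors, not the authors' code; true-color images would be handled as three flattened channels.

```python
# One common formulation of a median hetero-associative memory (MED-HAM),
# sketched for integer-valued pattern vectors (hedged reconstruction).
import numpy as np

def med_ham_train(X, Y):
    """X: (p, n) input patterns, Y: (p, m) associated outputs.
    Memory entry M[i, j] = median over patterns of (y_i - x_j)."""
    diffs = Y[:, :, None] - X[:, None, :]       # shape (p, m, n)
    return np.median(diffs, axis=0)             # shape (m, n)

def med_ham_recall(M, x):
    """Recall: y_i = median over j of (M[i, j] + x_j)."""
    return np.median(M + x[None, :], axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.integers(0, 256, size=(5, 16)).astype(float)
    Y = rng.integers(0, 256, size=(5, 8)).astype(float)
    M = med_ham_train(X, Y)
    noisy = X[0] + rng.integers(-10, 11, size=16)   # mixed-noise-like corruption
    print(np.round(med_ham_recall(M, noisy)))
```

The median in both the training and recall steps is what gives the model its robustness to mixed (salt-and-pepper plus additive) noise.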
Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Engelmann, Christian
Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexities. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important constraints and opportunities for solutions deployed at various layers of the system stack. The framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also enables optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.
NASA Astrophysics Data System (ADS)
Eidenberger, Horst
2003-12-01
This paper describes how the web standards Synchronized Multimedia Integration Language (SMIL) and Scalable Vector Graphics (SVG) are used in teaching at the Vienna University of Technology. SMIL and SVG are used in courses on multimedia authoring. Didactically, the goal is to teach students how to use media objects and timing concepts to build interactive media applications. Additionally, SMIL is applied to generate multimedia content from a database using a content management system. The paper gives background information on the SMIL and SVG standards and sketches how teaching multimedia is organized at the Vienna University of Technology. Courses from the summer term 2003 are described and illustrated in two case studies. General design problems of SMIL-based presentations are modelled as patterns. Additionally, suggestions for improvement in the standards are given and shortcomings of existing user agents are summarized. Our conclusion is that SMIL and SVG are very well suited for teaching multimedia. Currently, the main problem is that all existing SMIL players lack some properties desired for teaching applications (stability, correctness, etc.).
Close coupling of pre- and post-processing vision stations using inexact algorithms
NASA Astrophysics Data System (ADS)
Shih, Chi-Hsien V.; Sherkat, Nasser; Thomas, Peter D.
1996-02-01
Work has been reported on using lasers to cut deformable materials. Although the use of a laser reduces material deformation, distortion due to mechanical feed misalignment persists. Changes in the lace pattern are also caused by the release of tension in the lace structure as it is cut. To tackle the problem of distortion due to material flexibility, the 2VMethod, together with the Piecewise Error Compensation Algorithm incorporating inexact algorithms (fuzzy logic, neural networks and the neural fuzzy technique), is developed. A spring-mounted pen is used to emulate the distortion of the lace pattern caused by tactile cutting and feed misalignment. Using pre- and post-processing vision systems, it is possible to monitor the scalloping process and generate on-line information for the artificial intelligence engines. This overcomes the problems of lace distortion due to the trimming process. Applying the algorithms developed, the system produces excellent results, much better than a human operator.
2013-01-01
Background It is well known that children of parents with mental illness are at greater risk of mental illness themselves. However, the patterns of familial mental health problems across multiple generations in families are less clear. This study aimed to examine mental health relationships across three generations of Australian families. Methods Mental health data, along with a range of family demographic information, were collected from over 4600 families in Growing Up in Australia: The Longitudinal Study of Australian Children, a nationally representative cohort study. The social and emotional wellbeing of two cohorts of children aged 4–5 years and 8–9 years was measured using the parent-rated Strengths and Difficulties Questionnaire (SDQ). The mental health of mothers and fathers was measured using the Kessler 6-item K6 scale, and the mental health history of maternal and paternal grandmothers and grandfathers was measured using a dichotomous parent-report item. Multivariate linear regression analyses were used to assess the relationships between grandparent and parent mental health and child social and emotional wellbeing at ages 4–5 years and 8–9 years. Results Both cohorts of children had greater mental health distress with higher SDQ scores on average if their mother or father had a mental health problem. For children aged 8–9 years, a history of mental health problems in maternal grandmothers and grandfathers was associated with higher SDQ scores in grandchildren, after controlling for maternal and paternal mental health and other family characteristics. For children aged 4–5 years, only a mental health history in paternal grandfathers was associated with higher SDQ scores. Conclusions The mental health histories of both parents and grandparents play an important role in the social and emotional wellbeing of young children. PMID:24206921
NASA Technical Reports Server (NTRS)
King, J. C.
1976-01-01
The generation of satellite coverage patterns is facilitated by three basic strategies: use of a simplified physical model, permitting rapid closed-form calculation; separation of earth rotation and nodal precession from initial geometric analyses; and use of symmetries to construct traces of indefinite length by repetitive transposition of basic one-quadrant elements. The complete coverage patterns generated consist of a basic nadir trace plus a number of associated off-nadir traces, one for each sensor swath edge to be delineated. Each trace is generated by transposing one or two of the basic quadrant elements into a circle on a nonrotating earth model sphere, after which the circle is expanded into the actual 'helical' pattern by adding rotational displacements to the longitude coordinates. The procedure adapts to the important periodic coverage cases by direct insertion of the characteristic integers N and R (days and orbital revolutions, respectively, per coverage period).
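A compact sketch of the basic idea in the abstract above follows: trace the nadir point of a circular orbit on a non-rotating sphere, then add the Earth-rotation displacement to the longitudes to obtain the actual helical ground track. A circular orbit, a spherical Earth and the stated constants are simplifying assumptions, and nodal precession is ignored.

```python
# Sketch of nadir ground-track generation: build the trace on a non-rotating
# sphere, then subtract Earth rotation from the longitudes.
import numpy as np

MU = 398600.4418          # km^3/s^2, Earth's gravitational parameter
OMEGA_E = 7.2921159e-5    # rad/s, Earth rotation rate

def nadir_trace(altitude_km, inclination_deg, n_orbits=1, n_points=2000):
    a = 6378.137 + altitude_km                 # orbit radius (km), spherical Earth
    n = np.sqrt(MU / a**3)                     # mean motion (rad/s)
    t = np.linspace(0.0, n_orbits * 2 * np.pi / n, n_points)
    u = n * t                                  # argument of latitude
    i = np.radians(inclination_deg)
    lat = np.degrees(np.arcsin(np.sin(i) * np.sin(u)))
    lon_inertial = np.arctan2(np.cos(i) * np.sin(u), np.cos(u))
    lon = np.degrees(lon_inertial) - np.degrees(OMEGA_E * t)  # add Earth spin
    lon = (lon + 180.0) % 360.0 - 180.0        # wrap to [-180, 180)
    return lat, lon

lat, lon = nadir_trace(altitude_km=705, inclination_deg=98.2, n_orbits=3)
```

Off-nadir swath-edge traces can be generated the same way by offsetting the sub-satellite point perpendicular to the ground track before adding the rotational term.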
Bøe, Tormod; Skogen, Jens Christoffer; Sivertsen, Børge; Hysing, Mari; Petrie, Keith J; Dearing, Eric; Zachrisson, Henrik Daae
2017-01-01
Objective The aim of the current paper was to investigate the association between the patterns of duration, timing and sequencing of exposure to low family income during childhood, and symptoms of mental health problems in adolescence. Setting Survey administered to a large population-based sample of Norwegian adolescents. Participants Survey data from 9154 participants aged 16–19 years (53% participation rate; 52.7% girls) were linked to registry-based information about childhood family income from tax return data. Outcome measures Mental health outcomes were symptoms of emotional, conduct, hyperactivity, peer problems and general mental health problems measured with the Strengths and Difficulties Questionnaire, symptoms of depression measured with the Short Mood and Feelings Questionnaire and symptoms of attention-deficit/hyperactivity disorder (ADHD) measured with the Adult ADHD Self-Report Scale. Results Latent class analysis and the BCH approach in Mplus were used to examine associations between patterns of poverty exposure and mental health outcomes. Four latent classes of poverty exposure emerged from the analysis. Participants moving into poverty (2.3%), out of poverty (3.5%) or those chronically poor (3.1%) had more symptoms of mental health problems (Cohen’s d = .16–.50) than those with no poverty exposure (91.1%). This pattern was, however, not found for symptoms of ADHD. The pattern of results was confirmed in robustness checks using observed data. Conclusions Exposure to poverty in childhood was found to be associated with most mental health problems in adolescence. There was no strong suggestion of any timing or sequencing effects in the patterns of associations. PMID:28928191
Knibbe, Ronald Arnold; Joosten, Jan; Choquet, Marie; Derickx, Mieke; Morin, Delphine; Monshouwer, Karin
2007-02-01
Our main goal was to establish whether French and Dutch adolescents differ in rates of substance-related adverse events (e.g. fights, robbery), problems with peers or socializing agents even when controlling for pattern of substance use. For problems with peers and socializing agents due to alcohol we hypothesized that, because of stronger informal control of drinking in France, French adolescents are more likely to report problems with peers and socializing agents. For adverse events due to alcohol no difference was expected after controlling for consumption patterns. For drug-related problems, the hypothesis was that, due to the more restrictive drug policy in France, French adolescents are more likely to report problems with peers, socializing agents and adverse events. Comparable surveys based on samples of adolescent schoolchildren in France (n=9646) and the Netherlands (n=4291) were used. Data were analysed using multilevel logistic regression in which school, age and gender, indicators of substance use and country were used as predictors of substance-related problems. The outcomes show that French adolescents are more likely to report problems with peers and socializing agents due to alcohol even when consumption pattern is controlled for. For adverse events due to alcohol no difference was found between French and Dutch adolescents. For drug-related problems the expected differences were found; i.e. French adolescents are more likely to report problems with peers, socializing agents and adverse events even when controlling for pattern of drug use. It is concluded that there are culturally embedded differences in the rates of some types of problems due to alcohol or drug use. With respect to alcohol use, these differences are most likely due to culturally embedded differences in the informal social control of alcohol use. The differences in rates of drug-related problems are interpreted in the context of national differences in drug policy.
The terminal area automated path generation problem
NASA Technical Reports Server (NTRS)
Hsin, C.-C.
1977-01-01
The automated terminal area path generation problem in the advanced Air Traffic Control System (ATC) has been studied. Definitions, input, output and the interrelationships with other ATC functions have been discussed. Alternatives in modeling the problem have been identified. Problem formulations and solution techniques are presented. In particular, the solution of a minimum effort path stretching problem (path generation on a given schedule) has been carried out using the Newton-Raphson trajectory optimization method. Discussions are presented on the effects of different delivery times, aircraft entry positions, initial guesses on the boundary conditions, etc. Recommendations are made on real-world implementations.
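The report's formulation is not given here, but the Newton-Raphson idea behind path stretching on a schedule can be illustrated with a toy scalar problem: adjust a single stretch parameter until a hypothetical arrival-time model matches the scheduled delivery time. The trajectory model, speeds and distances below are invented purely for illustration.

```python
def solve_delivery_time(arrival_time, t_target, s0=0.0, tol=1e-6, max_iter=50, h=1e-6):
    """Newton-Raphson on a single path-stretch parameter s so that the predicted
    arrival time matches the scheduled delivery time; arrival_time(s) is a
    problem-specific trajectory model supplied by the caller."""
    s = s0
    for _ in range(max_iter):
        f = arrival_time(s) - t_target
        if abs(f) < tol:
            return s
        dfds = (arrival_time(s + h) - arrival_time(s - h)) / (2.0 * h)  # numerical derivative
        s -= f / dfds
    return s

# Toy model: a trombone-style stretch adds 2*s extra track length flown at speed v.
v, base_dist = 120.0, 30.0                                    # illustrative units
arrival = lambda s: (base_dist + 2.0 * s) / v * 3600.0        # seconds
print(solve_delivery_time(arrival, t_target=1000.0))          # stretch needed for a 1000 s arrival
```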
Changing clothes easily: connexin41.8 regulates skin pattern variation.
Watanabe, Masakatsu; Kondo, Shigeru
2012-05-01
The skin patterns of animals are very important for their survival, yet the mechanisms involved in skin pattern formation remain unresolved. Turing's reaction-diffusion model presents a well-known mathematical explanation of how animal skin patterns are formed, and this model can predict various animal patterns that are observed in nature. In this study, we used transgenic zebrafish to generate various artificial skin patterns including a narrow stripe with a wide interstripe, a narrow stripe with a narrow interstripe, a labyrinth, and a 'leopard' pattern (or donut-like ring pattern). In this process, connexin41.8 (or its mutant form) was ectopically expressed using the mitfa promoter. Specifically, the leopard pattern was generated as predicted by Turing's model. Our results demonstrate that the pigment cells in animal skin have the potential and plasticity to establish various patterns and that the reaction-diffusion principle can predict skin patterns of animals. © 2012 John Wiley & Sons A/S.
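Turing-type patterns of the kind discussed above can be reproduced with a standard two-species reaction-diffusion simulation. The sketch below uses the Gray-Scott model with textbook parameters; this is an assumption for illustration, not the model or parameter set used in the zebrafish study, and different F and k values give spots, stripes or labyrinths.

```python
import numpy as np

def gray_scott(n=128, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.065, seed=0):
    """Gray-Scott reaction-diffusion on a periodic n x n grid (dx = dt = 1)."""
    rng = np.random.default_rng(seed)
    U = np.ones((n, n))
    V = np.zeros((n, n))
    m = slice(n // 2 - 5, n // 2 + 5)
    U[m, m], V[m, m] = 0.50, 0.25                  # small central perturbation
    V += 0.01 * rng.random((n, n))
    lap = lambda Z: (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                     np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)
    for _ in range(steps):
        uvv = U * V * V
        U += Du * lap(U) - uvv + F * (1.0 - U)
        V += Dv * lap(V) + uvv - (F + k) * V
    return V   # plot V: spots, stripes or labyrinths depending on F and k
```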
Two-Dimensional Grammars And Their Applications To Artificial Intelligence
NASA Astrophysics Data System (ADS)
Lee, Edward T.
1987-05-01
During the past several years, the concepts and techniques of two-dimensional grammars [1,2] have attracted growing attention as promising avenues of approach to problems in picture generation as well as in picture description [3], representation, recognition, transformation and manipulation. Two-dimensional grammar techniques serve the purpose of exploiting the structure or underlying relationships in a picture. This approach attempts to describe a complex picture in terms of its components and their relative positions. This resembles the way a sentence is described in terms of its words and phrases, and the terms structural picture recognition, linguistic picture recognition, or syntactic picture recognition are often used. By using this approach, the problem of picture recognition becomes similar to that of phrase recognition in a language. However, when describing pictures using a string grammar (a one-dimensional grammar), the only relation between sub-pictures and/or primitives is concatenation; that is, each picture or primitive can be connected only at the left or right. This one-dimensional relation has not been very effective in describing two-dimensional pictures. A natural generalization is to use two-dimensional grammars. In this paper, two-dimensional grammars and their applications to artificial intelligence are presented. Picture grammars and two-dimensional grammars are introduced and illustrated by examples. In particular, two-dimensional grammars for generating all possible squares and all possible rhombuses are presented. The applications of two-dimensional grammars to solving region filling problems are discussed. An algorithm for region filling using two-dimensional grammars is presented together with illustrative examples. The advantages of using this algorithm in terms of computation time are also stated. A high-level description of a two-level picture generation system is proposed. The first level is the picture primitive generation using two-dimensional grammars. The second level is picture generation using either string description or entity-relationship (ER) diagram description. Illustrative examples are also given. The advantages of ER diagram description together with its comparison to string description are also presented. The results obtained in this paper may have useful applications in artificial intelligence, robotics, expert systems, picture processing, pattern recognition, knowledge engineering and pictorial database design. Furthermore, examples related to satellite surveillance and identifications are also included.
ERIC Educational Resources Information Center
Blackburn, J. Joey; Robinson, J. Shane
2016-01-01
The purpose of this experimental study was to assess the effects of cognitive style, problem complexity, and hypothesis generation on the problem solving ability of school-based agricultural education students. Problem solving ability was defined as time to solution. Kirton's Adaption-Innovation Inventory was employed to assess students' cognitive…
Quality issues in blue noise halftoning
NASA Astrophysics Data System (ADS)
Yu, Qing; Parker, Kevin J.
1998-01-01
The blue noise mask (BNM) is a halftone screen that produces unstructured visually pleasing dot patterns. The BNM combines the blue-noise characteristics of error diffusion and the simplicity of ordered dither. A BNM is constructed by designing a set of interdependent binary patterns for individual gray levels. In this paper, we investigate the quality issues in blue-noise binary pattern design and mask generation as well as in application to color reproduction. Using a global filtering technique and a local 'force' process for rearranging black and white pixels, we are able to generate a series of binary patterns, all representing a certain gray level, ranging from a white-noise pattern to a highly structured pattern. The quality of these individual patterns is studied in terms of low-frequency structure and graininess. Typically, the low-frequency structure (LF) is identified with a measurement of the energy around dc in the spatial frequency domain, while the graininess is quantified by a measurement of the average minimum distance (AMD) between minority dots as well as the kurtosis of the local kurtosis distribution (KLK) for minority pixels of the binary pattern. A set of partial BNMs is generated by using the different patterns as unique starting 'seeds.' In this way, we are able to study the quality of binary patterns over a range of gray levels. We observe that the optimality of a binary pattern for mask generation is related to its own quality metric values as well as the transition smoothness of those quality metric values over neighboring levels. Several schemes have been developed to apply blue-noise halftoning to color reproduction. Different schemes generate halftone patterns with different textures. In a previous paper, a human visual system (HVS) model was used to study the color halftone quality in terms of luminance and chrominance error in CIELAB color space. In this paper, a new series of psycho-visual experiments address the 'preferred' color rendering among four different blue noise halftoning schemes. The experimental results will be interpreted with respect to the proposed halftone quality metrics.
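Two of the quality measurements mentioned above can be sketched directly: spectral energy near dc as a low-frequency-structure measure, and the average minimum distance between minority pixels as a graininess measure. This is a simplified reading of those metrics; the dc radius, the normalisation and the kurtosis-based KLK measure are assumptions or omissions.

```python
import numpy as np

def low_frequency_energy(pattern, radius=8):
    """Fraction of spectral energy within `radius` of dc (dc itself removed by mean subtraction)."""
    F = np.fft.fftshift(np.fft.fft2(pattern - pattern.mean()))
    P = np.abs(F) ** 2
    cy, cx = np.array(P.shape) // 2
    y, x = np.ogrid[:P.shape[0], :P.shape[1]]
    near_dc = (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2
    return float(P[near_dc].sum() / P.sum())

def average_minimum_distance(pattern):
    """Mean nearest-neighbour distance between minority pixels (O(N^2); fine for small tiles)."""
    minority_value = 1 if pattern.mean() < 0.5 else 0
    pts = np.argwhere(pattern == minority_value).astype(float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return float(d.min(axis=1).mean())
```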
On the unity of children’s phonological error patterns: Distinguishing symptoms from the problem
Dinnsen, Daniel A.
2012-01-01
This article compares the claims of rule- and constraint-based accounts of three seemingly distinct error patterns, namely, Deaffrication, Consonant Harmony and Assibilation, in the sound system of a child with a phonological delay. It is argued that these error patterns are not separate problems, but rather are symptoms of a larger conspiracy to avoid word-initial coronal stops. The clinical implications of these findings are also considered. PMID:21787147
Jia, Peng; Xierali, Imam M
2015-09-17
Congestive heart failure (CHF) is a major public health problem in the United States and is a leading cause of hospitalization in the elderly population. Understanding the health care travel patterns of CHF patients and their underlying cause is important to balance the supply and demand for local hospital resources. This article explores the nonclinical factors that prompt CHF patients to seek distant instead of local hospitalization. Local hospitalization was defined as inpatients staying within hospital service areas, and distant hospitalization was defined as inpatients traveling outside hospital service areas, based on individual hospital discharge data in 2011 generated by a Dartmouth-Swiss hybrid approach. Multiple logistic and linear regression models were used to compare the travel patterns of different groups of inpatients in Florida. Black patients, no-charge patients, patients living in large metropolitan areas, and patients with a low socioeconomic status were more likely to seek local hospitalization than were white patients, those who were privately insured, those who lived in rural areas, and those with a high socioeconomic status, respectively. Findings indicate that different populations diagnosed with CHF had different travel patterns for hospitalization. Changes or disruptions in local hospital supply could differentially affect different groups in a population. Policy makers could target efforts to CHF patients who are less likely to travel to seek treatment.
Cogan, N G; Wolgemuth, C W
2011-01-01
The behavior of collections of oceanic bacteria is controlled by metabolic (chemotaxis) and physical (fluid motion) processes. Some sulfur-oxidizing bacteria, such as Thiovulum majus, unite these two processes via a material interface produced by the bacteria and upon which the bacteria are transiently attached. This interface, termed a bacterial veil, is formed by exo-polymeric substances (EPS) produced by the bacteria. By adhering to the veil while continuing to rotate their flagella, the bacteria are able to exert force on the fluid surroundings. This behavior induces a fluid flow that, in turn, causes the bacteria to aggregate leading to the formation of a physical pattern in the veil. These striking patterns are very similar in flavor to the classic convection instability observed when a shallow fluid is heated from below. However, the physics are very different since the flow around the veil is mediated by the bacteria and affects the bacterial densities. In this study, we extend a model of a one-dimensional veil in a two-dimensional fluid to the more realistic two-dimensional veil in a three-dimensional fluid. The linear stability analysis indicates that the Peclet number serves as a bifurcation parameter, which is consistent with experimental observations. We also solve the nonlinear problem numerically and are able to obtain patterns that are similar to those observed in the experiments.
NASA Astrophysics Data System (ADS)
Pournoury, M.; Zamiri, A.; Kim, T. Y.; Yurlov, V.; Oh, K.
2016-03-01
Capacitive touch sensor screens with metal materials have recently become qualified as substitutes for ITO; however, several obstacles still have to be overcome. One of the most important issues is the moiré phenomenon. The visibility problem of the metal mesh in the touch sensor module (TSM) is numerically considered in this paper. Based on the human eye contrast sensitivity function (CSF), the moiré pattern of the TSM electrode mesh structure is simulated with MATLAB software for an 8-inch screen display in oblique view. The standard deviation of the moiré generated by the superposition of the electrode mesh and the screen image is calculated to find the optimal parameters which provide the minimum moiré visibility. To create the screen pixel array and mesh electrode, a rectangular function is used. The filtered image, in the frequency domain, is obtained by multiplying the Fourier transform of the finite mesh pattern (the product of the screen pixel array and the mesh electrode) with the calculated CSF for three different observer distances (L=200, 300 and 400 mm). It is observed that the discrepancy between analytical and numerical results is less than 0.6% for a 400 mm viewer distance. Moreover, in the oblique view, because the thickness of the finite film between the mesh electrodes and the screen is considered, different points of minimum moiré standard deviation are predicted compared to the normal view.
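A stripped-down version of this kind of calculation is sketched below: two crossed square-wave gratings stand in for the screen pixels and the electrode mesh, their product is filtered in the frequency domain by a contrast sensitivity function, and the standard deviation of the filtered image serves as the visibility score. The Mannos-Sakrison CSF, the normal-view geometry and all periods and dimensions are assumptions, not the paper's MATLAB model.

```python
import numpy as np

def crossed_grating(n, period, duty=0.5):
    line = ((np.arange(n) % period) < duty * period).astype(float)
    return np.outer(line, line)                   # simple square-wave mesh / pixel grid

def moire_visibility(n=1024, screen_period=12, mesh_period=14,
                     viewer_mm=400.0, pixel_mm=0.05):
    """Superpose screen grid and electrode mesh, weight the spectrum by a CSF,
    and report the standard deviation of the filtered image."""
    img = crossed_grating(n, screen_period) * crossed_grating(n, mesh_period)
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    f_mm = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_mm))          # cycles per mm
    FX, FY = np.meshgrid(f_mm, f_mm)
    f_cpd = np.hypot(FX, FY) * viewer_mm * np.pi / 180.0           # cycles per degree
    csf = 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)  # Mannos-Sakrison form
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * csf)))
    return float(filtered.std())

for L in (200.0, 300.0, 400.0):
    print(L, moire_visibility(viewer_mm=L))
```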
Xierali, Imam M.
2015-01-01
Introduction Congestive heart failure (CHF) is a major public health problem in the United States and is a leading cause of hospitalization in the elderly population. Understanding the health care travel patterns of CHF patients and their underlying cause is important to balance the supply and demand for local hospital resources. This article explores the nonclinical factors that prompt CHF patients to seek distant instead of local hospitalization. Methods Local hospitalization was defined as inpatients staying within hospital service areas, and distant hospitalization was defined as inpatients traveling outside hospital service areas, based on individual hospital discharge data in 2011 generated by a Dartmouth–Swiss hybrid approach. Multiple logistic and linear regression models were used to compare the travel patterns of different groups of inpatients in Florida. Results Black patients, no-charge patients, patients living in large metropolitan areas, and patients with a low socioeconomic status were more likely to seek local hospitalization than were white patients, those who were privately insured, those who lived in rural areas, and those with a high socioeconomic status, respectively. Conclusion Findings indicate that different populations diagnosed with CHF had different travel patterns for hospitalization. Changes or disruptions in local hospital supply could differentially affect different groups in a population. Policy makers could target efforts to CHF patients who are less likely to travel to seek treatment. PMID:26378896
Hematopoietic transcriptional mechanisms: from locus-specific to genome-wide vantage points.
DeVilbiss, Andrew W; Sanalkumar, Rajendran; Johnson, Kirby D; Keles, Sunduz; Bresnick, Emery H
2014-08-01
Hematopoiesis is an exquisitely regulated process in which stem cells in the developing embryo and the adult generate progenitor cells that give rise to all blood lineages. Master regulatory transcription factors control hematopoiesis by integrating signals from the microenvironment and dynamically establishing and maintaining genetic networks. One of the most rudimentary aspects of cell type-specific transcription factor function, how they occupy a highly restricted cohort of cis-elements in chromatin, remains poorly understood. Transformative technologic advances involving the coupling of next-generation DNA sequencing technology with the chromatin immunoprecipitation assay (ChIP-seq) have enabled genome-wide mapping of factor occupancy patterns. However, formidable problems remain; notably, ChIP-seq analysis yields hundreds to thousands of chromatin sites occupied by a given transcription factor, and only a fraction of the sites appear to be endowed with critical, non-redundant function. It has become en vogue to map transcription factor occupancy patterns genome-wide, while using powerful statistical tools to establish correlations to inform biology and mechanisms. With the advent of revolutionary genome editing technologies, one can now reach beyond correlations to conduct definitive hypothesis testing. This review focuses on key discoveries that have emerged during the path from single loci to genome-wide analyses, specifically in the context of hematopoietic transcriptional mechanisms. Copyright © 2014 ISEH - International Society for Experimental Hematology. Published by Elsevier Inc. All rights reserved.
Adaptive Tracking Control for Robots With an Interneural Computing Scheme.
Tsai, Feng-Sheng; Hsu, Sheng-Yi; Shih, Mau-Hsiang
2018-04-01
Adaptive tracking control of mobile robots requires the ability to follow a trajectory generated by a moving target. The conventional analysis of adaptive tracking uses energy minimization to study the convergence and robustness of the tracking error when the mobile robot follows a desired trajectory. However, in the case that the moving target generates trajectories with uncertainties, a common Lyapunov-like function for energy minimization may be extremely difficult to determine. Here, to solve the adaptive tracking problem with uncertainties, we wish to implement an interneural computing scheme in the design of a mobile robot for behavior-based navigation. The behavior-based navigation adopts an adaptive plan of behavior patterns learning from the uncertainties of the environment. The characteristic feature of the interneural computing scheme is the use of neural path pruning with rewards and punishment interacting with the environment. On this basis, the mobile robot can be exploited to change its coupling weights in paths of neural connections systematically, which can then inhibit or enhance the effect of flow elimination in the dynamics of the evolutionary neural network. Such dynamical flow translation ultimately leads to robust sensory-to-motor transformations adapting to the uncertainties of the environment. A simulation result shows that the mobile robot with the interneural computing scheme can perform fault-tolerant behavior of tracking by maintaining suitable behavior patterns at high frequency levels.
Mask pattern generator employing EPL technology
NASA Astrophysics Data System (ADS)
Yoshioka, Nobuyuki; Yamabe, Masaki; Wakamiya, Wataru; Endo, Nobuhiro
2003-08-01
Mask cost is one of the crucial issues in device fabrication, especially for SoC (System on a Chip) devices with small-volume production. The cost mainly depends on the productivity of mask manufacturing tools such as mask writers and defect inspection tools. EPL (Electron Projection Lithography) has been developed as a high-throughput electron beam exposure technology that will succeed optical lithography. The application of EPL technology to mask writing will result in high productivity and contribute to decreasing mask cost. The concept of a mask pattern generator employing EPL technology is proposed in this paper. It is very similar to the EPL technology used for pattern printing on a wafer. The mask patterns on the glass substrate are exposed by projecting the basic circuit patterns formed on the mother EPL mask. One example of the mother EPL mask is a stencil type made from a 200-mm Si wafer. The basic circuit patterns are IP patterns and logical primitive patterns such as cell libraries (AND, OR, inverter, flip-flop, etc.) used to express the SoC device patterns. Since the SoC patterns are exposed in collective units such as IP and logical primitive patterns by this method, high throughput is expected compared with conventional mask e-beam writers. In this paper, the mask pattern generator based on EPL technology is proposed, and its concept, advantages and the issues to be solved are discussed.
Automating the generation of lexical patterns for processing free text in clinical documents.
Meng, Frank; Morioka, Craig
2015-09-01
Many tasks in natural language processing utilize lexical pattern-matching techniques, including information extraction (IE), negation identification, and syntactic parsing. However, it is generally difficult to derive patterns that achieve acceptable levels of recall while also remaining highly precise. We present a multiple sequence alignment (MSA)-based technique that automatically generates patterns, thereby leveraging language usage to determine the context of words that influence a given target. MSAs capture the commonalities among word sequences and are able to reveal areas of linguistic stability and variation. In this way, MSAs provide a systemic approach to generating lexical patterns that are generalizable, which will both increase recall levels and maintain high levels of precision. The MSA-generated patterns exhibited consistent F1, F0.5, and F2 scores compared to two baseline techniques for IE across four different tasks. Both baseline techniques performed well for some tasks and less well for others, but MSA was found to consistently perform at a high level for all four tasks. The performance of MSA on the four extraction tasks indicates the method's versatility. The results show that the MSA-based patterns are able to handle the extraction of individual data elements as well as relations between two concepts without the need for large amounts of manual intervention. We presented an MSA-based framework for generating lexical patterns that showed consistently high levels of both performance and recall over four different extraction tasks when compared to baseline methods. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
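A heavily simplified flavour of alignment-based pattern induction can be shown with pairwise alignment only; the paper uses true multiple sequence alignment, so the greedy merging, wildcard token and clinical-style example sentences below are illustrative assumptions.

```python
from difflib import SequenceMatcher

def lexical_pattern(sentences):
    """Collapse token sequences into one pattern: tokens shared by all sequences
    are kept, variable regions are replaced by a wildcard slot '<*>'."""
    tokens = [s.lower().split() for s in sentences]
    pattern = tokens[0]
    for other in tokens[1:]:
        sm = SequenceMatcher(a=pattern, b=other, autojunk=False)
        merged = []
        for op, i1, i2, j1, j2 in sm.get_opcodes():
            if op == 'equal':
                merged.extend(pattern[i1:i2])
            elif not merged or merged[-1] != '<*>':
                merged.append('<*>')
        pattern = merged
    return ' '.join(pattern)

print(lexical_pattern([
    "no evidence of pulmonary embolism",
    "there is no evidence of acute pulmonary embolism",
]))
# -> "<*> no evidence of <*> pulmonary embolism"
```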
Otoniel, Buenrostro Delgado; Liliana, Márquez-Benavides; Francelia, Pinette Gaona
2008-01-01
Mexico is currently facing a crisis in the waste management field. Some efforts have just commenced in urban and rural settlements, e.g., conversion of open dumps into landfills, a relatively small composting culture, and implementation of source separation and plastic recycling strategies. Nonetheless, the high heterogeneity of components in the waste, many of them with hazardous properties, presents the municipal collection services with serious problems, owing to the risks to the health of the workers and the impacts on the environment resulting from the inadequate disposal of these wastes. A generation study in the domestic sector was undertaken with the aim of finding out the composition and the generation rate of household hazardous waste (HHW) produced at residences. In parallel with the generation study, a socioeconomic survey was administered to determine the influence of income level on the production of HHW. Results from the solid waste generation analysis indicated that approximately 1.6% of the waste stream consists of HHW. Correspondingly, it was estimated that in Morelia a total of 442 tons/day of domestic waste is produced, including 7.1 tons of HHW per day. Furthermore, the overall amount of HHW is not directly related to income level, although particular byproducts do correlate. However, an important difference was observed, as the brands and the presentation sizes of goods and products used in each socioeconomic stratum varied.
Fabrication High Resolution Metrology Target By Step And Repeat Method
NASA Astrophysics Data System (ADS)
Dusa, Mircea
1983-10-01
Based on the photolithography process generally used to generate high-resolution masks for semiconductor ICs, we found a very useful industrial application of laser technology. First, we have generated high-resolution metrology targets which are used in industrial measurement laser interferometers as diffraction gratings. Second, we have generated these targets using a step-and-repeat machine with a He-Ne laser interferometer-controlled stage as a pattern generator, through suitable computer programming. A high-resolution metrology target consists of two chromium plates, one of which is called the "rule" and the other the "vernier". Fig. 1 shows the configuration of the rule and the vernier. The rule has a succession of 3 μm lines generated as a diffraction grating on a 4 x 4 inch chromium blank. The vernier has several exposed fields (areas) with 3-15 μm lines, placed at very precise positions on the chromium blank surface. The high degree of uniformity, tight CD tolerances and low defect density required by the targets create special problems during processing. Details of the processing, together with experimental results, are presented. Before entering into process details, we point out that the dimensional requirements of the reticle target are quite similar to, or perhaps stricter than, those of LSI master masks. These requirements are presented in Fig. 2.
AN ASSESSMENT OF MCNP WEIGHT WINDOWS
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. S. HENDRICKS; C. N. CULBERTSON
2000-01-01
The weight window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP™ has recently been rewritten. In particular, it is now possible to generate weight window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight windows as well as the former MCNP4B treatment? (2) Does the new MCNP4C weight window generator generate importance functions as well as MCNP4B? (3) How do superimposed mesh weight windows compare to cell-based weight windows? (4) What are the shortcomings of the new MCNP4C weight window generator? Our assessment was carried out with five neutron and photon shielding problems chosen for their demanding variance reduction requirements. The problems were an oil well logging problem, the Oak Ridge fusion shielding benchmark problem, a photon skyshine problem, an air-over-ground problem, and a sample problem for variance reduction.
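For readers unfamiliar with weight windows, the core splitting/roulette logic can be sketched as follows. This is a simplified illustration, not the MCNP implementation; the survival weight, split cap and window bounds are assumptions.

```python
import random

def apply_weight_window(weight, w_low, w_high, w_survive=None, rng=random.random):
    """Return a list of (count, weight) particle groups after the window check:
    above the window the particle is split, below it plays Russian roulette,
    inside it is left unchanged."""
    if w_survive is None:
        w_survive = 0.5 * (w_low + w_high)        # assumed survival weight
    if weight > w_high:
        n_split = min(int(weight / w_high) + 1, 10)   # cap the number of splits
        return [(n_split, weight / n_split)]
    if weight < w_low:
        if rng() < weight / w_survive:            # survive roulette with weight boost
            return [(1, w_survive)]
        return []                                 # particle killed
    return [(1, weight)]

print(apply_weight_window(2.5, w_low=0.5, w_high=1.0))   # split into lighter copies
print(apply_weight_window(0.1, w_low=0.5, w_high=1.0))   # roulette outcome
```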
Spatial chaos of Wang tiles with two symbols
NASA Astrophysics Data System (ADS)
Chen, Jin-Yu; Chen, Yu-Jie; Hu, Wen-Guei; Lin, Song-Sun
2016-02-01
This investigation completely classifies the spatial chaos problem in plane edge coloring (Wang tiles) with two symbols. For a set of Wang tiles B, spatial chaos occurs when the spatial entropy h(B) is positive. B is called a minimal cycle generator if P(B) ≠ ∅ and P(B′) = ∅ whenever B′ ⊊ B, where P(B) is the set of all periodic patterns on ℤ² generated by B. Given a set of Wang tiles B, write B = C1 ∪ C2 ∪ ⋯ ∪ Ck ∪ N, where Cj, 1 ≤ j ≤ k, are minimal cycle generators and B contains no minimal cycle generator except those contained in C1 ∪ C2 ∪ ⋯ ∪ Ck. Then, the positivity of the spatial entropy h(B) is completely determined by C1 ∪ C2 ∪ ⋯ ∪ Ck. Furthermore, there are 39 equivalence classes of marginal positive-entropy (MPE) sets of Wang tiles and 18 equivalence classes of saturated zero-entropy (SZE) sets of Wang tiles. For a set of Wang tiles B, h(B) is positive if and only if B contains an MPE set, and h(B) is zero if and only if B is a subset of an SZE set.
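The notion of a periodic pattern generated by a tile set can be made concrete with a brute-force check over tiny periods. This is purely illustrative and does not scale, and it is unrelated to the entropy classification carried out in the paper.

```python
from itertools import product

# A Wang tile is (north, east, south, west) with edge colours in {0, 1}.
def has_periodic_pattern(tiles, max_period=2):
    """Brute-force search for a doubly periodic tiling with period up to
    max_period in each direction (feasible only for tiny periods and sets)."""
    tiles = list(tiles)
    for p in range(1, max_period + 1):
        for q in range(1, max_period + 1):
            for cells in product(range(len(tiles)), repeat=p * q):
                grid = [[tiles[cells[i * q + j]] for j in range(q)] for i in range(p)]
                ok = all(
                    grid[i][j][1] == grid[i][(j + 1) % q][3] and   # east matches west of right cell
                    grid[i][j][2] == grid[(i + 1) % p][j][0]       # south matches north of cell below
                    for i in range(p) for j in range(q)
                )
                if ok:
                    return True
    return False

# A single tile whose opposite edges agree tiles the plane periodically by itself.
print(has_periodic_pattern([(0, 1, 0, 1)]))   # True
```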
The effect of pavement markings on driving behaviour in curves: a simulator study.
Ariën, Caroline; Brijs, Kris; Vanroelen, Giovanni; Ceulemans, Wesley; Jongen, Ellen M M; Daniels, Stijn; Brijs, Tom; Wets, Geert
2017-05-01
This study investigates the effect of two pavement markings (transverse rumble strips (TRS) and a backward pointing herringbone pattern (HP)) on speed and lateral control in and near curves. Two real-world curves with strong indications of a safety problem were replicated as realistically as possible in the simulator. Results show that both speed and lateral control differ between the curves. These behavioural differences are probably due to curve-related dissimilarities with respect to geometric alignment, cross-sectional design and speed limit. TRS and HP both influenced mean speed and mean acceleration/deceleration but not lateral control. TRS generated an earlier and more stable speed reduction than HP, which induced significant speed reductions along the curve. The TRS gives drivers more time to generate the right expectations about the upcoming curve. When accidents occur primarily near the curve entry, TRS is recommended. The HP has the potential to reduce accidents at the curve end. Practitioner Summary: Two pavement markings (transverse rumble strips and HP) near dangerous curves were investigated in the driving simulator. TRS generated an earlier and more stable speed reduction than HP, which induced speed reductions along the curve. The TRS gives drivers more time to generate the right expectations about the upcoming curve.
[Subjectivity, ethics and productivity in post-productive health restructuring].
Gomes, Doris; Ramos, Flávia Regina Souza
2015-08-01
This paper analyzes the ethical problems generated by the modern stressor pattern of post-transformation productivity in productive restructuring in the health area. It is a qualitative, descriptive and exploratory study in which 30 professionals (nurses, doctors and dental surgeons) from a metropolitan region in the South of Brazil were interviewed, all of whom had prior experience in the public and private sectors. The results were analyzed through Discursive Textual Analysis. Capitalization is revealed as a major ethical problem in the series of new issues derived from the productivity-profitability imperative in health, owing to the uncritical incorporation of an ethics that is restricted to the company's interests or to corporate-individual interests. The ethical problem of low professional commitment to the needs of the patient and of the social collective indicates the need to build a new engaged solidarity in order to increase the quality of public healthcare. Productivity targeted at individual and social needs/interests in the area of health requires a new self-managing and collective engagement of the subjects, supported by an institutional and ethical-political effort of group action, cooperation and solidarity.
Canale, Natale; Vieno, Alessio; Griffiths, Mark D; Borraccino, Alberto; Lazzeri, Giacomo; Charrier, Lorena; Lemma, Patrizia; Dalmasso, Paola; Santinello, Massimo
2017-03-01
The primary aim of the present study was to examine the association between immigrant generation, family sociodemographic characteristics, and problem gambling severity in a large-scale nationally representative sample of Italian youth. Data from the 2013-2014 Health Behaviour in School-aged Children (HBSC) Survey were used for cross-sectional analyses of adolescent problem gambling. Self-administered questionnaires were completed by a representative sample of 20,791 15-year-old students. Respondents' problem gambling severity, immigrant status, family characteristics (family structure, family affluence, perceived family support) and socio-demographic characteristics were individually assessed. Rates of adolescent at-risk/problem gambling were twice as high among first-generation immigrants as among non-immigrant students; the odds of being an at-risk/problem gambler were higher among first-generation immigrants than among adolescents of other immigrant generations or non-immigrant adolescents. Not living with two biological or adoptive parents appears to be a factor that increases the risk of becoming a problem gambler in first-generation immigrants. Immigrant status and family characteristics may play a key role in contributing to adolescent problem gambling. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Barnette, Daniel W.
2002-01-01
The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.
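The patent abstract does not disclose the construction, but one common way to let the solution of the governing relations drive node placement is to equidistribute an error-like monitor function. The sketch below shows that generic technique under assumed choices of monitor and test profile; it is not the patented method.

```python
import numpy as np

def equidistribute(x_uniform, monitor):
    """Place grid nodes so that the integral of a monitor function (e.g. an estimate
    of local truncation error) is equal between neighbouring nodes."""
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (monitor[1:] + monitor[:-1])
                                           * np.diff(x_uniform))])
    cum /= cum[-1]
    targets = np.linspace(0.0, 1.0, len(x_uniform))
    return np.interp(targets, cum, x_uniform)

# Example: cluster nodes where u(x) = tanh(20(x - 0.5)) curves sharply.
x = np.linspace(0.0, 1.0, 101)
u = np.tanh(20.0 * (x - 0.5))
curvature = np.abs(np.gradient(np.gradient(u, x), x))
x_new = equidistribute(x, np.sqrt(1.0 + curvature))   # adapted node locations
```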
Students’ Algebraic Reasoning In Solving Mathematical Problems With Adversity Quotient
NASA Astrophysics Data System (ADS)
Aryani, F.; Amin, S. M.; Sulaiman, R.
2018-01-01
Algebraic reasoning is a process in which students generalize mathematical ideas from a set of particular instances and express them in increasingly formal and age-appropriate ways. Using a problem-solving approach to develop algebraic reasoning in mathematics may enhance the long-term learning trajectory of the majority of students. The purpose of this research was to describe the algebraic reasoning of quitter, camper, and climber junior high school students in solving mathematical problems. This research used a qualitative descriptive method. Subjects were determined by purposive sampling. Data were collected through task-based interviews. The results showed that in the pattern-seeking process all three students identified the known and asked elements in a similar way, but they found the elements of pattern recognition in different ways or methods, and so they generalized the problem of pattern formation in different ways. The study of algebraic reasoning and problem solving can be a learning paradigm for improving students’ knowledge and skills in algebra work. The goal is to help students improve academic competence and develop algebraic reasoning in problem solving.
Alternative Constraint Handling Technique for Four-Bar Linkage Path Generation
NASA Astrophysics Data System (ADS)
Sleesongsom, S.; Bureerat, S.
2018-03-01
This paper proposes an extension of a new concept for path generation from our previous work by adding a new constraint-handling technique. The proposed technique was initially designed for problems without prescribed timing by avoiding the timing constraint, while the remaining constraints are handled with the new constraint-handling technique, which is a kind of penalty technique. In the comparative study, path generation optimisation problems are solved using self-adaptive population size teaching-learning based optimization (SAP-TLBO) and the original TLBO. Two traditional path generation test problems are used to test the proposed technique. The results show that the new technique can be applied to path generation problems without prescribed timing and gives better results than the previous technique. Furthermore, SAP-TLBO outperforms the original TLBO.
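A minimal sketch of penalty-type constraint handling for a path generation objective is given below. The tracking error, penalty weight, dummy coupler-path model and the Grashof-style constraint are generic assumptions, not the SAP-TLBO formulation of the paper.

```python
import numpy as np

def penalized_objective(design, target_points, coupler_path, constraints, rho=1e3):
    """Static-penalty objective: squared tracking error between the generated
    coupler path and the target points, plus rho times the summed violations of
    constraints written in g(x) <= 0 form."""
    path = coupler_path(design)                                    # (n, 2) generated points
    tracking = float(np.sum(np.linalg.norm(path - target_points, axis=1) ** 2))
    violation = sum(max(0.0, g) for g in constraints(design))
    return tracking + rho * violation

def grashof_constraint(link_lengths):
    """Grashof-type feasibility: shortest + longest <= sum of the other two links."""
    s, l = min(link_lengths), max(link_lengths)
    return [s + l - (sum(link_lengths) - s - l)]

# Dummy coupler-path model (illustrative only): a circle scaled by the mean link
# length; a real evaluation would solve the four-bar position kinematics.
theta = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
target = np.column_stack([np.cos(theta), np.sin(theta)])
dummy_path = lambda L: np.mean(L) * target
print(penalized_objective([1.0, 2.0, 2.5, 3.0], target, dummy_path, grashof_constraint))
```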
Center for the Study of Rhythmic Processes.
1987-10-20
Keywords: central pattern generators, neural networks, spinal cord, mathematical modeling, neuromodulators, regeneration, sensory feedback. The report addresses the effects of neuromodulators on the output of the lobster stomatogastric central pattern generator circuit (see Sections III and IV); cited work includes Marder, E. (1987) Neurotransmitters and neuromodulators, in Selverston, A.I. and Moulins, M., and Trends in Neurosciences 9: 432-437.
Photomask and pattern programming manual
NASA Technical Reports Server (NTRS)
Kirschman, R. K.
1978-01-01
A user's manual for a set of computer programs written in FORTRAN for the layout and generation of photomasks is presented. A limited amount of related information on photomasks, their design, and use is included. Input to the programs includes data describing the photomask design. Possible outputs include plots of the layout and a magnetic tape for controlling generation of the photomask by a pattern generator.
Kohli, Monika V; Patil, Gururaj B.; Kulkarni, Narayan B.; Bagalkot, Kishore; Purohit, Zarana; Dave, Nilixa; Sagari, Shitalkumar G; Malaghan, Manjunath
2014-01-01
Background: Feeding mode during infancy and its effect on the appearance of deciduous teeth in the oral cavity in two generations and among genders. Aim and Objective: The study aimed to compare and correlate the times and patterns of deciduous tooth eruption in breastfeeding (OBF), partial breastfeeding (PBF) and spoon feeding (SF) infants, and the initiation of semisolid food feeding (SSF) in infants. It also aimed to address the variations in the time of eruption of the first deciduous tooth and its pattern in two generations with more than a decade of difference in age. Materials and methods: An open-ended questionnaire study was conducted on mothers of 265 patients from two groups: generation 1 (G1), adults aged 20-35 years, and generation 2 (G2), children below 5 years of age. Results: Statistical significance was observed with respect to age, gender, generation, and the frequency of breastfeeding, partial breastfeeding and time of initiation of semisolid food. Conclusion: There is delayed eruption of teeth in the present generation; for girls it occurs at the age of 7.88 months and for boys at the age of 8.08 months. PMID:24783136
Patterns of problem-solving in children's literacy and arithmetic.
Farrington-Flint, Lee; Vanuxem-Cotterill, Sophie; Stiller, James
2009-11-01
Patterns of problem-solving among 5- to 7-year-olds were examined on a range of literacy (reading and spelling) and arithmetic-based (addition and subtraction) problem-solving tasks using verbal self-reports to monitor strategy choice. The results showed higher levels of variability in the children's strategy choice across Years 1 and 2 on the arithmetic (addition and subtraction) than literacy-based tasks (reading and spelling). However, across all four tasks, the children showed a tendency to move from less sophisticated procedural-based strategies, which included phonological strategies for reading and spelling and counting-all and finger modelling for addition and subtraction, to more efficient retrieval methods from Years 1 to 2. Distinct patterns in children's problem-solving skill were identified on the literacy and arithmetic tasks using two separate cluster analyses. There was a strong association between these two profiles showing that those children with more advanced problem-solving skills on the arithmetic tasks also showed more advanced profiles on the literacy tasks. The results highlight how different-aged children show flexibility in their use of problem-solving strategies across literacy and arithmetical contexts and reinforce the importance of studying variations in children's problem-solving skill across different educational contexts.
Alcohol use patterns, problems and policies in Malaysia.
Jernigan, D H; Indran, S K
1997-12-01
The roots of Malaysia's drinking patterns lie in the introduction of most forms of alcohol by Europeans. Although Malaysia today has relatively low per capita alcohol consumption, available studies and interviews with alcohol industry officials point to a small segment of the population that drinks heavily and causes and experiences substantial alcohol-related problems. Indians are over-represented in this sub-population, but studies also reveal substantial drinking problems among Chinese and Malays. Government officials categorize alcohol as an Indian problem. The government devotes few resources to monitoring drinking patterns, use or problems, or to preventing, treating or educating the public about alcohol-related problems. Alcohol-producing transnational corporations own shares of all of Malaysia's major alcohol producers. In the face of high alcohol taxes and a ban on broadcast advertising of alcoholic beverages, these companies market alcohol aggressively, making health claims, targeting heavy drinkers and encouraging heavy drinking, employing indirect advertising, and using women in seductive poses and occupations to attract the mostly male drinking population. Monitoring of the country's alcohol problems is greatly needed in order to establish alcohol consumption more clearly as a national health and safety issue, while stronger controls and greater corporate responsibility are required to control alcohol marketing.
A corollary discharge maintains auditory sensitivity during sound production
NASA Astrophysics Data System (ADS)
Poulet, James F. A.; Hedwig, Berthold
2002-08-01
Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.
A generative spike train model with time-structured higher order correlations.
Trousdale, James; Hu, Yu; Shea-Brown, Eric; Josić, Krešimir
2013-01-01
Emerging technologies are revealing the spiking activity in ever larger neural ensembles. Frequently, this spiking is far from independent, with correlations in the spike times of different cells. Understanding how such correlations impact the dynamics and function of neural ensembles remains an important open problem. Here we describe a new, generative model for correlated spike trains that can exhibit many of the features observed in data. Extending prior work in mathematical finance, this generalized thinning and shift (GTaS) model creates marginally Poisson spike trains with diverse temporal correlation structures. We give several examples which highlight the model's flexibility and utility. For instance, we use it to examine how a neural network responds to highly structured patterns of inputs. We then show that the GTaS model is analytically tractable, and derive cumulant densities of all orders in terms of model parameters. The GTaS framework can therefore be an important tool in the experimental and theoretical exploration of neural dynamics.
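The thinning-and-shift idea can be sketched as follows: each spike of a mother Poisson process is copied to a randomly chosen subset of neurons and shifted by per-neuron offsets, which leaves each train marginally Poisson while inducing temporal correlations. The subset probabilities, shifts and wrap-around below are illustrative assumptions rather than the full GTaS construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def thinning_shift_trains(T, lam_mother, subset_probs, shifts):
    """Spikes of a mother Poisson process (rate lam_mother on [0, T]) are copied to
    a randomly chosen subset of neurons and shifted by fixed per-neuron offsets.
    subset_probs maps tuples of neuron indices to probabilities summing to 1."""
    n_spikes = rng.poisson(lam_mother * T)
    mother = rng.uniform(0.0, T, size=n_spikes)
    subsets = list(subset_probs)
    probs = np.array([subset_probs[s] for s in subsets])
    trains = [[] for _ in range(len(shifts))]
    for t in mother:
        chosen = subsets[rng.choice(len(subsets), p=probs)]
        for i in chosen:
            trains[i].append((t + shifts[i]) % T)   # wrap shifted spikes back into [0, T)
    return [np.sort(np.asarray(tr)) for tr in trains]

# Two neurons: 30% of mother spikes are shared, with a 5 ms lag on neuron 1,
# producing a temporally structured cross-correlation.
trains = thinning_shift_trains(T=10.0, lam_mother=20.0,
                               subset_probs={(): 0.3, (0,): 0.2, (1,): 0.2, (0, 1): 0.3},
                               shifts=[0.0, 0.005])
```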
Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.
2013-01-01
Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738
NASA Technical Reports Server (NTRS)
Cramer, Alexander Krishnan
2014-01-01
This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun on a high altitude balloon. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small cross-shaped fiducial markers. Images of this plate taken with an off-the-shelf camera were processed to determine the position of the balloon payload relative to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "Average Intersection" method, fiducial detection by a matched filter approach, and identification with an ad-hoc method based on the spacing between fiducials. Performance is verified on real test data where possible, but otherwise uses artificially generated data. Pointing knowledge is ultimately verified to meet the 20 arcsecond requirement.
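The matched-filter step for fiducial detection can be sketched as an FFT-based cross-correlation with a cross-shaped template. The template shape, threshold and lack of non-maximum suppression are assumptions; the "Average Intersection" circle detector and the identification stage are not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve

def cross_template(size=11, arm=1):
    """Synthetic cross-shaped fiducial template (an assumption about the marker shape)."""
    t = np.zeros((size, size))
    c = size // 2
    t[c - arm:c + arm + 1, :] = 1.0
    t[:, c - arm:c + arm + 1] = 1.0
    return t

def detect_fiducials(image, template, threshold=0.7):
    """Matched filtering as cross-correlation (FFT-based); returns candidate peak
    coordinates above a relative threshold (no non-maximum suppression here)."""
    img = image - image.mean()
    tmp = template - template.mean()
    corr = fftconvolve(img, tmp[::-1, ::-1], mode="same")   # correlation via flipped kernel
    corr /= np.abs(corr).max()
    return np.argwhere(corr > threshold), corr
```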
Li, Yang; Li, Guoqing; Wang, Zhenhao
2015-01-01
In order to overcome the problems of poor understandability of the pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of ELM and Ant-miner algorithm are respectively introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. And finally, a set of classification rules are obtained by IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted from an example sample set generated by the trained ELM-based transient stability assessment model by using IAM algorithm. The effectiveness of the proposed method is shown by the application results on the New England 39-bus power system and a practical power system--the southern power system of Hebei province.
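The ELM stage is straightforward to sketch: fixed random hidden-layer weights and a single least-squares solve for the output weights. Network size, activation and the toy data below are generic choices, and the Ant-miner rule-extraction stage is not shown.

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random input weights and biases, sigmoid hidden
    layer, output weights obtained from a single least-squares solve."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy binary classification: label = 1 if the feature sum is positive.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X.sum(axis=1) > 0).astype(float)
W, b, beta = train_elm(X, y)
acc = ((elm_predict(X, W, b, beta) > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2f}")
```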
Generation time, life history and the substitution rate of neutral mutations.
Lehtonen, Jussi; Lanfear, Robert
2014-11-01
Our understanding of molecular evolution is hampered by a lack of quantitative predictions about how life-history (LH) traits should correlate with substitution rates. Comparative studies have shown that neutral substitution rates vary substantially between species, and evidence shows that much of this diversity is associated with variation in LH traits. However, while these studies often agree, some unexplained and contradictory results have emerged. Explaining these results is difficult without a clear theoretical understanding of the problem. In this study, we derive predictions for the relationships between LH traits and substitution rates in iteroparous species by using demographic theory to relate commonly measured life-history traits to genetic generation time, and by implication to neutral substitution rates. This provides some surprisingly simple explanations for otherwise confusing patterns, such as the association between fecundity and substitution rates. The same framework can be applied to more complex life histories if full life-tables are available. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
The quest for restoring hearing: Understanding ear development more completely.
Jahan, Israt; Pan, Ning; Elliott, Karen L; Fritzsch, Bernd
2015-09-01
Neurosensory hearing loss is a growing problem of super-aged societies. Cochlear implants can restore some hearing, but rebuilding a lost hearing organ would be superior. Research has discovered many cellular and molecular steps to develop a hearing organ but translating those insights into hearing organ restoration remains unclear. For example, we cannot make various hair cell types and arrange them into their specific patterns surrounded by the right type of supporting cells in the right numbers. Our overview of the topologically highly organized and functionally diversified cellular mosaic of the mammalian hearing organ highlights what is known and unknown about its development. Following this analysis, we suggest critical steps to guide future attempts toward restoration of a functional organ of Corti. We argue that generating mutant mouse lines that mimic human pathology to fine-tune attempts toward long-term functional restoration are needed to go beyond the hope generated by restoring single hair cells in postnatal sensory epithelia. © 2015 WILEY Periodicals, Inc.
A Comparative Study of Random Patterns for Digital Image Correlation
NASA Astrophysics Data System (ADS)
Stoilov, G.; Kavardzhikov, V.; Pashkouleva, D.
2012-06-01
Digital Image Correlation (DIC) is a computer-based image analysis technique utilizing random patterns, which finds applications in experimental mechanics of solids and structures. In this paper, a comparative study of three simulated random patterns is carried out. One of them is generated according to a new algorithm introduced by the authors. A criterion for quantitative evaluation of random patterns after the calculation of their autocorrelation functions is introduced. The patterns' deformations are simulated numerically and realized experimentally. The displacements are measured by using the DIC method. Tensile tests are performed after printing the generated random patterns on surfaces of standard iron sheet specimens. It is found that the newly designed random pattern retains relatively good quality up to 20% deformation.
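An autocorrelation-based evaluation of a speckle pattern can be sketched with the Wiener-Khinchin relation, summarising each pattern by the radius at which the central correlation peak decays. The 0.5 decay level and the scalar summary are assumptions, not necessarily the criterion introduced in the paper.

```python
import numpy as np

def autocorrelation(pattern):
    """Normalized 2-D autocorrelation via the FFT of the power spectrum."""
    z = pattern - pattern.mean()
    F = np.fft.fft2(z)
    ac = np.real(np.fft.ifft2(F * np.conj(F)))
    ac = np.fft.fftshift(ac)
    return ac / ac.max()

def correlation_radius(ac, level=0.5):
    """Distance (in pixels) at which the central peak first drops below `level`;
    a compact scalar for comparing candidate random patterns."""
    cy, cx = np.array(ac.shape) // 2
    row = ac[cy, cx:]
    below = np.nonzero(row < level)[0]
    return int(below[0]) if below.size else len(row)

# Usage: radius = correlation_radius(autocorrelation(speckle_image))
```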
Deaf-And-Mute Sign Language Generation System
NASA Astrophysics Data System (ADS)
Kawai, Hideo; Tamura, Shinichi
1984-08-01
We have developed a system which can recognize speech and generate the corresponding animation-like sign language sequence. The system is implemented on a popular personal computer, which has three video RAMs and a voice recognition board that can recognize only the registered voice of a specific speaker. Presently, forty sign language patterns and fifty finger spellings are stored on two floppy disks. Each sign pattern is composed of one to four sub-patterns: if the pattern is composed of one sub-pattern, it is displayed as a still pattern; if not, it is displayed as a motion pattern. This system will help communication between deaf-and-mute persons and healthy persons. In order to display at high speed, most of the programs are written in machine language.
Scalable patterning using laser-induced shock waves
NASA Astrophysics Data System (ADS)
Ilhom, Saidjafarzoda; Kholikov, Khomidkhodza; Li, Peizhen; Ottman, Claire; Sanford, Dylan; Thomas, Zachary; San, Omer; Karaca, Haluk E.; Er, Ali O.
2018-04-01
An advanced direct imprinting method that is low cost, quick, and of minimal environmental impact for creating thermally controllable surface patterns using laser pulses is reported. Patterned microindents were generated on Ni50Ti50 shape memory alloys and aluminum using an Nd:YAG laser operating at 1064 nm combined with a suitable transparent overlay, a sacrificial layer of graphite, and a copper grid. Laser pulses at different energy densities, which generate pressure pulses up to a few GPa on the surface, were focused through the confinement medium, ablating the copper grid to create plasma and transferring the grid pattern onto the surface. Scanning electron microscope and optical microscope images show that various patterns were obtained on the surface with high fidelity. One-dimensional profile analysis indicates that the depth of the patterned sample initially increases with the laser energy and later levels off. Our simulations of the laser irradiation process also confirm that high temperature and high pressure can be generated when a laser energy density of 2 J/cm² is used.
Laidler, Matthew R; Tourdjman, Mathieu; Buser, Genevieve L; Hostetler, Trevor; Repp, Kimberly K; Leman, Richard; Samadpour, Mansour; Keene, William E
2013-10-01
An outbreak of Escherichia coli O157:H7 was identified in Oregon through an increase in Shiga toxin-producing E. coli cases with an indistinguishable, novel pulsed-field gel electrophoresis (PFGE) subtyping pattern. We defined confirmed cases as persons from whom E. coli O157:H7 with the outbreak PFGE pattern was cultured during July-August 2011, and presumptive cases as persons having a household relationship with a case testing positive for E. coli O157:H7 and coincident diarrheal illness. We conducted an investigation that included structured hypothesis-generating interviews, a matched case-control study, and environmental and traceback investigations. We identified 15 cases. Six cases were hospitalized, including 4 with hemolytic uremic syndrome (HUS). Two cases with HUS died. Illness was significantly associated with strawberry consumption from roadside stands or farmers' markets (matched odds ratio, 19.6; 95% confidence interval, 2.9-∞). A single farm was identified as the source of contaminated strawberries. Ten of 111 (9%) initial environmental samples from farm A were positive for E. coli O157:H7. All samples testing positive for E. coli O157:H7 contained deer feces, and 5 tested farm fields had ≥ 1 sample positive with the outbreak PFGE pattern. The investigation identified fresh strawberries as a novel vehicle for E. coli O157:H7 infection, implicated deer feces as the source of contamination, and highlights problems concerning produce contamination by wildlife and regulatory exemptions for locally grown produce. A comprehensive hypothesis-generating questionnaire enabled rapid identification of the implicated product. Good agricultural practices are key barriers to wildlife fecal contamination of produce.
Simulation of spatial and temporal properties of aftershocks by means of the fiber bundle model
NASA Astrophysics Data System (ADS)
Monterrubio-Velasco, Marisol; Zúñiga, F. R.; Márquez-Ramírez, Victor Hugo; Figueroa-Soto, Angel
2017-11-01
The rupture processes of any heterogeneous material constitute a complex physical problem. Earthquake aftershocks show temporal and spatial behaviors which are a consequence of the heterogeneous stress distribution and multiple rupturing following the main shock. This process is difficult to model deterministically due to the number of parameters and physical conditions, which are largely unknown. In order to shed light on the minimum requirements for the generation of aftershock clusters, in this study, we perform a simulation of the main features of such a complex process by means of a fiber bundle (FB) type model. The FB model has been widely used to analyze the fracture process in heterogeneous materials. It is a simple but powerful tool that allows modeling the main characteristics of a medium such as the brittle shallow crust of the earth. In this work, we incorporate spatial properties, such as the Coulomb stress change pattern, which help simulate observed characteristics of aftershock sequences. In particular, we introduce a parameter (P) that controls the probability of spatial distribution of initial loads. Also, we use a "conservation" parameter (π), which accounts for the load dissipation of the system, and demonstrate its influence on the simulated spatio-temporal patterns. Based on numerical results, we find that P has to be in the range 0.06 < P < 0.30, whilst π needs to be limited to a very narrow range (0.60 < π < 0.66) in order to reproduce aftershock pattern characteristics which resemble those of observed sequences. This means that the system requires a small difference in the spatial distribution of initial stress, and a very particular fraction of load transfer, in order to generate realistic aftershocks.
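A minimal equal-load-sharing fibre-bundle sketch with a conservation (dissipation) fraction and a crude spatial-spread parameter for the initial loads is given below. The threshold distribution, driving rule and numerical values are assumptions and are not calibrated to the P and π ranges reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def fiber_bundle_avalanches(n=10_000, pi_conserve=0.63, p_spread=0.15, steps=2000):
    """Equal-load-sharing fibre bundle: random strength thresholds, a partly
    localized initial load field, and a fraction pi_conserve of each broken
    fibre's load redistributed to survivors (the rest dissipated).
    Returns the avalanche size (fibres broken) per step."""
    strength = rng.weibull(2.0, n)                          # heterogeneous thresholds
    load = np.where(rng.random(n) < p_spread,               # highly loaded subset of fibres
                    rng.uniform(0.5, 1.0, n), rng.uniform(0.0, 0.1, n))
    alive = np.ones(n, dtype=bool)
    sizes = []
    for _ in range(steps):
        broken = alive & (load >= strength)
        k = int(broken.sum())
        sizes.append(k)
        if k == 0:
            load[alive] += 1e-3                             # slow external driving
            continue
        transfer = pi_conserve * load[broken].sum()
        alive[broken] = False
        if not alive.any():
            break
        load[alive] += transfer / alive.sum()               # equal-load-sharing redistribution
    return np.array(sizes)

sizes = fiber_bundle_avalanches()
print("largest avalanche:", sizes.max())
```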
NASA Astrophysics Data System (ADS)
López-Comino, José Ángel; Stich, Daniel; Ferreira, Ana M. G.; Morales, Jose
2015-09-01
Inversions for the full slip distribution of earthquakes provide detailed models of earthquake sources, but the stability and non-uniqueness of the inversions are major concerns. The problem is underdetermined in any realistic setting, and significantly different slip distributions may translate into fairly similar seismograms. In such circumstances, inverting for a single best model may become overly dependent on the details of the procedure. Instead, we propose to perform extended fault inversion through falsification. We generate a representative set of heterogeneous slip maps, compute their forward predictions, and falsify inappropriate trial models that do not reproduce the data within a reasonable level of mismodelling. The remaining, surviving trial models form our set of coequal solutions. The solution set may contain only members with similar slip distributions, or else uncover some fundamental ambiguity such as, for example, different patterns of main slip patches. For a feasibility study, we use teleseismic body wave recordings from the 2012 September 5 Nicoya, Costa Rica earthquake, although the inversion strategy can be applied to any type of seismic, geodetic or tsunami data for which we can handle the forward problem. We generate 10 000 pseudo-random, heterogeneous slip distributions assuming a von Karman autocorrelation function, keeping the rake angle, rupture velocity and slip velocity function fixed. The slip distribution of the 2012 Nicoya earthquake turns out to be relatively well constrained by 50 teleseismic waveforms. Two hundred fifty-two slip models with normalized L1-fit within 5 per cent of the global minimum form our solution set. They consistently show a single dominant slip patch around the hypocentre. Uncertainties are related to the details of the slip maximum, including the amount of peak slip (2-3.5 m), as well as the characteristics of peripheral slip below 1 m. Synthetic tests suggest that a slip pattern such as Nicoya's may be a fortunate case, and that it may be more difficult to unambiguously reconstruct more distributed slip from teleseismic data.
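The core of this falsification strategy can be prototyped in a few lines once a forward model is available. The sketch below uses a toy linear forward operator and a simple spectral recipe for smooth, correlated random slip; the operator G, the data, the spectral falloff and all grid sizes are placeholders, not the authors' teleseismic setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear forward problem d = G @ m standing in for the teleseismic forward
# modelling; G, the noise level, and the grid sizes are placeholder assumptions.
nx, nz = 32, 16                      # fault grid: along strike x down dip
n_obs, n_trials = 200, 10_000
tol = 1.05                           # falsify models worse than 1.05x the best L1 fit

G = rng.normal(size=(n_obs, nx * nz))
m_true = np.zeros((nz, nx))
m_true[6:10, 12:20] = 2.5            # one compact slip patch (metres)
d_obs = G @ m_true.ravel() + 0.5 * rng.normal(size=n_obs)

def random_slip(corr_len=6.0):
    """Pseudo-random, non-negative slip map with a smooth spectral falloff
    (a crude stand-in for the von Karman autocorrelation used in the study)."""
    kx = np.fft.fftfreq(nx)[None, :]
    kz = np.fft.fftfreq(nz)[:, None]
    k = np.sqrt(kx ** 2 + kz ** 2)
    spec = 1.0 / (1.0 + (corr_len * k) ** 2)
    field = np.real(np.fft.ifft2(np.fft.fft2(rng.normal(size=(nz, nx))) * spec))
    field -= field.min()
    return field * (m_true.sum() / field.sum())   # fix a crude moment proxy

trials = np.stack([random_slip().ravel() for _ in range(n_trials)])
misfit = np.abs(trials @ G.T - d_obs).sum(axis=1)     # L1 misfit per trial model

survivors = trials[misfit <= tol * misfit.min()]      # the set of coequal solutions
print(f"{len(survivors)} of {n_trials} trial slip models survive falsification")
```

Whether the surviving models agree on a single slip patch or split into distinct families is then a direct, data-driven statement about resolution, which is the point of the approach.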
NASA Astrophysics Data System (ADS)
Kinard, Melissa Grass
Scientific communities have established social mechanisms for proposing explanations, questioning evidence, and validating claims. Opportunities like these are often not a given in science classrooms (Vellom, Anderson, & Palincsar, 1993), even though the National Science Education Standards (NSES, 1996) state that a scientifically literate person should be able to "engage intelligently in public discourse and debate about important issues in science and technology" (National Research Council [NRC], 1996). Research further documents that students' science conceptions undergo little modification with the traditional teaching experienced in many high school science classrooms (Duit, 2003; Dykstra, 2005). This case study is an examination of the discourse that occurred as four high school physics students collaborated on solutions to three physics lab problems, during which the students made predictions and experimentally generated data to support their predictions. The discourse patterns were initially examined for instances of concept negotiations. Selected instances were further examined using Toulmin's (2003) pattern for characterizing argumentation in order to understand the students' scientific reasoning strategies and to document the role of collaboration in facilitating conceptual modifications and changes. Audio recordings of the students' conversations during the labs, written problems turned in to the teacher, interviews of the students, and observations and field notes taken during student collaboration were used to document and describe the students' challenges and successes encountered during their collaborative work. The findings of the study indicate that collaboration engaged the students and generated two types of productive science discourse: concept negotiations and procedure negotiations. Further analysis of the concept and procedure negotiations revealed that the students viewed science as sensible and plausible but not as a tool they could employ to answer their questions. The students' conceptual growth was inhibited by their allegiance to the authority of the science laws as learned in their school classroom. Thus, collaboration did not ensure conceptual change. Describing student discourse in situ contributes to science education research about teaching practices that facilitate conceptual understandings in the science classroom.
Baby Carriage: Infants Walking with Loads
ERIC Educational Resources Information Center
Garciaguirre, Jessie S.; Adolph, Karen E.; Shrout, Patrick E.
2007-01-01
Maintaining balance is a central problem for new walkers. To examine how infants cope with the additional balance control problems induced by load carriage, 14-month-olds were loaded with 15% of their body weight in shoulder-packs. Both symmetrical and asymmetrical loads disrupted alternating gait patterns and caused less mature footfall patterns.…
A Study on the Sleep Patterns and Problems of University Business Students in Hong Kong
ERIC Educational Resources Information Center
Tsui, Y. Y.; Wing, Y. K.
2009-01-01
Objective: To investigate sleep patterns and problems of university business students. Participants: Undergraduate Chinese business students in Hong Kong. Methods: Self-reported questionnaires were completed during class lectures and through an online system. Results: Of the 620 participating students (mean age 19.9 years), sleep duration was…
Touch Processing and Social Behavior in ASD
ERIC Educational Resources Information Center
Miguel, Helga O.; Sampaio, Adriana; Martínez-Regueiro, Rocío; Gómez-Guerrero, Lorena; López-Dóriga, Cristina Gutiérrez; Gómez, Sonia; Carracedo, Ángel; Fernández-Prieto, Montse
2017-01-01
Abnormal patterns of touch processing have been linked to core symptoms in ASD. This study examined the relation between tactile processing patterns and social problems in 44 children and adolescents with ASD, aged 6-14 (M = 8.39 ± 2.35). Multiple linear regression indicated significant associations between touch processing and social problems. No…
Sleep Habits and Patterns of College Students: A Preliminary Study.
ERIC Educational Resources Information Center
Buboltz, Walter C.; Brown, Franklin; Soper, Barlow
2001-01-01
Surveyed college students regarding their sleep habits, patterns, and problems. A large majority had at least occasional sleep problems, with women reporting more of some difficulties than men. The most common sleep difficulties were taking more than 30 minutes to fall asleep, trouble falling asleep more than three times per week, morning…
Module Based Complexity Formation: Periodic Patterning in Feathers and Hairs
Chuong, Cheng-Ming; Yeh, Chao-Yuan; Jiang, Ting-Xin; Widelitz, Randall
2012-01-01
Patterns describe order which emerges from homogeneity. Complex patterns on the integument are striking because of their visibility throughout an organism's lifespan. Periodic patterning is an effective design because the ensemble of hair or feather follicles (modules) allows the generation of complexity, including regional variations and cyclic regeneration, giving the skin appendages a new lease on life. Spatial patterns include the arrangements of feathers and hairs in specified number, size, and spacing. We explore how a field of equivalent progenitor cells can generate periodically arranged modules based on genetic information, physical-chemical rules and developmental timing. Reconstitution experiments suggest a competitive equilibrium regulated by activators / inhibitors involving Turing reaction-diffusion. Temporal patterns result from oscillating stem cell activities within each module (micro-environment regulation), reflected as growth (anagen) and resting (telogen) phases during the cycling of feather and hair follicles. Stimulating modules with activators initiates the spread of regenerative hair waves, while global inhibitors outside each module (macro-environment) prevent this. Different wave patterns can be simulated by Cellular Automata principles. Hormonal status and seasonal changes can modulate appendage phenotypes, leading to “organ metamorphosis”, with multiple ectodermal organ phenotypes generated from the same precursors. We discuss potential evolutionary novel steps using this module based complexity in several amniote integument organs, exemplified by the spectacular peacock feather pattern. We thus explore the application of the acquired knowledge of patterning in tissue engineering. New hair follicles can be generated after wounding. Hairs and feathers can be reconstituted through self-organization of dissociated progenitor cells. PMID:23539312
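To make the Turing-type mechanism concrete, the sketch below runs a generic activator-inhibitor (Gray-Scott) reaction-diffusion system that settles into a periodic spot pattern, a common stand-in for the spacing of follicle primordia. The parameter values are textbook defaults for the spot-forming regime, not measurements from feather or hair systems.

```python
import numpy as np

rng = np.random.default_rng(2)

# Generic Gray-Scott activator-inhibitor system on a periodic grid.
n = 128
Du, Dv = 0.16, 0.08          # diffusion coefficients (substrate u, activator v)
F, k = 0.0367, 0.0649        # feed/kill rates in a spot-forming regime

u = np.ones((n, n))
v = np.zeros((n, n))
u[54:74, 54:74] = 0.50       # a local perturbation seeds the pattern
v[54:74, 54:74] = 0.25 + 0.05 * rng.random((20, 20))

def laplacian(a):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

for _ in range(10_000):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1.0 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# Crude proxy for follicle-like spots: cells with high activator concentration.
spots = int((v > v.mean() + v.std()).sum())
print(f"high-activator cells after patterning: {spots}")
```

Replacing the global parameters F and k with spatially varying fields is one way to mimic regional variation and macro-environmental inhibition, while wave-like regeneration is usually modeled separately with cellular automaton rules.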
Satisfaction, Challenges, and Interaction in Online Education: A Generational Comparison
ERIC Educational Resources Information Center
Yousef, Martin C.
2012-01-01
Problem: Research suggests that multiple generations of students (predominantly Generation X and millennials) are concurrently enrolled in online classes and that the number of online students continues to grow. The problem investigated in this study was to identify the level of satisfaction as well as the preferences of students from Generation X…
Automatic Item Generation via Frame Semantics: Natural Language Generation of Math Word Problems.
ERIC Educational Resources Information Center
Deane, Paul; Sheehan, Kathleen
This paper is an exploration of the conceptual issues that have arisen in the course of building a natural language generation (NLG) system for automatic test item generation. While natural language processing techniques are applicable to general verbal items, mathematics word problems are particularly tractable targets for natural language…
Generating moment matching scenarios using optimization techniques
Mehrotra, Sanjay; Papp, Dávid
2013-05-16
An optimization-based method is proposed to generate moment-matching scenarios for numerical integration and for use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution, rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound of Tchakaloff's. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefit of being able to match any prescribed set of moments, rather than all moments up to a certain order, is also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.
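As a rough sketch of the column-sampling idea, the toy example below draws a large pool of candidate points and then uses a linear program to find non-negative weights that reproduce a prescribed set of first and second moments exactly; a basic solution of the LP automatically concentrates the weight on a handful of scenarios. The pool size, the target moments, and the use of scipy's HiGHS dual simplex are assumptions for illustration, not the paper's semi-infinite LP algorithm.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)

dim = 3
pool = rng.normal(size=(5000, dim))      # candidate scenarios sampled from N(0, I)

def moment_features(x):
    """Prescribed moment functions: all first moments and second (cross) moments."""
    iu = np.triu_indices(dim)
    second = x[:, :, None] * x[:, None, :]
    return np.hstack([x, second[:, iu[0], iu[1]]])

# One column per candidate point, one row per prescribed moment, plus a row
# forcing the weights to sum to one (so they define a discrete distribution).
A_eq = np.vstack([moment_features(pool).T, np.ones(len(pool))])
iu = np.triu_indices(dim)
b_eq = np.concatenate([np.zeros(dim),        # target E[x_i]     = 0
                       np.eye(dim)[iu],      # target E[x_i x_j] = identity
                       [1.0]])

# Zero objective: any basic feasible solution will do, and a vertex of this LP
# uses at most as many scenarios as there are rows (a Tchakaloff-type count).
res = linprog(c=np.zeros(len(pool)), A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs-ds")
if res.status == 0:
    support = np.flatnonzero(res.x > 1e-9)
    print(f"{len(support)} scenarios reproduce the prescribed moments exactly")
else:
    print("no feasible reweighting found for this pool:", res.message)
```

In this toy setting typically at most 10 of the 5,000 sampled points end up carrying weight, which mirrors the observation above that randomly sampled columns usually suffice.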