Learning Appropriate Contexts -- Selected Essay Sample

2016-02-03  Source: 51due tutor group  Category: Essay samples

51Due presents a selected essay sample, "Learning Appropriate Contexts". It extends genetic programming so that different solutions develop to exploit different parts of the problem domain, and asks how to tell when it is the content of a model that is wrong and when it is the context, proposing some principles for identifying contexts. Although there are now many applications of context in AI aimed at improving the generality of inference, context-related notions have been much rarer in machine learning: inductive learning, evolutionary computing and reinforcement techniques do not seem to make much use of them.

There have been some attempts to apply context-detection methods to neural networks, so that a network can learn a pattern more efficiently, but these have been limited in conception to fixes for existing algorithms, with the designer building in the mechanism that detects and responds to the necessary changes. The essay sample below elaborates.

Abstract
Genetic Programming is extended so that the solutions being evolved do so in the context of local domains within the total problem domain. This produces a situation where different "species" of solution develop to exploit different "niches" of the problem – indicating exploitable solutions. It is argued that for context to be fully learnable a further step of abstraction is necessary. Such contexts, abstracted from clusters of solution/model domains, make sense of the problem of how to identify when it is the content of a model that is wrong and when it is the context. Some principles of learning to identify useful contexts are proposed.
Keywords: learning, conditions of application, context, evolutionary computing, error

Introduction 
In AI there have now been many applications of context and context-like notions with a view to improving the robustness and generality of inference. In the field of machine learning applications of context-related notions have been much rarer and, when they do occur, less fundamental. Inductive learning, evolutionary computing and reinforcement techniques do not seem to have much use for the notion. 

There have been some attempts to apply context detection methods to neural networks, so that a network can more efficiently learn more than one kind of pattern but these have been limited in conception to fixes for existing algorithms. Of course, if one knows in advance that there will be several relevant contexts, the human designer (who is naturally adept at distinguishing the appropriate context) can ‘hard-wire’ some mechanism so that the learning algorithm can detect and make the sudden change necessary (for example simply switching to a new neural network) to adjust to a new context. But if one does not have such prior knowledge then this is not possible – the appropriate contexts have to be learnt at the same time as the content of the models. In such cases the question is “why does one need separate parts of the model for context and content, why not just combine them into a unitary model?”. 

If one does not combine them one always has the problem of determining whether any shortcoming in the model is due to a misidentification of context or simply erroneous content – a problem that is impossible to solve just by looking at the context & content of a model on its own. Rather the tendency has often been, in the absence of a good reason to do otherwise, to simplify things by combining the conditions of application of a model explicitly into the model content. This paper seeks to make some practical proposals as to how notions of conditions of applicability and then contexts themselves can be introduced into evolutionary computing. Such a foray includes suggestions for principles for learning and identifying the appropriate contexts without prior knowledge.

Adding conditions of applicability to evolving models
Standard Evolutionary Computing Algorithms
Almost all evolutionary computing algorithms have the following basic structure:
• There is a target problem;
• There is a population of candidate models/solutions (initially random);
• Each iteration some/all of the models are evaluated against the problem (either competitively against each other or by being given a fitness score);
• The algorithm is such that the models which perform better at the problem are preferentially selected for, so the worse models tend to be discarded;
• There is some operator which introduces variation into the population;
• At any particular time the model which currently performs best is the "result" of the computation (usually taken at the end).

There are various different approaches within this framework, for example genetic programming (GP) (Koza, 1992). With GP the population of models can have a tree structure of any shape, with the nodes and terminals taken from a fixed vocabulary. The models are interpreted as functions or programs to solve the given problem, and are usually given a numeric measure of their success at this – their "fitness". The models are propagated into the next generation with a probability correlated with this fitness. The variation is provided by "crossing" the tree structures, as shown in figure 1.
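The generic structure listed above can be sketched in code. This is a minimal illustrative loop, not the paper's implementation: the `evolve` function, the truncation-selection scheme and the OneMax toy problem are all assumptions chosen for brevity.

```python
import random

def evolve(fitness, random_model, vary, pop_size=50, generations=100, seed=0):
    """Generic evolutionary loop: evaluate the population, preferentially
    select the better models, and introduce variation each iteration."""
    rng = random.Random(seed)
    pop = [random_model(rng) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]           # better models survive
        pop = parents + [vary(rng.choice(parents), rng) for _ in parents]
    return max(pop, key=fitness)                   # current best = the "result"

# Toy problem: maximise the number of 1-bits in a bitstring ("OneMax").
N = 20
best = evolve(
    fitness=sum,
    random_model=lambda rng: [rng.randint(0, 1) for _ in range(N)],
    vary=lambda m, rng: [b ^ (rng.random() < 1 / N) for b in m],  # bit-flip mutation
)
```

A GP system would replace the bitstrings with expression trees and the mutation with tree crossover, but the surrounding loop is the same.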

Adding Conditions of Application 
Thus the first step is to allow each candidate model to specialise in different parts of the problem domain. For this to be possible, success at solving the target problem must be evaluated locally, without the model being (unduly) penalised for not being globally successful. In evolutionary terms, we allow the problem domain to be the environment and allow different models to co-exist in different "niches" corresponding to particular sub-spaces. A disadvantage of this technique is that once the algorithm has finished it does not provide you with a single best answer, but rather a whole collection of models, each with different domains of application. If you want a complete solution you have to analyse the results of the computation and piece together a compound model out of several models which work in different domains – this will not be a simple model with a neat closed form. Also there may be tough areas of the problem where one does not find any acceptable models at all. Of course, these "cons" are relative – if one had used a standard universal algorithm (that is, all models having the same domain as the problem and evaluated over that domain), then the resulting "best" model might well not perform well over the whole domain, and its form might be correspondingly more complex as it had to deal with the whole problem at once.
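Local evaluation of this kind can be sketched as follows. This is an assumed illustration, not the paper's code: the name `local_fitness`, the interval representation of a domain, and the piecewise target are all invented for the example.

```python
import math

def local_fitness(model, domain, data):
    """Score a model only on the data points inside its own domain,
    so it is not penalised for failing outside its niche."""
    lo, hi = domain
    pts = [(x, y) for x, y in data if lo <= x <= hi]
    if not pts:
        return 0.0
    rmse = math.sqrt(sum((model(x) - y) ** 2 for x, y in pts) / len(pts))
    return 1.0 / (1.0 + rmse)

# Piecewise target y = |x|: the model f(x) = -x is perfect on x <= 0
# (local fitness 1.0) but mediocre over the whole domain.
data = [(x / 10, abs(x / 10)) for x in range(-10, 11)]
left_specialist = lambda x: -x
```

A niched population would hold many such (model, domain) pairs, with models competing only where their domains overlap.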

An Example Application 
The example implementation I will describe is that of applying the above algorithm to predicting the number of sunspots (shown in fig 4 below). The fitness function is the inverse of the root mean squared error of the prediction of the model as compared to the actual data. The models are constructed with the nodes: PLUS, MINUS, TIMES, SAFEDIVIDE, SIN and COS, and the terminals: x, x1, x2, x4, x8, x16 (which stand for the current time period and then the number of sunspots with lags 1, 2, 4, 8, and 16 time periods respectively), along with a random selection of numeric constants. The smaller the domain, the greater the average model fitness. This is because it is much easier to "fit" an expression to a single point than to "fit" longer sections of the graph, with fitting the whole graph being the most difficult.
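The fitness measure and the lagged terminals just described can be sketched like this. The exact interface is an assumption (the paper does not give code); only the lag set {1, 2, 4, 8, 16}, the inverse-RMSE fitness and the SAFEDIVIDE node come from the text, with SAFEDIVIDE implemented here as the usual protected division.

```python
import math

LAGS = [1, 2, 4, 8, 16]   # the terminals x1, x2, x4, x8, x16 from the text

def safe_divide(a, b):
    """Protected division: return a default value instead of dividing by zero."""
    return a / b if b != 0 else 1.0

def fitness(model, series):
    """Inverse RMSE of one-step-ahead predictions. The model receives the
    current time index and a dict of the lagged series values."""
    errors = []
    for t in range(max(LAGS), len(series)):
        lagged = {f"x{l}": series[t - l] for l in LAGS}
        errors.append((model(t, lagged) - series[t]) ** 2)
    rmse = math.sqrt(sum(errors) / len(errors))
    return 1.0 / (1.0 + rmse)
```

For instance, on the artificial series 0, 1, 2, … the model "previous value plus one" predicts perfectly and scores the maximum fitness of 1.0.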

Of course, there is little point in fitting single points with expressions if there is not any generalisation across the graph. After all, we already have a completely accurate set of expressions point-by-point: the original data set itself. On the other hand, if there are distinct regions of the problem space where different solutions make sense, being able to identify these regions and appropriate models for them would be very useful – unless the context of the whole problem domain is sufficiently restricted (which is likely for most of the "test" or "toy" problems these techniques are tried upon). Figure 6, below, shows the maximum coverage of the models for the four runs. In each case, early on a few models take over from the others in terms of the amount of problem space they occupy. Then, as they produce descendants with variations, these descendants compete with them for problem space and the coverage of any particular model evens out.

The move to really learning and using contexts 
Given the picture of "context" as an abstraction of the background inputs to a model, implicit in the transfer of knowledge from learning to application (that I argued for in Edmonds, 1999), there is still a step to take in order for it to be truly "context" that is being utilised – a collection of conditions of application becomes a context if it is sensible to abstract them as a coherent unit. Now it may be possible for a human to analyse the results of the evolutionary algorithms just described and identify contexts – these would correspond to niches in the problem domain that a set of models competes to exploit – but the above algorithm itself does not do this. Rather, the identification of context in a problem reveals something about the problem itself – it indicates that there are recognisable and distinct sub-cases where different sets of models/rules/solutions apply. In other words, there is a sufficient clustering or grouping of the conditions of application of relevant models that it makes sense to abstract from this set of domains to a context. This is what a biologist does when identifying the "niche" of an organism (or group of organisms) – this is not a highly detailed list of where this organism happened to live, but a relevant abstraction from this, taking into account the way the organism survives.

Of course, it is not necessarily the case that the model domains will be clustered so that it is at all sensible to abstract them into explicit contexts. Rather this is a contingent property of the problem and the strategy, resources and limitations of the learner. The existence of meaningful contexts arises out of the fact that there happen to be heuristics that can do this clustering; otherwise, even if all the relevant models are not universal in scope, context as such might not arise. Such an abstraction requires a further level of learning not present in the above algorithm. Extra levels of learning require resources and so must be justified – so why would one need to cluster and identify contexts rather than directly manipulate the model domains themselves? In the natural world organisms do not usually bother to explicitly identify where they live. A simple answer is that an abstracted context is a far more compact and elegant representation of the conditions under which a whole collection of models might hold, but this is not a complete answer, because the overhead in retaining the detail of the source models' domains may not be an advantage compared to the advantage of knowing exactly when a model applies. A partial answer to this question will be offered in the next section.
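The clustering step described above – abstracting a set of model domains into a smaller number of candidate contexts – can be sketched in its simplest one-dimensional form. This is purely illustrative; the function name and the interval-merging heuristic are assumptions, not part of the paper.

```python
def abstract_contexts(domains, gap=0.0):
    """Merge overlapping (or nearly adjacent, within `gap`) model domains,
    given as (lo, hi) intervals, into a smaller set of candidate contexts."""
    merged = []
    for lo, hi in sorted(domains):
        if merged and lo <= merged[-1][1] + gap:
            merged[-1][1] = max(merged[-1][1], hi)   # extend current cluster
        else:
            merged.append([lo, hi])                  # start a new cluster
    return [tuple(m) for m in merged]
```

If the merged intervals are few and well separated, abstracting them into named contexts is sensible; if they form one undifferentiated mass, the problem offers no meaningful contexts to learn – which is exactly the contingency noted above.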
