The Minimal Levels of Abstraction in the History of Modern Computing
2016-02-20 | Source: 51Due tutor group | Category: Essay sample
Turing and his contemporaries obtained the major results about computability in their research on logic and the foundations of mathematics. Yet even though Turing established the theoretical basis of modern computing, it was the advent of the Von Neumann machine that provided real human-computer interaction: computing became complex. The following sample essay elaborates.
Abstract
From the advent of general-purpose, Turing-complete machines, the relations between operators, programmers, users, and computers can be seen in terms of interconnected informational organisms (inforgs), analysed here with the method of levels of abstraction (LoAs) that arose within the Philosophy of Information (PI). In this paper, the epistemological levellism proposed by L. Floridi in the PI to deal with LoAs will be formalised in constructive terms using category theory, so that information itself is treated via structure-preserving functions instead of Cartesian products. The milestones in the history of modern computing are then analysed via constructive levellism to show how the growth of system complexity led to more and more information hiding.
Keywords: Epistemological levellism, Constructive levellism, Philosophy of Information, Computational interconnected informational organisms.
Introduction
In the recent debate among philosophers of computing, it emerges that there are several formal theoretical concepts of information (Sommaruga, 2009). This is hardly surprising, as "the marriage of physis and techne" (Floridi, 2010) became more and more complex after the computational turn, whose historical origin can be traced back to 1936, when Church, Turing, and Post obtained the major results about computability in their research on logic and the foundations of mathematics. However, even though the universal Turing machine established the theoretical basis of modern computing, it was the advent of the Von Neumann machine (VNM) that provided real human-computer interaction: the more complex computing machinery became, the more intricate became the relations between operators, programmers, and users.
During the last decades, physical layers of machines have been gradually replaced by abstract computing devices—even computers themselves can be fully made virtual nowadays, especially in the cloud computing paradigm. Philosophy of Information (PI) is the framework in which we analyse these complex relations, following Floridi’s major contributions—for recent comments and discussions, see Allo (2011) and Demir (2012). In our approach, we borrow Floridi’s concept of ‘informational organism’ (inforg), while the proposed method is a variant of the epistemological levellism, i.e., the philosophical view that investigates reality at different levels, where levels are defined by observation or interpretation.
This variant is based on a rigorous, multi-levelled definition of information, where the levels are identified through the notion of abstraction (Floridi, 2011b, in particular ch. 3; Floridi, 2010, 2008; Floridi and Sanders, 2004): the starting point considers numbers as symbols, thus providing an epistemological level of abstraction. Moreover, the underlying principle aliquid (stat) pro aliquo, something that stands for something else, can be carried upward to capture the complexity of modern computing systems. While this kind of non-reductionist approach seems to be the right one for dealing with general inforgs, there are some practical disadvantages in applying the method of levels of abstraction (LoAs) as such to computational inforgs.
A computational inforg is an organism composed of (at least) a human being and some kind of computing machine. Most often, the computing part is a VNM or some evolution of it: although computational models other than Von Neumann's exist, in this paper we limit ourselves to VNM-based computational inforgs, as this paradigm is by far the most important in the history of modern computing. Because computational inforgs are aliquid pro aliquo, the history of VNM-based machines shows that the hiding of computing technicalities from the human counterpart of the inforg and the growth in complexity of the inforg itself, on both the human and the machine sides, develop in pairs.
For example, when an end-user types one or more keywords into the web page of a search engine like Google, what he or she expects as a response is a list of web pages related to the entered keywords, certainly not an account of how the search engine was programmed to obtain that output. This holds even for a computer programmer: the programmer wants to obtain a sound answer when running the program, independently of the hardware and the operating system used to code his or her algorithms. So, in both cases, end-users' and programmers', some essential pieces of information are hidden, and this very fact is what makes computer systems interesting for human beings, a fact already noticed by Turing (1950) in dealing with Lady Lovelace's well-known objection.
So, a way to cope with the LoAs of computational inforgs in the case of hidden, implicit information has still to be found. The method is based on the notion of observables, i.e., interpreted typed variables together with the corresponding statements of what features are under consideration; a LoA is a finite but non-empty set of observables (Floridi, 2011b, 48). Floridi provides some examples of the application of the method, in particular: the study of some physical human attributes; Gassendi's objections to Descartes' Meditations; the game of chess; the semiotics of traffic lights in Rome and Oxford. None of these examples pertains to computational inforgs, which in our view is a clear limit; moreover, in that foundational paper Floridi declares that he "shall operate entirely within the boundaries of standard naïve set theory" (Floridi, 2011b, 50). In order to deal with computational inforgs and, in particular, with the phenomenon of information hiding, it is useful to put the variables and LoAs in a perspective that goes beyond set theory.
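Floridi's definition of a LoA as a finite, non-empty set of typed observables can be sketched in code. The following Python fragment is an illustrative sketch only; the names (Observable, LevelOfAbstraction) and the traffic-light observable are ours, not from the source:

```python
# A minimal sketch of a level of abstraction (LoA): a finite,
# non-empty set of interpreted typed variables (observables).
from dataclasses import dataclass

@dataclass(frozen=True)
class Observable:
    name: str        # the interpreted variable, e.g., "colour"
    type_name: str   # the type of its admissible values
    feature: str     # statement of the feature under consideration

class LevelOfAbstraction:
    def __init__(self, observables):
        if not observables:
            raise ValueError("a LoA must be a non-empty set of observables")
        self.observables = frozenset(observables)

# The traffic-light example mentioned in the text, at a driver's LoA:
driver_loa = LevelOfAbstraction({
    Observable("colour", "str", "which lamp is currently lit"),
})
```

The point of the sketch is only that a LoA fixes which features of a system are observed at all; everything outside its observables is, by construction, hidden.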
Abstraction within Constructive Levellism
Category theory (MacLane, 1998) can give a reasonably manageable and precise account of the growth in complexity of computational inforgs. The idea is to let information be a domain, represented as a mathematical category, so that abstraction becomes a map which preserves the inner structure of its domain, i.e., a functor in mathematical terms; thus, the LoAs are distinguished by what kind of information gets hidden from the human interacting with the machine. If we adopt only methods and tools belonging to constructive mathematics, implicit information can be explained alongside explicit information, without the risk of being lost, at least to some extent. If we adopt a strict constructive attitude, for example P. Martin-Löf's type theory as the reasoning framework, then we gain the ability to access implicit information on request; on the contrary, if we stick to a less demanding system, where choice principles are available, we may not have algorithmic access to hidden/implicit information, even if it is present in the system. However, even within less demanding systems, this does not imply that epistemological levellism cannot be used constructively. In fact, there is a recent attempt to apply it within the more general perspective of philosophical constructionism, which is a first step in this direction (Floridi, 2011a).
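The point about algorithmic access to implicit information can be illustrated by witness extraction: a constructive proof of an existential statement yields an explicit witness rather than a bare assertion of truth. A minimal sketch in Python (the function and its name are ours, for illustration only):

```python
# A constructive 'proof' that every integer n > 1 has a prime divisor:
# instead of returning a bare truth value, the proof-as-program returns
# an explicit witness, so the information stays algorithmically accessible.

def smallest_nontrivial_divisor(n: int) -> int:
    if n <= 1:
        raise ValueError("n must be greater than 1")
    d = 2
    while n % d != 0:
        d += 1
    return d                        # the witness; it is necessarily prime

assert smallest_nontrivial_divisor(15) == 3
assert smallest_nontrivial_divisor(7) == 7   # a prime witnesses itself
```

In a classical setting the same existential claim could be established without any such procedure; the constructive reading is what guarantees that hidden information remains recoverable on request.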
The advantage of a constructive approach is that implicit knowledge is never lost but only hidden, see Sambin and Valentini (1995) for the technical aspects, as in constructive mathematics the information content of statements is strictly preserved by proofs (Bridges and Richman, 1987). There is another crucial point in epistemological levellism, i.e., how LoAs are related: in other words, what is meant by abstraction. While preserving the possibility to work in a purely constructive environment, category theory gives us a general and rigorous definition of abstraction.
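To fix the idea of abstraction as a structure-preserving map, consider a toy example of our own devising, not taken from the paper: a 'concrete' level where strings are fully observable, and an 'abstract' level where only their lengths are. Mapping each string to its length, and each length-determined transformation to the induced map on lengths, preserves how transformations compose, so the map behaves functorially while hiding the strings' content:

```python
# Toy sketch: abstraction as a structure-preserving map between levels.
# Concrete level: strings. Abstract level: natural numbers (lengths).

def F_obj(s: str) -> int:
    """Object part of the abstraction: hide content, keep only length."""
    return len(s)

def F_mor(f):
    """Morphism part: the induced map on lengths.
    Well-defined only for length-determined transformations."""
    return lambda n: len(f("x" * n))

double = lambda s: s + s            # a transformation at the concrete level
abstract_double = F_mor(double)     # its image at the abstract level

# Structure preservation: abstracting then transforming agrees with
# transforming then abstracting.
assert F_obj(double("abc")) == abstract_double(F_obj("abc"))  # both are 6
```

The content of each string is hidden at the abstract level, yet nothing about how transformations compose is lost; this is the sense in which a functor hides information while preserving structure.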
Minimal Levels of Abstraction in Modern Computing
The modern era of computing was born in 1936, when Church, Post, and Turing showed the existence of universal machines, thus giving future general-purpose computers a solid theoretical foundation. Later, calculation became the processing of a program, represented as a number given in input, applied to some data, denoted by another number in input, eventually producing a number as the final result. The numbers are not numerical quantities anymore: they denote (sequences of) symbols forming the program, the input data, and the final result. Here, an epistemological LoA can be found, as the machine works on numbers, but the human beings using that machine think of those numbers as programs and data, since the inner nature of a universal machine is to be 'generic'.
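The idea that numbers stand for programs and data can be made concrete with a simple encoding, a Gödel-style numbering in miniature; the scheme below is ours, chosen purely for simplicity:

```python
# A number that stands for a program: encode the program text as an
# integer, and recover the text from the integer. The machine sees only
# the number; the human reads it as a program.

def encode(program: str) -> int:
    return int.from_bytes(program.encode("utf-8"), "big")

def decode(n: int) -> str:
    length = (n.bit_length() + 7) // 8
    return n.to_bytes(length, "big").decode("utf-8")

n = encode("print(2 + 2)")
assert decode(n) == "print(2 + 2)"   # the number denotes the program
```

The number n is a perfectly ordinary integer to the machine; only at the human side of the inforg is it read as a program, which is exactly the epistemological LoA described above.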
It is important to notice how the LoA is entirely on the human side of the inforg, while the inner structure of the LoA is reflected in the corresponding Level of Organisation (LoO) in the machinery part of the computational inforg, 'the system' (Floridi, 2011b, 69). A LoO is a structure in itself, or de re, which is allegedly captured and uncovered by its description, and objectively formulated in some neutral observation language (Floridi, 2011b, 69). This view, inherited from classic AI (e.g., according to Newell's and Simon's views), is acceptable within epistemological levellism when there is a perfect correspondence between LoOs and LoAs.
For instance, if an observer inspects the source code of a program written in some programming language, the observer will realise that many parts are intended to facilitate this interpretation, i.e., type declarations, function prototypes, etc., imposing an organisation on the program. Similarly, data too are organised, and their representation is highly structured. In other words, some LoOs have been hierarchically built inside the machine so that each LoA can be externalised by a corresponding LoO and, consequently, some information gets hidden. Hiding is only half of this process, which is really abstraction: the other half is the ability to recover the concrete representation from the data/program, knowing the details of the abstraction. This second part is fundamental to enable many activities related to programming, e.g., debugging. A consequence of the widespread use of LoOs is that inforgs involving modern computing start to become more complex: the general definition of information as data + meaning can be adequate only for the results of computation, but not for the whole process of information generation performed by the inforg, i.e., the human-machine system. In particular, programmability plays a crucial role, alongside computational efficiency and evolutionary adaptability (Conrad, 1995). The act of programming is the act of symbolically representing algorithms as numbers; hence, it is inherently an abstraction, where information gets partially hidden. The more inforgs grow, the more their pragmatic wills (from the humans) and needs (of the machines) grow, and the more LoAs (on the human side) and LoOs (on the machine side) should be found.
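The two halves of abstraction described above, hiding a concrete representation and being able to recover it when needed, can be sketched as follows; the class and its reveal hook are hypothetical, for illustration only:

```python
# Hiding plus recovery: clients work with the abstract interface, while
# a debug hook can recover the concrete representation when needed.

class AbstractDate:
    """Abstract view of a date; the concrete representation is hidden."""
    def __init__(self, days_since_epoch: int):
        self._days = days_since_epoch        # hidden concrete detail

    def plus_days(self, n: int) -> "AbstractDate":
        return AbstractDate(self._days + n)

    def reveal(self) -> int:
        """The second half of abstraction: recover the concrete
        representation, e.g., for debugging."""
        return self._days

d = AbstractDate(10).plus_days(5)
assert d.reveal() == 15
```

Ordinary use of the class never touches the underlying integer, yet the representation is not lost: the abstraction hides it while keeping it recoverable, which is what makes activities such as debugging possible.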
