Wednesday, January 14, 2009


The Software Engineering Body of Knowledge (SWEBOK) is a product of the Software Engineering Coordinating Committee sponsored by the IEEE Computer Society.
The software engineering body of knowledge is an all-inclusive term that describes the sum of knowledge within the profession of software engineering. Since it is usually not possible to put the full body of knowledge of even an emerging discipline such as software engineering into a single document, there is a need for a Guide to the Software Engineering Body of Knowledge. This Guide seeks to identify and describe the subset of the body of knowledge that is generally accepted, even though software engineers must be knowledgeable not only in software engineering but also, of course, in other related disciplines.

Software engineering has evolved steadily from its founding days in the 1940s to the present. Applications have evolved continuously, and the ongoing effort to improve technologies and practices seeks to raise the productivity of practitioners and the quality of the applications delivered to users.
Component-based software engineering (CBSE) (also known as Component-Based Development (CBD) or Software Componentry) is a branch of the software engineering discipline, with emphasis on decomposition of the engineered systems into functional or logical components with well-defined interfaces used for communication across the components. Components are considered to be a higher level of abstraction than objects and as such they do not share state and communicate by exchanging messages carrying data.
Software component
A software component is a system element offering a predefined service or event, and able to communicate with other components. Clemens Szyperski and David Messerschmitt give the following five criteria that a piece of software must meet to qualify as a component:
• Multiple-use
• Non-context-specific
• Composable with other components
• Encapsulated, i.e., its implementation cannot be investigated through its interfaces
• A unit of independent deployment and versioning
A simpler definition can be: A component is an object written to a specification. It does not matter what the specification is: COM, Enterprise JavaBeans, etc., as long as the object adheres to the specification. It is only by adhering to the specification that the object becomes a component and gains features such as reusability.
Software components often take the form of objects or collections of objects (from object-oriented programming), in some binary or textual form, adhering to some interface description language (IDL) so that the component may exist autonomously from other components in a computer.
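As an illustration of "an object written to a specification", the following minimal sketch (not tied to any particular component standard such as COM or Enterprise JavaBeans) defines a small interface in Python and a component that adheres to it; the names SpellChecker and BasicSpellChecker are hypothetical and chosen only for illustration.

from abc import ABC, abstractmethod

class SpellChecker(ABC):
    """The 'specification': a well-defined interface that clients program against."""

    @abstractmethod
    def check(self, text: str) -> list[str]:
        """Return the list of misspelled words found in text."""

class BasicSpellChecker(SpellChecker):
    """A component: reusable, encapsulated, accessed only through the interface."""

    def __init__(self, dictionary: set[str]):
        self._dictionary = {w.lower() for w in dictionary}  # internal state stays hidden

    def check(self, text: str) -> list[str]:
        return [w for w in text.split() if w.lower() not in self._dictionary]

# Any client that depends only on SpellChecker can reuse (or replace) the component.
checker: SpellChecker = BasicSpellChecker({"hello", "world"})
print(checker.check("helo world"))   # ['helo']

Because the client holds only a SpellChecker reference, a different implementation adhering to the same specification can be substituted without changing client code, which is the reuse property the definition above emphasizes.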
When a component is to be accessed or shared across execution contexts or network links, techniques such as serialization or marshalling are often employed to deliver the component to its destination.
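As a hedged, minimal sketch of this idea, Python's standard pickle module can marshal an object into bytes for transport and reconstruct it at the destination; real component frameworks use their own wire formats, and the Message class here is purely illustrative.

import pickle

class Message:
    """Illustrative data object exchanged between components."""
    def __init__(self, sender: str, payload: dict):
        self.sender = sender
        self.payload = payload

msg = Message("inventory-service", {"item": 42, "qty": 3})

wire_bytes = pickle.dumps(msg)        # serialize (marshal) for transport or storage
received = pickle.loads(wire_bytes)   # reconstruct the object at the destination

print(received.sender, received.payload)  # inventory-service {'item': 42, 'qty': 3}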
Reusability is an important characteristic of a high quality software component. A software component should be designed and implemented so that it can be reused in many different programs.
It takes significant effort and awareness to write a software component that is effectively reusable. The component needs:
• to be fully documented;
• more thorough testing;
• robust input validity checking;
• to pass back useful error messages as appropriate;
• to be built with an awareness that it will be put to unforeseen uses;
• a mechanism for compensating developers who invest the (substantial) efforts implied above.
In the 1960s, scientific subroutine libraries were built that were reusable in a broad array of engineering and scientific applications. Though these subroutine libraries reused well-defined algorithms in an effective manner, they had a limited domain of application. Today, modern reusable components encapsulate both data structures and the algorithms that are applied to the data structures.
Component-based software engineering builds on prior theories of software objects, software architectures, software frameworks and software design patterns, and on the extensive theory of object-oriented programming and object-oriented design. It claims that software components, like the idea of hardware components used for example in telecommunications, can ultimately be made interchangeable and reliable.
Software development approaches
Every software development methodology has more or less its own approach to software development. There is a set of more general approaches, which are developed into several specific methodologies. These approaches are:[1]
• Waterfall: linear framework type
• Prototyping: iterative framework type
• Incremental: combination of linear and iterative framework types
• Spiral: combination of linear and iterative framework types
• Rapid Application Development (RAD): iterative framework type
Waterfall model
The waterfall model is a sequential development process, in which development is seen as flowing steadily downwards (like a waterfall) through the phases of requirements analysis, design, implementation, testing (validation), integration, and maintenance. The first formal description of the waterfall model is often cited to be an article published by Winston W. Royce[3] in 1970 although Royce did not use the term "waterfall" in this article.
Basic principles of the waterfall model are:[1]
• Project is divided into sequential phases, with some overlap and splashback acceptable between phases.
• Emphasis is on planning, time schedules, target dates, budgets and implementation of an entire system at one time.
• Tight control is maintained over the life of the project through the use of extensive written documentation, as well as through formal reviews and approval/signoff by the user and information technology management, occurring at the end of most phases and before beginning the next phase.
Prototyping
Software prototyping is the activity of creating prototypes during software development, i.e., incomplete versions of the software program being developed.
Basic principles of prototyping are:[1]
• Not a standalone, complete development methodology, but rather an approach to handling selected portions of a larger, more traditional development methodology (i.e. Incremental, Spiral, or Rapid Application Development (RAD)).
• Attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.
• User is involved throughout the process, which increases the likelihood of user acceptance of the final implementation.
• Small-scale mock-ups of the system are developed following an iterative modification process until the prototype evolves to meet the users’ requirements.
• While most prototypes are developed with the expectation that they will be discarded, it is possible in some cases to evolve from prototype to working system.
• A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problem.
Incremental
Various methods are acceptable for combining linear and iterative systems development methodologies, with the primary objective of each being to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.
Basic principles of incremental development are:[1]
• A series of mini-Waterfalls are performed, where all phases of the Waterfall development model are completed for a small part of the system before proceeding to the next increment, or
• Overall requirements are defined before proceeding to evolutionary, mini-Waterfall development of individual increments of the system, or
• The initial software concept, requirements analysis, and design of architecture and system core are defined using the Waterfall approach, followed by iterative Prototyping, which culminates in installation of the final prototype (i.e., working system).
Spiral
The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up concepts. Basic principles:[1]
• Focus is on risk assessment and on minimizing project risk by breaking a project into smaller segments and providing more ease-of-change during the development process, as well as providing the opportunity to evaluate risks and weigh consideration of project continuation throughout the life cycle.
• "Each cycle involves a progression through the same sequence of steps, for each portion of the product and for each of its levels of elaboration, from an overall concept-of-operation document down to the coding of each individual program."[4]
• Each trip around the spiral traverses four basic quadrants: (1) determine objectives, alternatives, and constraints of the iteration; (2) evaluate alternatives and identify and resolve risks; (3) develop and verify deliverables from the iteration; and (4) plan the next iteration.[5]
• Begin each cycle with an identification of stakeholders and their win conditions, and end each cycle with review and commitment. [6]
Rapid Application Development (RAD)
Rapid application development (RAD) is a software development methodology, which involves iterative development and the construction of prototypes. Rapid application development is a term originally used to describe a software development process introduced by James Martin in 1991.
Basic principles:[1]
• Key objective is for fast development and delivery of a high quality system at a relatively low investment cost.
• Attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.
• Aims to produce high quality systems quickly, primarily through the use of iterative Prototyping (at any stage of development), active user involvement, and computerized development tools. These tools may include Graphical User Interface (GUI) builders, Computer Aided Software Engineering (CASE) tools, Database Management Systems (DBMS), fourth-generation programming languages, code generators, and object-oriented techniques.
• Key emphasis is on fulfilling the business need, while technological or engineering excellence is of lesser importance.
• Project control involves prioritizing development and defining delivery deadlines or “timeboxes”. If the project starts to slip, emphasis is on reducing requirements to fit the timebox, not on increasing the deadline.
• Generally includes Joint Application Development (JAD), where users are intensely involved in system design, either through consensus building in structured workshops, or through electronically facilitated interaction.
• Active user involvement is imperative.
• Iteratively produces production software, as opposed to a throwaway prototype.
• Produces documentation necessary to facilitate future development and maintenance.
• Standard systems analysis and design techniques can be fitted into this framework.
Other software development approaches
Other method concepts are:
• Object oriented development methodologies, such as Grady Booch's Object-oriented design (OOD), also known as object-oriented analysis and design (OOAD). The Booch model includes six diagrams: class, object, state transition, interaction, module, and process.[7]
• Top-down programming: evolved in the 1970s through the work of IBM researcher Harlan Mills (and Niklaus Wirth) on structured programming.
• Unified Process (UP) is an iterative software development methodology, based on UML. UP organizes the development of software into four phases, each consisting of one or more executable iterations of the software at that stage of development: Inception, Elaboration, Construction, and Transition. There are a number of tools and products available designed to facilitate UP implementation. One of the more popular versions of UP is the Rational Unified Process (RUP).

Modeling language
A modeling language is any artificial language that can be used to express information or knowledge or systems in a structure that is defined by a consistent set of rules. The rules are used for interpretation of the meaning of components in the structure. A modeling language can be graphical or textual.[14] Graphical modeling languages use diagram techniques with named symbols that represent concepts, lines that connect the symbols and represent relationships, and various other graphical annotations to represent constraints. Textual modeling languages typically use standardised keywords accompanied by parameters to make computer-interpretable expressions.
Examples of graphical modeling languages in the field of software engineering are:
• Business Process Modeling Notation (BPMN, and the XML form BPML) is an example of a Process Modeling language.
• EXPRESS and EXPRESS-G (ISO 10303-11) is an international standard general-purpose data modeling language.
• Extended Enterprise Modeling Language (EEML) is commonly used for business process modeling across a number of layers.
• Flowchart is a schematic representation of an algorithm or a stepwise process.
• Fundamental Modeling Concepts (FMC) modeling language for software-intensive systems.
• IDEF is a family of modeling languages, the most notable of which include IDEF0 for functional modeling, IDEF1X for information modeling, and IDEF5 for modeling ontologies.
• LePUS3 is an object-oriented visual Design Description Language and a formal specification language that is suitable primarily for modelling large object-oriented (Java, C++, C#) programs and design patterns.
• Specification and Description Language (SDL) is a specification language targeted at the unambiguous specification and description of the behaviour of reactive and distributed systems.
• Unified Modeling Language (UML) is a general-purpose modeling language that is an industry standard for specifying software-intensive systems. UML 2.0, the current version, supports thirteen different diagram techniques, and has widespread tool support.
Not all modeling languages are executable, and for those that are, the use of them doesn't necessarily mean that programmers are no longer required. On the contrary, executable modeling languages are intended to amplify the productivity of skilled programmers, so that they can address more challenging problems, such as parallel computing and distributed systems.
Programming paradigm
A programming paradigm is a fundamental style of computer programming, in contrast to a software engineering methodology, which is a style of solving specific software engineering problems. Paradigms differ in the concepts and abstractions used to represent the elements of a program (such as objects, functions, variables, constraints...) and the steps that compose a computation (assignment, evaluation, continuations, data flows...).
A programming language can support multiple paradigms. For example programs written in C++ or Object Pascal can be purely procedural, or purely object-oriented, or contain elements of both paradigms. Software designers and programmers decide how to use those paradigm elements. In object-oriented programming, programmers can think of a program as a collection of interacting objects, while in functional programming a program can be thought of as a sequence of stateless function evaluations. When programming computers or systems with many processors, process-oriented programming allows programmers to think about applications as sets of concurrent processes acting upon logically shared data structures.
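The contrast can be shown in a few lines. The sketch below writes the same task (summing squares) procedurally, in an object-oriented style, and functionally in Python, itself a multi-paradigm language; the example is illustrative only and is not drawn from the languages named above.

# Procedural: explicit steps mutate a local accumulator.
def sum_squares_procedural(numbers):
    total = 0
    for n in numbers:
        total += n * n
    return total

# Object-oriented: state and behaviour live together in an object.
class SquareSummer:
    def __init__(self, numbers):
        self.numbers = list(numbers)
    def total(self):
        return sum(n * n for n in self.numbers)

# Functional: a composition of stateless function evaluations.
def sum_squares_functional(numbers):
    return sum(map(lambda n: n * n, numbers))

data = [1, 2, 3, 4]
assert sum_squares_procedural(data) == SquareSummer(data).total() == sum_squares_functional(data) == 30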
Just as different groups in software engineering advocate different methodologies, different programming languages advocate different programming paradigms. Some languages are designed to support one particular paradigm (Smalltalk supports object-oriented programming, Haskell supports functional programming), while other programming languages support multiple paradigms (such as Object Pascal, C++, C#, Visual Basic, Common Lisp, Scheme, Python, Ruby and Oz).
Many programming paradigms are as well known for what techniques they forbid as for what they enable. For instance, pure functional programming disallows the use of side-effects; structured programming disallows the use of the goto statement. Partly for this reason, new paradigms are often regarded as doctrinaire or overly rigid by those accustomed to earlier styles.[citation needed] Avoiding certain techniques can make it easier to prove theorems about a program's correctness—or simply to understand its behavior.
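For instance, the difference between a side-effecting routine and a pure one can be made concrete in a short sketch; Python is used here only for illustration, and the function names are invented.

# Impure: depends on and mutates state outside the function (a side effect).
history = []
def add_and_log(x, y):
    result = x + y
    history.append(result)   # hidden interaction with the outside world
    return result

# Pure: output depends only on the inputs, so it is easy to reason about and test.
def add(x, y):
    return x + y

assert add(2, 3) == 5        # always true, regardless of any surrounding state
add_and_log(2, 3)
assert history == [5]        # correctness now also depends on program history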
A software framework is a re-usable design for a software system or subsystem. A software framework may include support programs, code libraries, a scripting language, or other software to help develop and glue together the different components of a software project. Various parts of the framework may be exposed through an API.
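One way a framework differs from a plain library is inversion of control: the framework owns the control flow and calls the developer's code. The following is a minimal, hypothetical sketch of that idea in Python; JobFramework and the registered steps are invented names, not any real product's API.

class JobFramework:
    """A tiny reusable 'framework': it owns the control flow and calls user code."""
    def __init__(self):
        self._steps = []

    def register(self, func):
        """Plug-in point: applications register the pieces the framework glues together."""
        self._steps.append(func)
        return func

    def run(self, data):
        for step in self._steps:   # the framework, not the application, drives execution
            data = step(data)
        return data

framework = JobFramework()

@framework.register
def clean(text):
    return text.strip().lower()

@framework.register
def count_words(text):
    return len(text.split())

print(framework.run("  Reusable designs glue components together  "))  # 5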
Software development process
A Software development process is a structure imposed on the development of a software product. Synonyms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process.
A growing number of software development organizations implement process methodologies. Many of them are in the defense industry, which in the U.S. requires a rating based on 'process models' to obtain contracts. The international standard for describing the method of selecting, implementing and monitoring the life cycle for software is ISO 12207.
A decades-long goal has been to find repeatable, predictable processes that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking.
Impact of globalization
Many students in the developed world have avoided degrees related to software engineering because of the fear of offshore outsourcing (importing software products or services from other countries) and of being displaced by foreign visa workers.[15] Although government statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected.[16][17] Often one is expected to start out as a computer programmer before being promoted to software engineer. Thus, the career path to software engineering may be rough, especially during recessions.
Some career counselors suggest a student also focus on "people skills" and business skills rather than purely technical skills, because such "soft skills" are allegedly more difficult to offshore.[18] The quasi-management aspects of software engineering appear to be what has kept it from being strongly affected by globalization.

The history of the Internet began with the ARPANET and connected mainframe computers on dedicated connections. The second stage involved adding desktop PCs which connected through telephone wires. The third stage was adding wireless connections to laptop computers. And currently the Internet is evolving to allow mobile phone Internet connectivity ubiquitously using cellular networks.
Prior to the widespread internetworking that led to the Internet, most communication networks were limited by their nature to only allow communications between the stations on the network, and the prevalent computer networking method was based on the central mainframe computer model. Several research programs began to explore and articulate principles of networking between separate physical networks. This led to the development of the packet switching model of digital networking. These research efforts included those of the laboratories of Donald Davies (NPL), Paul Baran (RAND Corporation), and Leonard Kleinrock at MIT and UCLA.
The research led to the development of several packet-switched networking solutions in the late 1960s and 1970s,[1] including ARPANET and the X.25 protocols. Additionally, public access and hobbyist networking systems grew in popularity, including unix-to-unix copy (UUCP) and FidoNet. They were, however, still disjointed, separate networks, served only by limited gateways between them. This led to the application of packet switching to develop a protocol for inter-networking, by which multiple different networks could be joined together into a super-framework of networks. By defining a simple common network system, the Internet protocol suite, the concept of the network could be separated from its physical implementation. The idea of a global inter-network that would be called 'the Internet' began to take hold, and it spread quickly as existing networks were converted to become compatible with it. It spread first across the advanced telecommunication networks of the western world, and then began to penetrate the rest of the world as it became the de facto international standard and global network. However, the disparity of growth led to a digital divide that is still a concern today.
Following commercialisation and the introduction of privately run Internet Service Providers in the 1980s, and the Internet's expansion into popular use in the 1990s, it has had a drastic impact on culture and commerce. This includes the rise of near-instant communication by e-mail, text-based discussion forums, and the World Wide Web. Investor speculation in the new markets provided by these innovations would also lead to the inflation and collapse of the dot-com bubble, a major market collapse. But despite this, the Internet continues to grow.
Before the Internet
In the 1950s and early 1960s, prior to the widespread inter-networking that led to the Internet, most communication networks were limited in that they only allowed communications between the stations on the network. Some networks had gateways or bridges between them, but these bridges were often limited or built specifically for a single use. One prevalent computer networking method was based on the central mainframe method, simply allowing its terminals to be connected via long leased lines. This method was used in the 1950s by Project RAND to support researchers such as Herbert Simon, in Pittsburgh, Pennsylvania, when collaborating across the continent with researchers in Santa Monica, California, on automated theorem proving and artificial intelligence.

The Internet is a global system of interconnected computer networks that interchange data by packet switching using the standardized Internet Protocol Suite (TCP/IP). It is a "network of networks" that consists of millions of private and public, academic, business, and government networks of local to global scope that are linked by copper wires, fiber-optic cables, wireless connections, and other technologies.
The Internet carries various information resources and services, such as electronic mail, online chat, file transfer and file sharing, online gaming, and the inter-linked hypertext documents and other resources of the World Wide Web (WWW).
The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global data communications system. It is a hardware and software infrastructure that provides connectivity between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs.
Creation
The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency, known as ARPA, in February 1958 to regain a technological lead.[2][3] ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO, and saw universal networking as a potential unifying human revolution.
Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.
At the IPTO, Licklider recruited Lawrence Roberts to head a project to implement a network, and Roberts based the technology on the work of Paul Baran,[4] who had written an exhaustive study for the U.S. Air Force that recommended packet switching (as opposed to circuit switching) to make a network highly robust and survivable. After much work, the first two nodes of what would become the ARPANET were interconnected between UCLA and SRI (later SRI International) in Menlo Park, California, on October 29, 1969. The ARPANET was one of the "eve" networks of today's Internet.
Following on from the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service. In the UK, this was referred to as the International Packet Switched Service (IPSS), in 1978. The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet switching standard was developed in the CCITT (now called ITU-T) around 1976.
X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net and Packet Satellite Net during the same time period. Vinton Cerf and Robert Kahn developed the first description of the TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term "Internet" to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP that was written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems.
The first TCP/IP-based wide-area network was operational by January 1, 1983 when all hosts on the ARPANET were switched over from the older NCP protocols. In 1985, the United States' National Science Foundation (NSF) commissioned the construction of the NSFNET, a university 56 kilobit/second network backbone using computers called "fuzzballs" by their inventor, David L. Mills. The following year, NSF sponsored the conversion to a higher-speed 1.5 megabit/second network. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF.
The opening of the network to commercial interests began in 1988. The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year and the link was made in the summer of 1989. Other commercial electronic e-mail services were soon connected, including OnTyme, Telemail and Compuserve. In that same year, three commercial Internet service providers (ISP) were created: UUNET, PSINET and CERFNET. Important, separate networks that offered gateways into, then later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet, Tymnet, Compuserve and JANET were interconnected with the growing Internet. Telenet (later called Sprintnet) was a large privately funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network was eventually interconnected with the others in the 1980s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over virtually any pre-existing communication networks allowed for a great ease of growth, although the rapid growth of the Internet was due primarily to the availability of commercial routers from companies such as Cisco Systems, Proteon and Juniper, the availability of commercial Ethernet equipment for local-area networking, and the widespread implementation of TCP/IP on the UNIX operating system.
Growth
Although the basic applications and guidelines that make the Internet possible had existed for almost a decade, the network did not gain a public face until the 1990s. On August 6, 1991, CERN, which straddles the border between France and Switzerland, publicized the new World Wide Web project. The Web was invented by English scientist Tim Berners-Lee in 1989.
An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace, and consequently, so had its use as a synecdoche in reference to the World Wide Web.
Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the 1990s, it was estimated that the Internet grew by 100% per year, with a brief period of explosive growth in 1996 and 1997.[5] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. [6]
University students' appreciation and contributions
New findings in the field of communications during the 1960s, 1970s and 1980s were quickly adopted by universities across North America.
Examples of early university Internet communities are Cleveland FreeNet, Blacksburg Electronic Village and NSTN in Nova Scotia.[7] Students took up the opportunity of free communications and saw this new phenomenon as a tool of liberation. Personal computers and the Internet would free them from corporations and governments (Nelson, Jennings, Stallman).
Graduate students played a huge part in the creation of ARPANET. In the 1960s, the network working group, which did most of the design for ARPANET's protocols, was composed mainly of graduate students.
Internet protocols
For more details on this topic, see Internet Protocol Suite.
The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet.
The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF).[9] The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. Resulting discussions and final standards are published in Request for Comments (RFCs), freely available on the IETF web site.
The principal methods of networking that enable the Internet are contained in a series of RFCs that constitute the Internet Standards. These standards describe a system known as the Internet Protocol Suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the space (Application Layer) of the software application, e.g., a web browser application, and just below it is the Transport Layer, which connects applications on different hosts via the network (e.g., the client-server model). The underlying network consists of two layers: the Internet Layer, which enables computers to connect to one another via intermediate (transit) networks and is thus the layer that establishes internetworking and the Internet, and, at the bottom, a software layer that provides connectivity between hosts on the same local link (therefore called the Link Layer), e.g., a local area network (LAN) or a dial-up connection. This model is also known as the TCP/IP model of networking. While other models have been developed, such as the Open Systems Interconnection (OSI) model, they are not compatible in the details of description, nor in implementation.
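The layering is visible to an application programmer. In the hedged sketch below, a Python program composes an Application Layer message (an HTTP request), hands it to the Transport Layer through a TCP socket, and leaves the Internet and Link Layers to the operating system's network stack; example.com is used only as a placeholder host.

import socket

HOST = "example.com"   # placeholder host used for illustration

# Application Layer: the message a web browser would construct.
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode("ascii")

# Transport Layer: a TCP connection between two applications on different hosts.
# The Internet Layer (IP routing) and the Link Layer (e.g. Ethernet) are handled
# below this API by the operating system.
with socket.create_connection((HOST, 80), timeout=10) as sock:
    sock.sendall(request)
    reply = sock.recv(200)

print(reply.decode("ascii", errors="replace").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"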
The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems for computers on the Internet and facilitates the internetworking of networks. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to about 4.3 billion (4.3×10⁹) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion. A new protocol version, IPv6, was developed which provides vastly larger addressing capabilities and more efficient routing of data traffic. IPv6 is currently in the commercial deployment phase around the world.
IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet not accessible with IPv4 software. This means software upgrades are necessary for every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems are already converted to operate with both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development.
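The difference in address size can be checked directly with Python's standard ipaddress module; the addresses below are taken from the reserved documentation ranges and are used only as examples.

import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")       # IPv4: 32-bit address (documentation range)
v6 = ipaddress.ip_address("2001:db8::1")     # IPv6: 128-bit address (documentation range)

print(v4.version, v4.max_prefixlen)          # 4 32  -> about 4.3 billion possible addresses
print(v6.version, v6.max_prefixlen)          # 6 128 -> about 3.4e38 possible addresses

# The two address families are distinct types, reflecting that IPv4 and IPv6
# are not interoperable on the wire.
print(type(v4).__name__, type(v6).__name__)  # IPv4Address IPv6Address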
Internet structure
There have been many analyses of the Internet and its structure. For example, it has been determined that the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks.
Similar to the way the commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as the following:
• GEANT
• GLORIAD
• The Internet2 Network (formerly known as the Abilene Network)
• JANET (the UK's national research and education network)
These in turn are built around relatively smaller networks. See also the list of academic computer network organizations.
In computer network diagrams, the Internet is often represented by a cloud symbol, into and out of which network communications can pass.
Internet access
Common methods of home access include dial-up, landline broadband (over coaxial cable, fiber optic or copper wires), Wi-Fi, satellite and 3G technology cell phones.
Public places to use the Internet include libraries and Internet cafes, where computers with Internet connections are available. There are also Internet access points in many public places such as airport halls and coffee shops, in some cases just for brief use while standing. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels now also have public terminals, though these are usually fee-based. These terminals are widely accessed for various usage like ticket booking, bank deposit, online payment etc. Wi-Fi provides wireless access to computer networks, and therefore can do so to the Internet itself. Hotspots providing such access include Wi-Fi cafes, where would-be users need to bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A hotspot need not be limited to a confined location. A whole campus or park, or even an entire city can be enabled. Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services covering large city areas are in place in London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. The Internet can then be accessed from such places as a park bench.[11]
Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular phone networks, and fixed wireless services.
High-end mobile phones such as smartphones generally come with Internet access through the phone network. Web browsers such as Opera are available on these advanced handsets, which can also run a wide variety of other Internet software. More mobile phones have Internet access than PCs, though this is not as widely used. An Internet access provider and protocol matrix differentiates the methods used to get online.
Social impact
The Internet has made possible entirely new forms of social interaction, activities and organizing, thanks to its basic features such as widespread usability and access.
Social networking websites such as Facebook and MySpace have created a new form of socialization and interaction. Users of these sites are able to add a wide variety of items to their personal pages, to indicate common interests, and to connect with others. It is also possible to find a large circle of existing acquaintances, especially if a site allows users to utilize their real names, and to allow communication among large existing groups of people.
Sites like meetup.com exist to allow wider announcement of groups which may exist mainly for face-to-face meetings, but which may have a variety of minor interactions over their group's site at meetup.org, or other similar sites.

Computer hardware


Motherboard
• Motherboard - It is the "body" or mainframe of the computer, through which all other components interface.
• Central processing unit (CPU) - Performs most of the calculations which enable a computer to function, sometimes referred to as the "backbone or brain" of the computer.
o Computer fan - Used to lower the temperature of the computer; a fan is almost always attached to the CPU, and to the back of the case, also known as the 'muffin fan'.
• Firmware is system specific read only memory.
• Internal Buses - Connections to various internal components.
o Current
▪ PCI (being phased out for graphic cards but still used for other uses)
▪ PCI Express (PCI-E)
▪ USB
▪ FireWire
▪ HyperTransport
▪ Intel QuickPath (expected in 2008)
o Obsolete
▪ AGP (obsolete graphic card bus)
▪ ISA (obsolete in PCs, but still used in industrial computers)
▪ VLB VESA Local Bus (outdated)
• External Bus Controllers - used to connect to external peripherals, such as printers and input devices. These ports may also be based upon expansion cards, attached to the internal buses.
Power supply

Supplies power to the other components of the computer and is housed in its own enclosure together with (usually) a cooling fan. The most common form factor standards for PCs today are ATX and Micro ATX.
Video display controller
Main article: Graphics card
Produces the output for the visual display unit. This will either be built into the motherboard or attached in its own separate slot (PCI, PCI-E, PCI-E 2.0, or AGP), in the form of a graphics card.
Removable media devices
• CD (compact disc) - the most common type of removable media, inexpensive but has a short life-span.
o CD-ROM Drive - a device used for reading data from a CD.
o CD Writer - a device used for both reading and writing data to and from a CD.
• DVD (digital versatile disc) - a popular type of removable media that is the same dimensions as a CD but stores up to 6 times as much information. It is the most common way of transferring digital video.
o DVD-ROM Drive - a device used for reading data from a DVD.
o DVD Writer - a device used for both reading and writing data to and from a DVD.
o DVD-RAM Drive - a device used for rapid writing and reading of data from a special type of DVD.
• Blu-ray - a high-density optical disc format for the storage of digital information, including high-definition video.
o BD-ROM Drive - a device used for reading data from a Blu-ray disc.
o BD Writer - a device used for both reading and writing data to and from a Blu-ray disc.
• HD DVD - a high-density optical disc format and successor to the standard DVD. It was a discontinued competitor to the Blu-ray format.
• Floppy disk - an outdated storage device consisting of a thin disk of a flexible magnetic storage medium. Used today mainly for loading RAID drivers.
• Zip drive - an outdated medium-capacity removable disk storage system, first introduced by Iomega in 1994.
• USB flash drive - a flash memory data storage device integrated with a USB interface, typically small, lightweight, removable, and rewritable.
• Tape drive - a device that reads and writes data on a magnetic tape, used for long term storage.
Internal storage
Hardware that keeps data inside the computer for later use and remains persistent even when the computer has no power.
• Hard disk - for medium-term storage of data.
• Solid-state drive - a device similar to a hard disk, but containing no moving parts; it stores data in solid-state (flash) memory.
• RAID array controller - a device to manage several hard disks, to achieve performance or reliability improvement in what is called a RAID array.
Sound card
Enables the computer to output sound to audio devices, as well as accept input from a microphone. Most modern computers have sound cards built-in to the motherboard, though it is common for a user to install a separate sound card as an upgrade. Most sound cards, either built-in or added, have surround sound capabilities.
Networking
Connects the computer to the Internet and/or other computers.
• Modem - for dial-up connections or sending digital faxes. (outdated)
• Network card - for DSL/cable Internet, and/or connecting to other computers, using an Ethernet cable.
• Direct Cable Connection - Use of a null modem, connecting two computers together using their serial ports or a Laplink Cable, connecting two computers together with their parallel ports.
Other peripherals
In addition, hardware devices can include external components of a computer system. The following are either standard or very common.


Wheel mouse
Includes various input and output devices, usually external to the computer system.
Input
• Text input devices
o Keyboard - a device to input text and characters by depressing buttons (referred to as keys), similar to a typewriter. The most common English-language key layout is the QWERTY layout.
• Pointing devices
o Mouse - a pointing device that detects two dimensional motion relative to its supporting surface.
o Optical Mouse - a newer technology that uses lasers or, more commonly, LEDs to track the surface under the mouse and determine its motion, which is translated into pointer movements on the screen.
o Trackball - a pointing device consisting of an exposed protruding ball housed in a socket that detects rotation about two axes.
• Gaming devices
o Joystick - a general control device that consists of a handheld stick that pivots around one end, to detect angles in two or three dimensions.
o Gamepad - a general handheld game controller that relies on the digits (especially thumbs) to provide input.
o Game controller - a specific type of controller specialized for certain gaming purposes.
• Image, Video input devices
o Image scanner - a device that provides input by analyzing images, printed text, handwriting, or an object.
o Webcam - a low resolution video camera used to provide visual input that can be easily transferred over the internet.
• Audio input devices
o Microphone - an acoustic sensor that provides input by converting sound into electrical signals

Software

Software is a general term used to describe a collection of computer programs, procedures and documentation that perform some tasks on a computer system.
Terminology
The term includes:
• Application software such as word processors, which perform productive tasks for users.
• Multimedia applications for playing multimedia.
• Firmware, which is software programmed resident to electrically programmable memory devices on board mainboards or other types of integrated hardware carriers.
• Middleware, which controls and co-ordinates distributed systems.
• System software such as operating systems, which interface with hardware to provide the necessary services for application software.
• Testware, which is an umbrella term for all utilities and application software that serve in combination for testing a software package but do not necessarily contribute to operational purposes. As such, testware is not a standing configuration but merely a working environment for application software or subsets thereof.
Software also includes websites, programs, video games, etc. that are coded in programming languages like C, C++, etc.
"Software" is sometimes used in a broader context to mean anything which is not hardware but which is used with hardware, such as film, tapes and records. Updated Terminology
The term is now also used to include software running on non-PC devices such as smartphones and Palm OS handhelds, due to the proliferation of the mobile industry based on the Symbian and Windows platforms.
"Software" is sometimes used in a broader context to mean anything which is not hardware but which is used with hardware, such as film, tapes and records.
Computer software, or just software is a general term used to describe a collection of computer programs, procedures and documentation that perform some tasks on a computer system.[1]


A screenshot of the OpenOffice.org Writer desktop software
The term includes:
• Application software such as word processors which perform productive tasks for users.
• Firmware which is software programmed resident to electrically programmable memory devices on board mainboards or other types of integrated hardware carriers.
• Middleware which controls and co-ordinates distributed systems.
Software includes websites, programs, video games etc. that are coded by programming languages like C, C++, etc.
• System software such as operating systems, which interface with hardware to provide the necessary services for application software.
• Testware which is an umbrella term or container term for all utilities and application software that serve in combination for testing a software package but not necessarily may optionally contribute to operational purposes. As such, testware is not a standing configuration but merely a working environment for application software or subsets thereof.
"Software" is sometimes used in a broader context to mean anything which is not hardware but which is used with hardware, such as film, tapes and records.[2]
Computer software is often regarded as anything but hardware, meaning that the "hard" parts are the tangible, physical parts of a computer while the "soft" part is the intangible set of objects inside it. Software encompasses an extremely wide array of products and technologies developed using different techniques such as programming languages and scripting languages. The types of software include web pages developed with technologies like HTML, PHP, Perl, JSP, ASP.NET and XML, and desktop applications like Microsoft Word and OpenOffice developed with technologies like C, C++, Java, C#, etc. Software usually runs on an underlying operating system (which is itself software), such as Microsoft Windows, Linux (running GNOME or KDE) or Sun Solaris. Software also includes video games, such as Super Mario and Grand Theft Auto, for personal computers or video game consoles.
Software also usually runs on a software platform, which can either be provided by the operating system or by operating-system-independent platforms such as Java and .NET. Software written for one platform is usually unable to run on other platforms, so that, for instance, Microsoft Windows software will not run on Mac OS because of the differences between the platforms and their standards. Such applications can be made to work across platforms through porting, interpreters, or rewriting the source code for the specific platform.
Relationship to computer hardware
Computer software is so called to distinguish it from computer hardware, which encompasses the physical interconnections and devices required to store and execute (or run) the software. At the lowest level, software consists of a machine language specific to an individual processor. A machine language consists of groups of binary values signifying processor instructions which change the state of the computer from its preceding state. Software is an ordered sequence of instructions for changing the state of the computer hardware in a particular sequence. It is usually written in high-level programming languages that are easier and more efficient for humans to use (closer to natural language) than machine language. High-level languages are compiled or interpreted into machine language object code. Software may also be written in an assembly language, essentially, a mnemonic representation of a machine language using a natural language alphabet. Assembly language must be assembled into object code via an assembler.
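The idea can be observed with Python's built-in dis module. Python compiles source to bytecode for its virtual machine rather than to native machine language, but the output is analogous: a sequence of low-level instructions derived from a higher-level expression. The exact opcodes shown vary between Python versions.

import dis

def scale(x):
    return 3 * x + 1

# Show the low-level instruction sequence produced from the high-level source.
dis.dis(scale)
# Typical output (details vary by Python version):
#   LOAD_CONST   3
#   LOAD_FAST    x
#   BINARY_MULTIPLY   (or BINARY_OP *)
#   LOAD_CONST   1
#   BINARY_ADD        (or BINARY_OP +)
#   RETURN_VALUE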
The term "software" was first used in this sense by John W. Tukey in 1958.[3] In computer science and software engineering, computer software is all computer programs. The theory that is the basis for most modern software was first proposed by Alan Turing in his 1935 essay Computable numbers with an application to the Entscheidungsproblem.[4]
Types of software


A layer structure showing where Operating System is located on generally used software systems on desktops


An example text editor, Vim


Industrial Automation with robots engaged in vehicle assembly
Practical computer systems divide software systems into three major classes: system software, programming software and application software, although the distinction is arbitrary, and often blurred.
System software
System software helps run the computer hardware and computer system. It includes:
• device drivers,
• diagnostic tools,
• operating systems,
• servers,
• utilities,
• windowing systems,
The purpose of systems software is to insulate the applications programmer as much as possible from the details of the particular computer complex being used, especially memory and other hardware features, and such accessory devices as communications, printers, readers, displays, keyboards, etc.
Programming software
Programming software usually provides tools to assist a programmer in writing computer programs and software in different programming languages in a more convenient way. The tools include:
• compilers,
• debuggers,
• interpreters,
• linkers,
• text editors,
An Integrated development environment (IDE) merges those tools into a software bundle, and a programmer may not need to type multiple commands for compiling, interpreting, debugging, tracing, etc., because the IDE usually has an advanced graphical user interface, or GUI.
Application software
Application software allows end users to accomplish one or more specific (non-computer related) tasks. Typical applications include:
• industrial automation,
• business software,
• computer games,
• databases,
• educational software,
• medical software,
Businesses are probably the biggest users of application software, but almost every field of human activity now uses some form of application software.

Computer architecture




Computer architecture in computer engineering is the conceptual design and fundamental operational structure of a computer system. It is a blueprint and functional description of requirements and design implementations for the various parts of a computer, focusing largely on the way by which the central processing unit (CPU) performs internally and accesses addresses in memory.
It may also be defined as the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals.
The term “architecture” in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members in 1959 of the Machine Organization department in IBM’s main research center. Johnson had occasion to write a proprietary research communication about Stretch, an IBM-developed supercomputer for Los Alamos Scientific Laboratory; in attempting to characterize his chosen level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements aimed at the level of “system architecture” – a term that seemed more useful than “machine organization.” Subsequently Brooks, one of the Stretch designers, started Chapter 2 of a book (Planning a Computer System: Project Stretch, ed. W. Buchholz, 1962) by writing, “Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints.” Brooks went on to play a major role in the development of the IBM System/360 line of computers, where “architecture” gained currency as a noun with the definition “what the user needs to know.” Later the computer world would employ the term in many less-explicit ways.
The first mention of the term architecture in the refereed computer literature is in a 1964 article describing the IBM System/360.[3] The article defines architecture as the set of “attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logical design, and the physical implementation.” In the definition, the programmer's perspective of the computer’s functional behavior is key. The conceptual structure part of an architecture description makes the functional behavior comprehensible and extrapolatable to a range of use cases. Only later on did ‘internals’ such as “the way by which the CPU performs internally and accesses addresses in memory,” mentioned above, slip into the definition of computer architecture.
Machine language
Machine language is built up from discrete statements or instructions. Depending on the processing architecture, a given instruction may specify:
• Particular registers for arithmetic, addressing, or control functions
• Particular memory locations or offsets
• Particular addressing modes used to interpret the operands
More complex operations are built up by combining these simple instructions, which (in a von Neumann machine) are executed sequentially, or as otherwise directed by control flow instructions.
Some operations available in most instruction sets (made concrete by the interpreter sketch after this list) include:
• moving
o set a register (a temporary "scratchpad" location in the CPU itself) to a fixed constant value
o move data from a memory location to a register, or vice versa. This is done to obtain the data to perform a computation on it later, or to store the result of a computation.
o read and write data from hardware devices
• computing
o add, subtract, multiply, or divide the values of two registers, placing the result in a register
o perform bitwise operations, taking the conjunction/disjunction (and/or) of corresponding bits in a pair of registers, or the negation (not) of each bit in a register
o compare two values in registers (for example, to see if one is less, or if they are equal)
• affecting program flow
o jump to another location in the program and execute instructions there
o jump to another location if a certain condition holds
o jump to another location, but save the location of the next instruction as a point to return to (a call)
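To make these categories concrete, the sketch below defines a toy instruction set with one instruction from each group (moving, computing, and program flow) and executes it with a small Python interpreter, the same technique used when an instruction set is emulated in software. The instruction names, the register file, and the list-based encoding are invented purely for illustration and do not correspond to any real ISA.

# A toy machine: a handful of registers and a program counter.
def run(program):
    regs = {"r0": 0, "r1": 0, "r2": 0, "r3": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        if op == "set":                   # moving: load a constant into a register
            regs[args[0]] = args[1]
        elif op == "add":                 # computing: dst = a + b
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "jlt":                 # program flow: jump if a < b
            if regs[args[0]] < regs[args[1]]:
                pc = args[2]
    return regs

# Sum the integers 0..4 into r0, using r1 as the loop counter.
program = [
    ("set", "r0", 0),            # 0: accumulator = 0
    ("set", "r1", 0),            # 1: counter = 0
    ("set", "r2", 5),            # 2: limit = 5
    ("set", "r3", 1),            # 3: constant one
    ("add", "r0", "r0", "r1"),   # 4: accumulator += counter
    ("add", "r1", "r1", "r3"),   # 5: counter += 1
    ("jlt", "r1", "r2", 4),      # 6: loop back to 4 while counter < limit
]

print(run(program)["r0"])        # 10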
Some computers include "complex" instructions in their instruction set. A single "complex" instruction does something that may take many instructions on other computers. Such instructions are typified by instructions that take multiple steps, control multiple functional units, or otherwise appear on a larger scale than the bulk of simple instructions implemented by the given processor. Some examples of "complex" instructions include:
• saving many registers on the stack at once
• moving large blocks of memory
• complex and/or floating-point arithmetic (sine, cosine, square root, etc.)
• performing an atomic test-and-set instruction
• instructions that combine an ALU operation with an operand from memory rather than a register
A complex instruction type that has become particularly popular recently is the SIMD (Single Instruction, Multiple Data) or vector instruction, an operation that performs the same arithmetic operation on multiple pieces of data at the same time. SIMD instructions can manipulate large vectors and matrices in minimal time, and they allow easy parallelization of algorithms commonly involved in sound, image, and video processing. Various SIMD implementations have been brought to market under trade names such as MMX, 3DNow! and AltiVec.
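The programming model can be sketched with NumPy, assuming it is installed; NumPy's vectorized operations commonly map onto SIMD instructions such as those named above, and in any case they express the idea of applying one operation to many data elements at once.

import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
b = np.array([10.0, 20.0, 30.0, 40.0], dtype=np.float32)

# One vector operation instead of a scalar loop: the same add is applied
# to every pair of elements, which is the SIMD execution model.
c = a + b
print(c)            # [11. 22. 33. 44.]

# Equivalent scalar (one-datum-at-a-time) loop for comparison:
d = np.empty_like(a)
for i in range(len(a)):
    d[i] = a[i] + b[i]
assert np.array_equal(c, d)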
The design of instruction sets is a complex issue. There have been two broad stages in microprocessor history. The first used CISC (complex instruction set computer) designs, in which many instructions were implemented. In the 1970s, research at places like IBM found that many of these instructions were rarely used and could be eliminated. The result was RISC (reduced instruction set computer), an architecture which uses a smaller set of instructions. A simpler instruction set may offer the potential for higher speeds, reduced processor size, and reduced power consumption; a more complex one may optimize common operations, improve memory/cache efficiency, or simplify programming.
Instruction set implementation
Any given instruction set can be implemented in a variety of ways. All ways of implementing an instruction set give the same programming model, and they all are able to run the same binary executables. The various ways of implementing an instruction set give different tradeoffs between cost, performance, power consumption, size, etc.
When designing microarchitectures, engineers use blocks of "hard-wired" electronic circuitry (often designed separately) such as adders, multiplexers, counters, registers, ALUs etc. Some kind of register transfer language is then often used to describe the decoding and sequencing of each instruction of an ISA using this physical microarchitecture. There are two basic ways to build a control unit to implement this description (although many designs use middle ways or compromises):
1. Early computer designs and some of the simpler RISC computers "hard-wired" the complete instruction set decoding and sequencing (just like the rest of the microarchitecture).
2. Other designs employ microcode routines and/or tables to do this -- typically as on-chip ROMs and/or PLAs (although separate RAMs have been used historically).
There are also some new CPU designs which compile the instruction set into a writable RAM or flash memory inside the CPU (such as the Rekursiv processor and the Imsys Cjip)[1], or an FPGA (reconfigurable computing). The Western Digital MCP-1600 is an older example, using a dedicated, separate ROM for microcode.
An ISA can also be emulated in software by an interpreter. Naturally, due to the interpretation overhead, this is slower than directly running programs on the emulated hardware, unless the hardware running the emulator is an order of magnitude faster. Today, it is common practice for vendors of new ISAs or microarchitectures to make software emulators available to software developers before the hardware implementation is ready.
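The sketch below shows the shape of such a software interpreter in C for a tiny, made-up accumulator ISA. The opcode names (OP_LOADI, OP_ADDI, OP_PRINT, OP_HALT) are hypothetical and chosen only for illustration; a real emulator would also have to model the registers, memory, flags, and I/O of the target machine.

    /* A toy software interpreter for a hypothetical accumulator ISA
       (illustrative only; not any real vendor's instruction set). */
    #include <stdio.h>

    enum { OP_HALT, OP_LOADI, OP_ADDI, OP_PRINT };   /* made-up opcodes */

    void run(const int *program) {
        int acc = 0;                 /* the emulated accumulator register */
        int pc  = 0;                 /* the emulated program counter      */
        for (;;) {
            switch (program[pc++]) {
            case OP_LOADI: acc  = program[pc++]; break;  /* load immediate */
            case OP_ADDI:  acc += program[pc++]; break;  /* add immediate  */
            case OP_PRINT: printf("%d\n", acc);  break;  /* output device  */
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        int program[] = { OP_LOADI, 5, OP_ADDI, 7, OP_PRINT, OP_HALT };
        run(program);                /* prints 12 */
        return 0;
    }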
Often the details of the implementation have a strong influence on the particular instructions selected for the instruction set. For example, many implementations of the instruction pipeline only allow a single memory load or memory store per instruction, leading to a load-store architecture (RISC). For another example, some early ways of implementing the instruction pipeline led to a delay slot.
The demands of high-speed digital signal processing have pushed in the opposite direction -- forcing instructions to be implemented in a particular way. For example, in order to perform digital filtering fast enough, the MAC instruction in a typical digital signal processor (DSP) must be implemented using a kind of Harvard architecture that can fetch an instruction and two data words simultaneously, and it requires a single-cycle multiply-accumulate multiplier.
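For reference, the inner loop of a finite impulse response (FIR) filter below is exactly the multiply-accumulate pattern that a DSP's MAC instruction and dual data buses are designed to execute at one tap per cycle. The C code itself is a generic, portable sketch, and the function name fir is just illustrative.

    /* A plain C FIR filter inner loop: each iteration is one
       multiply-accumulate, the operation a DSP's MAC instruction
       performs in a single cycle (generic, portable sketch). */
    float fir(const float *samples, const float *coeffs, int taps) {
        float acc = 0.0f;
        for (int i = 0; i < taps; i++) {
            acc += samples[i] * coeffs[i];   /* multiply-accumulate */
        }
        return acc;
    }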
Instruction set design
Some instruction set designers reserve one or more opcodes for some kind of software interrupt. For example, the MOS Technology 6502 uses 00H, the Zilog Z80 uses the eight codes C7, CF, D7, DF, E7, EF, F7, FFH[1], while the Motorola 68000 uses codes in the range A000..AFFFH.
Fast virtual machines are much easier to implement if an instruction set meets the Popek and Goldberg virtualization requirements.
The NOP slide used in Immunity Aware Programming is much easier to implement if the "unprogrammed" state of the memory is interpreted as a NOP.
On systems with multiple processors, non-blocking synchronization algorithms are much easier to implement if the instruction set includes support for something like "fetch-and-increment", "load linked/store conditional" (LL/SC), or "atomic compare and swap".
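For example, a lock-free counter increment can be built directly on an atomic compare-and-swap. The C11 sketch below assumes <stdatomic.h> is available; the helper name increment is hypothetical.

    /* A lock-free increment built on atomic compare-and-swap,
       using C11 <stdatomic.h> (minimal sketch). */
    #include <stdatomic.h>

    void increment(atomic_int *counter) {
        int old = atomic_load(counter);
        /* retry until no other thread changed the value in between;
           on failure, 'old' is reloaded with the current value */
        while (!atomic_compare_exchange_weak(counter, &old, old + 1)) {
            /* loop and try again */
        }
    }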
Code density
In early computers, program memory was expensive, so minimizing the size of a program to make sure it would fit in the limited memory was often central. Thus the combined size of all the instructions needed to perform a particular task, the code density, was an important characteristic of any instruction set. Computers with high code density often also had (and still have) complex instructions for procedure entry, parameterized returns, loops, etc. (hence the retroactive name Complex Instruction Set Computers, CISC). However, more typical, or frequent, "CISC" instructions merely combine a basic ALU operation, such as "add", with the access of one or more operands in memory (using addressing modes such as direct, indirect, or indexed). Certain architectures may allow two or three operands (including the result) directly in memory, or may be able to perform functions such as automatic pointer increment. Software-implemented instruction sets may have even more complex and powerful instructions.
Reduced instruction-set computers (RISC) were first widely implemented during a period of rapidly growing memory subsystems; they sacrifice code density in order to simplify implementation circuitry and thereby try to increase performance via higher clock frequencies and more registers. RISC instructions typically perform only a single operation, such as an "add" of registers or a "load" from a memory location into a register; they also normally use a fixed instruction width, whereas a typical CISC instruction set has many instructions shorter than this fixed length. Fixed-width instructions are less complicated to handle than variable-width instructions for several reasons (not having to check whether an instruction straddles a cache line or virtual memory page boundary[2], for instance), and are therefore somewhat easier to optimize for speed. However, as RISC computers normally require more and often longer instructions to implement a given task, they inherently make less optimal use of bus bandwidth and cache memories.
Minimal instruction set computers (MISC) are a form of stack machine, where there are few separate instructions (16-64), so that multiple instructions can be fit into a single machine word. These types of cores often take little silicon to implement, so they can be easily realized in an FPGA or in a multi-core form. Code density is similar to RISC; the increased instruction density is offset by the need for more of the primitive instructions to perform a task.
There has been research into executable compression as a mechanism for improving code density. The mathematics of Kolmogorov complexity describes the challenges and limits of this.
Number of operands
Instruction sets may be categorized by the number of operands (registers, memory locations, or immediate values) in their most complex instructions. This does not refer to the arity of the operators, but to the number of operands explicitly specified as part of the instruction. Thus, implicit operands stored in a special-purpose register or on top of the stack are not counted.
(In the examples that follow, a, b, and c refer to memory addresses, and reg1 and so on refer to machine registers.)
• 0-operand ("zero address machines") -- these are also called stack machines, and all operations take place using the top one or two positions on the stack (see the toy stack-machine sketch after this list). Add two numbers in five instructions: #a, load, #b, load, add, #c, store;
• 1-operand ("one address machines") -- often called accumulator machines -- include most early computers. Each instruction performs its operation using a single operand specifier. The single accumulator register is implicit -- source, destination, or often both -- in almost every instruction: load a, add b, store c;
• 2-operand -- many RISC machines fall into this category, as do many CISC machines. For a RISC machine (requiring explicit memory loads), the instructions would be: load a, reg1; load b, reg2; add reg1, reg2; store reg2, c;
• 3-operand CISC -- some CISC machines fall into this category. The example above might be performed in a single instruction in a machine with memory operands: add a, b, c, or more typically (most machines permit a maximum of two memory operations even in three-operand instructions): move a, reg1; add reg1, b, c;
• 3-operand RISC -- most RISC machines fall into this category, because it allows "better reuse of data"[2]. In a typical three-operand RISC machine, all three operands must be registers, so explicit load/store instructions are needed. An instruction set with 32 registers requires 15 bits to encode three register operands, so this scheme is typically limited to instruction sets with 32-bit instructions or longer. Example: load a, reg1; load b, reg2; add reg1, reg2, reg3; store reg3, c;
• more operands -- some CISC machines permit a variety of addressing modes that allow more than 3 operands (registers or memory accesses), such as the VAX "POLY" polynomial evaluation instruction.
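As a rough sketch of the 0-operand (stack machine) model described at the top of this list, the toy evaluator below computes a sum using only operations on the top of a stack. The opcodes (PUSH, ADD, PRINT, HALT) are hypothetical and exist only to illustrate the encoding style.

    /* A toy 0-operand (stack machine) evaluator: every operation works
       on the top of the stack (hypothetical opcodes, illustrative only). */
    #include <stdio.h>

    enum { PUSH, ADD, PRINT, HALT };

    void run(const int *code) {
        int stack[16];
        int sp = 0;                          /* stack pointer */
        for (int pc = 0; ; ) {
            switch (code[pc++]) {
            case PUSH:  stack[sp++] = code[pc++];          break;
            case ADD:   sp--; stack[sp - 1] += stack[sp];  break;
            case PRINT: printf("%d\n", stack[sp - 1]);     break;
            case HALT:  return;
            }
        }
    }

    int main(void) {
        /* the equivalent of: push a; push b; add; print   (with a = 2, b = 3) */
        int code[] = { PUSH, 2, PUSH, 3, ADD, PRINT, HALT };
        run(code);                           /* prints 5 */
        return 0;
    }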

An instruction set is a list of all the instructions, and all their variations, that a processor can execute.
Instructions include:
• Arithmetic such as add and subtract
• Logic instructions such as and, or, and not
• Data instructions such as move, input, output, load, and store
• Control flow instructions such as goto, if ... goto, call, and return.
An instruction set, or instruction set architecture (ISA), is the part of the computer architecture related to programming, including the native data types, instructions, registers, addressing modes, memory architecture, interrupt and exception handling, and external I/O. An ISA includes a specification of the set of opcodes (machine language), the native commands implemented by a particular CPU design.
Instruction set architecture is distinguished from the microarchitecture, which is the set of processor design techniques used to implement the instruction set. Computers with different microarchitectures can share a common instruction set. For example, the Intel Pentium and the AMD Athlon implement nearly identical versions of the x86 instruction set, but have radically different internal designs.
This concept can be extended to unique ISAs like TIMI (Technology-Independent Machine Interface) present in the IBM System/38 and IBM AS/400. TIMI is an ISA that is implemented as low-level software and functionally resembles what is now referred to as a virtual machine. It was designed to increase the longevity of the platform and applications written for it, allowing the entire platform to be moved to very different hardware without having to modify any software except that which comprises TIMI itself. This allowed IBM to move the AS/400 platform from an older CISC architecture to the newer POWER architecture without having to recompile any parts of the OS or software associated with it. Nowadays there are several open source operating systems which can be easily ported to any existing general-purpose CPU, because compilation from source is an essential part of their design (for example, when new software is installed).

Monday, January 12, 2009


HOW IMPORTANT IS COMPUTER TECHNOLOGY IN EDUCATION?
The incorporation of information technology in the education sector is important to meet the challenges presented by new trends, especially the global communication of knowledge. It is essential that students become familiar with the concept and use of information technology in order to equip them for the future job market. Similarly, faculty can achieve better quality in their teaching methodology.

Information technology opens up prospects for a form of learning that can be customized to students. Using IT tools such as multimedia, e-mail, presentation handouts, commercial courseware, CD-ROM materials, computer simulations, computer labs and classrooms, and web-based resources, teaching can be organized so that pupils can themselves control the learning process. Educational courses, based on the learners' skills, can be designed in new and more effective ways.

A pool of information is globally available that enables teaching with real-world situations. For example, a communication professor who teaches advertising requires students to locate various advertising agencies on the internet; the resulting presentations are then critiqued according to the various principles taught in the course.

Collaborative activities among students can be facilitated using networked computer labs. Online discussion forums allow students to discuss topics specified by the instructor. Team projects can be completed even when team members are located in different geographical locations. This education can be made available outside of working hours, on weekends, and at remote locations.

Computer simulations are extremely useful, especially in scientific studies. Students can explore various facets of human anatomy by simulated dissection, learning how structures relate to one another. Visual tools enable students to better understand concepts.

Millions of people use home computers for education and information. Many educational software programs are used by children and adults at home. Edutainment programs, specifically geared towards the home market, combine education with entertainment so they can compete with television and electronic games. Encyclopedias, dictionaries, atlases, almanacs, telephone directories, medical references, and other specialized references now come in low-cost CD-ROM versions, often with multimedia capability. More up-to-the-minute information is available from the internet and other online services. Of course, internet connections also provide e-mail, discussion groups, and other communication options for home users.

Most computer games offer great graphic simulations. The same technology that mesmerizes us can also unlock our creativity. There are many examples: word processors help many of us become writers, graphics software brings out the artists among us, and desktop publishing systems put the power of the press in more hands.

COMPUTER: INTRODUCTION



A computer is a machine which manipulates data according to a list of instructions, which makes it an ideal example of a data processing system.

Computers take numerous physical forms. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers. Modern computers are based on comparatively tiny integrated circuits and are millions to billions of times more capable while occupying a fraction of the space. Today, simple computers may be made small enough to fit into a wrist watch and be powered from a watch battery. Personal computers in various forms are icons of the information technology age and are what most people think of as “a computer”. However, the most common form of computer in use today is by far the embedded computer. Embedded computers are small, simple devices that are often used to control other devices; for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and even children’s toys.

The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church-Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.

INFORMATION AND TECHNOLOGY




By the mid-20th century, humans had achieved a mastery of technology sufficient to leave the surface of the earth for the first time and explore space.

Technology is a broad concept that deals with a species' usage and knowledge of tools and crafts, and how it affects a species' ability to control and adapt to its environment. In human society, it is a consequence of science and engineering, although several technological advances predate the two concepts. The term has its origins in the Greek “technologia”, from “techne” (“craft”) and “logia” (“saying”). However, a strict definition is elusive; “technology” can refer to material objects of use to humanity, such as machines, hardware or utensils, but can also encompass broader themes, including systems, methods of organization, and techniques.

The term can either be applied generally or to specific areas: examples include “construction technology”, “medical technology”, or “state-of-the-art technology”.
People’s use of technology began with the conversion of natural resources into simple tools. The prehistorical discovery of the ability to control fire increased the available sources of food, and the invention of the wheel helped humans in traveling in and controlling their environment. Recent technological developments, including the printing press, the telephone, and the internet, have lessened physical barriers to communication and allowed humans to interact on a global scale. However, not all technology has been used for peaceful purposes; the development of weapons of ever-increasing destructive power has progressed throughout history, from clubs to nuclear weapons.

Technology has affected society and its surroundings in a number of ways. In many societies, technology has helped develop more advanced economies (including today’s global economy) and has allowed the rise of a leisure class. Many technological processes produce unwanted by-products, known as pollution, and deplete natural resources, to the detriment of the earth and its environment.

Various implementations of technology influence the values of a society, and new technology often raises new ethical questions. Examples include the rise of the notion of efficiency in terms of human productivity, a term originally applied only to machines, and the challenging of traditional norms.

Philosophical debates have arisen over the present and future use of technology in society, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar movements criticize the pervasiveness of technology in the modern world, claiming that it harms the environment and alienates people; proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition. Indeed, until recently it was believed that the development of technology was restricted to human beings, but recent scientific studies indicate that other primates and certain dolphin communities have developed simple tools and learned to pass their knowledge on to other generations.