Monday, December 29, 2008

Ted Nelson

Theodor Holm Nelson (born 1937) is an American sociologist, philosopher, and pioneer of information technology. He coined the term “hypertext” in 1963 and published it in 1965. He is also credited with the first use of the words hypermedia, transclusion, virtuality, intertwingularity and teledildonics. The main thrust of his work has been to make computers easily accessible to ordinary people. His motto is:

A user interface should be so simple that a beginner in an emergency can understand it within ten seconds.

Ted Nelson promotes four maxims: “most people are fools, most authority is malignant, God does not exist, and everything is wrong”. (See chapter II, third paragraph, third and fourth sentences of “The Curse of Xanadu”[1].)

Nelson co-founded the Itty bitty machine company, or “ibm”, a small computer retail store that operated from 1977 to 1980 in Evanston, Illinois. It was one of the few retail stores to sell the original Apple I computer. In 1978 Nelson had a significant impact on IBM’s thinking when he outlined his vision of the potential of personal computing to the team that, three years later, launched the IBM PC[5].

Ted Nelson is currently working on a new information structure, ZigZag[6], which is described on the Xanadu project website; the site also hosts two versions of the Xanadu code. He is also developing XanaduSpace[7], a system for the exploration of connected parallel documents (an early version of this software may be freely downloaded from [3]). He is a visiting fellow at Oxford University, based at the Oxford Internet Institute, where he works in the fields of information, computers, and human-machine interfaces.

He is the son of the late Emmy Award-winning director Ralph Nelson and the Academy Award-winning actress Celeste Holm.

Tim Berners-Lee

Sir Timothy John Berners-Lee OM KBE FRS FREng FRSA (born 8 June 1955) is an English computer scientist credited with inventing the World Wide Web. On 25 December 1990 he implemented the first successful communication between an HTTP client and server via the Internet, with the help of Robert Cailliau and a young student at CERN. He was ranked joint first alongside Albert Hofmann in The Telegraph’s list of 100 greatest living geniuses.[2] Berners-Lee is the director of the World Wide Web Consortium (W3C), which oversees the Web’s continued development, the founder of the World Wide Web Foundation, and a senior researcher and holder of the 3Com Founders Chair at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).[3]

Timeline

1976: Graduated in physics from The Queen’s College, Oxford University, UK. Became a principal engineer with Plessey Telecommunications in Poole.

1980: Built his first hypertext system, “Enquire”.

1981–1984: Director of Image Computer Systems.

1989: At CERN in Geneva, Switzerland, wrote his “WWW proposal”.

1990: Invented the World Wide Web server and client software for NeXTSTEP.

1995: Received a “Kilby Young Innovator” award from the Kilby Awards Foundation and was a co-recipient of the ACM Software Systems Award.

July 1996: Awarded a Distinguished Fellowship of the British Computer Society.

Currently: Director of the W3C and a Principal Research Scientist at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT LCS); also a director of the Web Science Research Initiative (WSRI)[5] and a member of the advisory board of the MIT Center for Collective Intelligence.[1][6]

Berners-Lee believes the Semantic Web holds immense potential for how machines will collaborate in the years to come. In an interview with an Indian publication, he shared his views:

“It is evolving at the moment. The data Web is in small stages, but it is a reality, for instance there is a Web of data about all kinds of things, like there is a Web of data about proteins, it is in very early stages. When it comes to publicly accessible data, there is an explosion of data Web in the life sciences community. When you look about data for proteins and genes, and cell biology and biological pathways, lots of companies are very excited. We have a healthcare and life sciences interest group at the Consortium, which is coordinating lot of interest out there.”

He has also become one of the pioneering voices in favour of Net Neutrality.[8]

He feels that ISPs should not intercept customers’ browsing activities, the way companies like Phorm do. He has such strong views about this that he would change ISPs to get away from such activities.[9][10]

Inventing the World Wide Web

(Image caption: This NeXT Computer was used by Berners-Lee at CERN and became the world’s first Web server.)

While an independent contractor at CERN from June to December 1980, Berners-Lee proposed a project based on the concept of hypertext, to facilitate sharing and updating information among researchers.[18] While there, he built a prototype system named ENQUIRE. After leaving CERN in 1980, he went to work at John Poole’s Image Computer Systems Ltd in Bournemouth, but returned to CERN in 1984 as a fellow. In 1989, CERN was the largest Internet node in Europe, and Berners-Lee saw an opportunity to join hypertext with the Internet: “I just had to take the hypertext idea and connect it to the Transmission Control Protocol and domain name system ideas and — ta-da! — the World Wide Web.”[19] He wrote his initial proposal in March 1989, and in 1990, with the help of Robert Cailliau, produced a revision which was accepted by his manager, Mike Sendall. He used ideas similar to those underlying the Enquire system to create the World Wide Web, for which he designed and built the first web browser and editor (WorldWideWeb, running on the NeXTSTEP operating system) and the first Web server, CERN HTTPd (short for HyperText Transfer Protocol daemon).
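The protocol at the heart of this client-server exchange was deliberately simple. Below is a minimal sketch, in Python rather than the Objective-C of the original WorldWideWeb browser, of the kind of request/response conversation an early HTTP client held with a server such as CERN HTTPd. The host name is a placeholder and network access is assumed:

```python
# A minimal sketch of an early-Web-style HTTP exchange: open a TCP
# connection, send a plain-text GET request, and read the raw response.
# "example.com" is a placeholder host; any reachable HTTP server works.
import socket

HOST, PORT = "example.com", 80

with socket.create_connection((HOST, PORT)) as sock:
    # HTTP/1.0 keeps the exchange simple: one request, one response,
    # then the server closes the connection.
    request = f"GET / HTTP/1.0\r\nHost: {HOST}\r\n\r\n"
    sock.sendall(request.encode("ascii"))

    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:  # server closed the connection: response complete
            break
        chunks.append(data)

response = b"".join(chunks).decode("latin-1")
print(response.split("\r\n\r\n", 1)[0])  # print just the response headers
```

The same request typed by hand into a telnet session would work equally well, which is much of why the Web spread so quickly: the protocol was human-readable and trivial to implement.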

The first Web site built was at CERN[20][21][22][23] and was first put online on 6 August 1991. It provided an explanation of what the World Wide Web was, how one could use a browser, and how to set up a Web server. It was also the world’s first Web directory, since Berners-Lee maintained a list of other Web sites apart from his own.

In 1994, Berners-Lee founded the World Wide Web Consortium (W3C) at the Massachusetts Institute of Technology. It comprised various companies that were willing to create standards and recommendations to improve the quality of the Web. Berners-Lee made his idea available freely, with no patent and no royalties due. The World Wide Web Consortium decided that their standards must be based on royalty-free technology, so they can be easily adopted by anyone.[24]

Creation of the Internet

A 1946 comic science-fiction story, A Logic Named Joe, by Murray Leinster laid out the Internet and many of its strengths and weaknesses. However, it took more than a decade before reality began to catch up with this vision.

The USSR’s launch of Sputnik spurred the United States to create the Advanced Research Projects Agency, known as ARPA, in February 1958 to regain a technological lead.[2][3] ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO, and saw universal networking as a potential unifying human revolution.

Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.

At the IPTO, Licklider recruited Lawrence Roberts to head a project to implement a network, and Roberts based the technology on the work of Paul Baran, who had written an exhaustive study for the U.S. Air Force recommending packet switching (as opposed to circuit switching) to make a network highly robust and survivable. After much work, the first two nodes of what would become the ARPANET were interconnected between UCLA and SRI International in Menlo Park, California, on October 29, 1969. The ARPANET was one of the “eve” networks of today’s Internet.

Following the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service, referred to in the UK as the International Packet Switched Service (IPSS), in 1978. The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet switching standard was developed in the CCITT (now called ITU-T) around 1976. X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net and Packet Satellite Net during the same period.

Vinton Cerf and Robert Kahn developed the first description of the TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term “Internet” to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP, written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems.
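Baran’s packet-switching insight, the one carried through into TCP, is that a message can be cut into independently routed packets that may arrive out of order and are reassembled at the destination. Here is a toy illustration in Python; a shuffle stands in for packets taking different routes, and the framing is invented for the example, not any real protocol’s:

```python
# Toy illustration of packet switching: split a message into numbered
# packets, deliver them out of order (as independent routing can), and
# reassemble them by sequence number at the receiver.
import random

def packetize(message: str, size: int = 8) -> list[tuple[int, str]]:
    """Cut a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets: list[tuple[int, str]]) -> str:
    """Sort by sequence number and rejoin the payloads."""
    return "".join(payload for _, payload in sorted(packets))

message = "Packets may take different routes and still arrive intact."
packets = packetize(message)
random.shuffle(packets)  # simulate out-of-order arrival
assert reassemble(packets) == message
print(f"{len(packets)} packets delivered out of order; message reassembled.")
```

The robustness Baran was after falls out of this design: since no single circuit carries the whole conversation, the loss of any one link only forces packets onto other routes.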

The first TCP/IP-based wide-area network was operational by January 1, 1983, when all hosts on the ARPANET were switched over from the older NCP protocols. In 1985, the United States’ National Science Foundation (NSF) commissioned the construction of the NSFNET, a university 56 kilobit/second network backbone using computers called “fuzzballs” by their inventor, David L. Mills. The following year, NSF sponsored the conversion to a higher-speed 1.5 megabit/second network. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF.

The opening of the network to commercial interests began in 1988. The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year, and the link was made in the summer of 1989. Other commercial electronic mail services were soon connected, including OnTyme, Telemail and CompuServe. In that same year, three commercial Internet service providers (ISPs) were created: UUNET, PSINET and CERFNET. Important separate networks that offered gateways into, and later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet, Tymnet, CompuServe and JANET, were interconnected with the growing Internet.

Telenet (later called Sprintnet) was a large privately funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network was eventually interconnected with the others in the 1980s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over virtually any pre-existing communication network allowed for great ease of growth, although the rapid growth of the Internet was due primarily to the availability of commercial routers from companies such as Cisco Systems, Proteon and Juniper, the availability of commercial Ethernet equipment for local-area networking, and the widespread implementation of TCP/IP on the UNIX operating system.

Growth

Although the basic applications and guidelines that make the Internet possible had existed for almost a decade, the network did not gain a public face until the 1990s. On August 6, 1991, CERN, which straddles the border between France and Switzerland, publicized the new World Wide Web project. The Web was invented by English scientist Tim Berners-Lee in 1989.

An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace, and consequently, so had its use as a synecdoche in reference to the World Wide Web.

Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the 1990s, it was estimated that the Internet grew by 100% per year, with a brief period of explosive growth in 1996 and 1997.[4] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as to the non-proprietary, open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.

Internet structure

There have been many analyses of the Internet and its structure. For example, it has been determined that the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks.
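The signature of a scale-free network is a heavy-tailed degree distribution: a few hub nodes with very many links, and many nodes with few. The sketch below grows such a network by simple preferential attachment, in the spirit of the Barabási–Albert model; it is standard-library Python fitted to no actual Internet data, purely an illustration of the mechanism:

```python
# Grow a network by preferential attachment (new nodes prefer to link to
# already well-connected nodes), the classic mechanism behind scale-free
# structure, then show the resulting heavy-tailed degree distribution.
import random
from collections import Counter

def preferential_attachment(n_nodes: int, seed: int = 42) -> Counter:
    """Return a Counter mapping node -> degree for a grown network."""
    random.seed(seed)
    # Start from a single edge; 'stubs' lists every edge endpoint, so a
    # uniform sample from it picks nodes in proportion to their degree.
    stubs = [0, 1]
    for new_node in range(2, n_nodes):
        target = random.choice(stubs)  # degree-proportional choice
        stubs += [new_node, target]
    return Counter(stubs)

degrees = preferential_attachment(10_000)
histogram = Counter(degrees.values())
for degree in sorted(histogram)[:10]:
    print(f"degree {degree:>3}: {histogram[degree]:>5} nodes")
print("max degree:", max(degrees.values()))  # hubs emerge far above average
```

Run it and the counts fall off roughly as a power law: most nodes keep the one link they arrived with, while a handful of early nodes accumulate hundreds, which is the qualitative pattern measured in both IP routing tables and Web link graphs.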

Similar to the way the commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as the following:

GEANT

GLORIAD

The Internet2 Network (formerly known as the Abilene Network)

JANET (the UK’s national research and education network)

These in turn are built around relatively smaller networks. See also the list of academic computer network organizations.

In computer network diagrams, the Internet is often represented by a cloud symbol, into and out of which network communications can pass.

Wi-Fi: History

Wi-Fi was invented in 1991 by NCR Corporation/AT&T (later Lucent and Agere Systems) in Nieuwegein, the Netherlands. Initially intended for cashier systems, the first wireless products were brought to market under the name WaveLAN, with speeds of 1 Mbit/s to 2 Mbit/s. Vic Hayes, who has been called the “father of Wi-Fi”, was involved with his team in designing standards such as IEEE 802.11b, 802.11a and 802.11g. In 2003, Hayes retired from Agere Systems. Agere Systems suffered from strong competition in the market even though its products were cutting edge, as many customers opted for cheaper Wi-Fi solutions. Agere’s 802.11abg all-in-one chipset (code-named WARP) never hit the market, and Agere Systems decided to quit the Wi-Fi market in late 2004.

Wi-Fi: How it works

The typical Wi-Fi setup contains one or more access points (APs) and one or more clients. An AP broadcasts its SSID (Service Set Identifier, or network name) via packets called beacons, which are broadcast every 100 ms. Beacons are transmitted at 1 Mbit/s and are relatively short, so they have little effect on performance. Since 1 Mbit/s is the lowest Wi-Fi rate, a client that receives a beacon is assured it can communicate at at least 1 Mbit/s.

Based on its settings (i.e. the SSID), the client decides whether to connect to an AP. The firmware running on the client’s Wi-Fi card also has an influence: if two APs with the same SSID are in range, the firmware may decide, based on signal strength (signal-to-noise ratio), which of the two it will connect to. The Wi-Fi standard leaves connection criteria and roaming entirely open to the client. This is a strength of Wi-Fi, but it also means that one wireless adapter may perform substantially better than another. Since Windows XP there has been a feature called Zero Configuration, which shows the user the available networks and lets the user connect on the fly. In the future, wireless cards will be more and more controlled by the operating system; with Microsoft’s SoftMAC feature, which takes over from on-board firmware, roaming criteria become controlled entirely by the operating system.

Because Wi-Fi transmits over the air, it has the same properties as a non-switched Ethernet network; collisions can therefore occur, just as in non-switched Ethernet LANs.
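As a sketch of the AP-selection logic described above, choosing among beacons by signal strength might look like the following. The Beacon record and the RSSI values are hypothetical, made up for illustration, not a real driver or firmware API:

```python
# Sketch of a client choosing an access point: among beacons advertising
# the SSID we want, pick the one with the strongest signal. The Beacon
# record and RSSI values here are illustrative, not a real driver API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Beacon:
    ssid: str       # network name advertised by the AP
    bssid: str      # the AP's MAC address
    rssi_dbm: int   # received signal strength (closer to 0 is stronger)

def choose_ap(beacons: list[Beacon], wanted_ssid: str) -> Optional[Beacon]:
    """Return the strongest-signal AP advertising wanted_ssid, if any."""
    candidates = [b for b in beacons if b.ssid == wanted_ssid]
    return max(candidates, key=lambda b: b.rssi_dbm, default=None)

scan = [
    Beacon("office", "aa:bb:cc:00:00:01", -71),
    Beacon("office", "aa:bb:cc:00:00:02", -54),  # stronger signal
    Beacon("guest",  "aa:bb:cc:00:00:03", -40),
]
print(choose_ap(scan, "office"))  # the -54 dBm "office" AP wins
```

Because the standard leaves this choice to the client, real adapters differ exactly here: some add hysteresis so they do not flap between two APs of similar strength, while others switch aggressively, which is why roaming behaviour varies so much between cards.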

Advantages of Wi-Fi

Unlike packet radio systems, Wi-Fi uses unlicensed radio spectrum and does not require regulatory approval for individual deployers.

Wi-Fi allows LANs to be deployed without cabling, potentially reducing the costs of network deployment and expansion. Spaces where cables cannot be run, such as outdoor areas and historical buildings, can host wireless LANs.

Wi-Fi products are widely available in the market. Different brands of access points and client network interfaces are interoperable at a basic level of service.

Competition amongst vendors has lowered prices considerably since their inception.

Wi-Fi networks support roaming, in which a mobile client station such as a laptop computer can move from one access point to another as the user moves around a building or area.

Many access points and network interfaces support various degrees of encryption to protect traffic from interception.

Wi-Fi is a global set of standards. Unlike cellular carriers, the same Wi-Fi client works in different countries around the world.

Multimedia

Multimedia is media and content that uses a combination of different content forms. The term can be used as a noun (a medium with multiple content forms) or as an adjective describing a medium as having multiple content forms. It is used in contrast to media that use only traditional forms of printed or hand-produced material. Multimedia includes a combination of text, audio, still images, animation, video, and interactive content forms.

Multimedia is usually recorded and played, displayed or accessed by information content processing devices, such as computerized and electronic devices, but can also be part of a live performance. Multimedia (as an adjective) also describes electronic media devices used to store and experience multimedia content. Multimedia is similar to traditional mixed media in fine art, but with a broader scope. The term “rich media” is synonymous with interactive multimedia. Hypermedia can be considered one particular multimedia application.

Multimedia may be broadly divided into linear and non-linear categories. Linear content progresses without any navigational control for the viewer, such as a cinema presentation. Non-linear content offers user interactivity to control progress, as with a computer game or self-paced computer-based training. Hypermedia is an example of non-linear content.
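One way to see the distinction: linear content is a fixed sequence, while non-linear content is a graph of choices. A toy sketch in Python, with node names invented purely for illustration, of hypermedia-style navigation:

```python
# Toy model of non-linear content: each node offers links, and the
# viewer (here, a scripted list of choices) controls the path taken.
# Node names and links are made up for illustration.
story = {
    "intro":   {"text": "Title screen.",       "links": ["history", "demo"]},
    "history": {"text": "Background chapter.", "links": ["demo", "intro"]},
    "demo":    {"text": "Interactive demo.",   "links": ["intro"]},
}

def navigate(start: str, choices: list[str]) -> None:
    """Follow the viewer's choices through the content graph."""
    node = start
    for choice in [None] + choices:
        if choice is not None:
            assert choice in story[node]["links"], f"no link {node} -> {choice}"
            node = choice
        print(f"[{node}] {story[node]['text']} links: {story[node]['links']}")

# Linear playback would visit the nodes in one fixed order; here the
# viewer jumps around at will.
navigate("intro", ["demo", "intro", "history"])
```

A cinema reel has no such graph; a computer game or hypertext document is essentially this structure at scale.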

Multimedia presentations can be live or recorded. A recorded presentation may allow interactivity via a navigation system. A live multimedia presentation may allow interactivity via an interaction with the presenter or performer.

Multimedia presentations may be viewed in person on stage, projected, transmitted, or played locally with a media player. A broadcast may be a live or recorded multimedia presentation. Broadcasts and recordings can use either analog or digital electronic media technology. Digital online multimedia may be downloaded or streamed; streaming multimedia may be live or on-demand.

Multimedia games and simulations may be used in a physical environment with special effects, with multiple users in an online network, or locally with an offline computer, game system, or simulator.

History of the term

In 1965 the term multi-media was used to describe the Exploding Plastic Inevitable, a performance that combined live rock music, cinema, experimental lighting and performance art.

In the intervening forty years the word has taken on different meanings. In the late 1970s the term was used to describe presentations consisting of multi-projector slide shows timed to an audio track. In the 1990s it took on its current meaning: in common usage, multimedia refers to an electronically delivered combination of media, including video, still images, audio and text, presented in such a way that it can be accessed interactively.[1] Much of the content on the web today falls within this definition as understood by millions.

Some computers which were marketed in the 1990s were called “multimedia” computers because they incorporated a CD-ROM drive, which allowed for the delivery of several hundred megabytes of video, picture, and audio data.

Usage

Multimedia finds application in various areas including, but not limited to, advertisements, art, education, entertainment, engineering, medicine, mathematics, business, scientific research and spatial-temporal applications.
