Tuesday, January 19, 2016

4

A wiki (/ˈwɪki/ WIK-ee) is a website which allows collaborative modification of its content and structure directly from the web browser. In a typical wiki, text is written using a simplified markup language (known as "wiki markup"), and is often edited with the help of a rich-text editor.[1]
A wiki is run using wiki software, otherwise known as a wiki engine. There are dozens of different wiki engines in use, both standalone and part of other software, such as bug tracking systems. Some wiki engines are open source, whereas others are proprietary. Some permit control over different functions (levels of access); for example, editing rights may permit changing, adding or removing material. Others may permit access without enforcing access control. Other rules may also be imposed to organize content. A wiki engine is a type of content management system, but it differs from most other such systems, including blog software, in that the content is created without any defined owner or leader, and wikis have little implicit structure, allowing structure to emerge according to the needs of the users.[2]
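To make the idea of wiki markup and a wiki engine concrete, here is a minimal, illustrative Python sketch: a toy function that converts two MediaWiki-style conventions (triple apostrophes for bold, double brackets for internal links) into HTML. It is not the parser of any real wiki engine, and the /wiki/ link path is an assumption made for the example.

# Toy illustration of what a wiki engine does with wiki markup: it converts
# a simplified notation into HTML before the page is served. This is a
# hypothetical sketch, not any real engine's parser.
import re

def render_wiki_markup(text: str) -> str:
    # '''bold''' becomes <b>bold</b>
    text = re.sub(r"'''(.+?)'''", r"<b>\1</b>", text)
    # [[Page title]] becomes a link to that page within the same wiki
    # (a real engine would also URL-encode the page title)
    text = re.sub(r"\[\[(.+?)\]\]", r'<a href="/wiki/\1">\1</a>', text)
    return text

print(render_wiki_markup("A '''wiki''' runs on [[wiki software]]."))
# prints: A <b>wiki</b> runs on <a href="/wiki/wiki software">wiki software</a>.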
The encyclopedia project Wikipedia is by far the most popular wiki-based website, and is in fact one of the most widely viewed sites of any kind in the world, having been ranked in the top ten since 2007. (Wikipedia is not a single wiki but rather a collection of hundreds of wikis, one for each language.) There are at least tens of thousands of other wikis in use, both public and private, including wikis functioning as knowledge management resources, note-taking tools, community websites and intranets.

3

Ted Nelson founded Project Xanadu in 1960, with the goal of creating a computer network with a simple user interface. The effort is documented in his 1974 book Computer Lib / Dream Machines and his 1981 Literary Machines. Much of his adult life has been devoted to working on Xanadu and advocating for it.
The Xanadu project itself failed to flourish, for a variety of reasons which are disputed. Journalist Gary Wolf published an unflattering history of Nelson and his project in the June 1995 issue of Wired magazine, calling it "the longest-running vaporware project in the history of computing".[6] On his own website, Nelson expressed his disgust with the criticisms, referring to Wolf as "Gory Jackal", and threatened to sue him.[7] He also outlined his objections in a letter to Wired,[8] and released a detailed rebuttal of the article.[9]
Nelson has stated that some aspects of his vision are being fulfilled by Tim Berners-Lee's invention of the World Wide Web, but he dislikes the World Wide Web, XML and all embedded markup – regarding Berners-Lee's work as a gross over-simplification of his original vision:
HTML is precisely what we were trying to PREVENT— ever-breaking links, links going outward only, quotes you can't follow to their origins, no version management, no rights management.[10]
Jaron Lanier explains the difference between the World Wide Web and Nelson's vision, and the implications:
A core technical difference between a Nelsonian network and what we have become familiar with online is that [Nelson's] network links were two-way instead of one-way. In a network with two-way links, each node knows what other nodes are linked to it. ... Two-way linking would preserve context. It's a small simple change in how online information should be stored that couldn't have vaster implications for culture and the economy.[11]
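As an illustration of the two-way linking Lanier describes, the following hypothetical Python sketch (not Xanadu's actual design) records a backlink on the target whenever a link is created, so any document can enumerate everything that points at it.

# Hypothetical sketch of two-way links: creating a link from A to B also
# records a backlink on B, so each node knows which nodes link to it.
class Node:
    def __init__(self, name):
        self.name = name
        self.links_out = []   # nodes this node links to
        self.links_in = []    # nodes that link to this node

def link(source, target):
    source.links_out.append(target)
    target.links_in.append(source)   # the extra, "two-way" half of the link

essay = Node("essay")
quotation = Node("quotation source")
link(essay, quotation)

# The quoted document can enumerate everything that cites it:
print([n.name for n in quotation.links_in])   # ['essay']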


2

HTTP functions as a request-response protocol in the client-server computing model. A web browser, for example, may be the client and an application running on a computer hosting a web site may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content, or performs other functions on behalf of the client, returns a response message to the client. The response contains completion status information about the request and may also contain requested content in its message body.
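The exchange can be observed directly at the socket level. The sketch below, using Python's standard socket module and example.com purely as a stand-in server, sends one HTTP request message over TCP and prints the status line and headers of the server's response.

# Client-side view of the HTTP request-response exchange over TCP,
# using Python's standard socket module. example.com is a stand-in host.
import socket

request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The response begins with a status line (completion status), then headers,
# a blank line, and the requested content in the message body.
print(response.split(b"\r\n\r\n", 1)[0].decode("ascii"))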
A web browser is an example of a user agent (UA). Other types of user agent include the indexing software used by search providers (web crawlers), voice browsers, mobile apps, and other software that accesses, consumes, or displays web content.
HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers to improve response time. Web browsers cache previously accessed web resources and reuse them when possible to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers.
HTTP is an application layer protocol designed within the framework of the Internet Protocol Suite. Its definition presumes an underlying and reliable transport layer protocol,[2] and Transmission Control Protocol (TCP) is commonly used. However, HTTP can use unreliable protocols such as the User Datagram Protocol (UDP), for example in Simple Service Discovery Protocol (SSDP).
HTTP resources are identified and located on the network by uniform resource locators (URLs), using the uniform resource identifier (URI) schemes http and https. URIs and hyperlinks in Hypertext Markup Language (HTML) documents form inter-linked hypertext documents.
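For example, Python's standard urllib.parse shows how a URL using the https scheme decomposes into its URI components; the URL below is purely illustrative.

# Breaking an illustrative HTTP URL into its URI components.
from urllib.parse import urlparse

parts = urlparse("https://example.com/wiki/Main_Page?action=view#History")

print(parts.scheme)    # 'https'            (the URI scheme)
print(parts.netloc)    # 'example.com'      (host serving the resource)
print(parts.path)      # '/wiki/Main_Page'  (location of the resource)
print(parts.query)     # 'action=view'
print(parts.fragment)  # 'History'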
HTTP/1.1 is a revision of the original HTTP (HTTP/1.0). In HTTP/1.0 a separate connection to the same server is made for every resource request. HTTP/1.1 can reuse a connection multiple times to download images, scripts, stylesheets, etc. after the page has been delivered. HTTP/1.1 communications therefore experience less latency, as the establishment of TCP connections presents considerable overhead.
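The difference can be illustrated with Python's http.client, which speaks HTTP/1.1 and can reuse a single TCP connection for several requests; under HTTP/1.0 each request would have opened its own connection. The host and paths below are placeholders for the example.

# Reusing one TCP connection for several HTTP/1.1 requests.
# example.com and the paths stand in for a real site.
from http.client import HTTPConnection

conn = HTTPConnection("example.com")
for path in ("/", "/index.html"):
    conn.request("GET", path)
    response = conn.getresponse()
    body = response.read()   # drain the body before reusing the connection
    print(path, response.status, len(body), "bytes")
conn.close()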

1


[Image: NASA.gov homepage as it appeared in April 2015]
A website, also written as web site,[1] or simply site,[2] is a set of related web pages typically served from a single web domain. A website is hosted on at least one web server, accessible via a network such as the Internet or a private local area network through an Internet address known as a uniform resource locator (URL). All publicly accessible websites collectively constitute the World Wide Web.
Web pages, which are the building blocks of websites, are documents, typically written in plain text interspersed with formatting instructions of Hypertext Markup Language (HTML, XHTML). They may incorporate elements from other websites with suitable markup anchors. Web pages are accessed and transported with the Hypertext Transfer Protocol (HTTP), which may optionally employ encryption (HTTP Secure, HTTPS) to provide security and privacy for the user of the web page content. The user's application, often a web browser, renders the page content according to its HTML markup instructions onto a display terminal.
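As a minimal sketch of the server side of this arrangement, the following Python script uses the standard http.server module to serve one HTML document over HTTP; a browser pointed at http://localhost:8000/ would fetch and render it. The page content and port are arbitrary choices made for the example.

# Minimal sketch: one HTML web page served over HTTP by a web server.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><head><title>Example</title></head><body><h1>Hello</h1></body></html>"

class PageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Return the HTML document with the headers a browser expects.
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PageHandler).serve_forever()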
The pages of a website can usually be accessed from a simple Uniform Resource Locator (URL) called the web address. The URLs of the pages organize them into a hierarchy, although the hyperlinking between them conveys the reader's perceived site structure and guides the reader's navigation of the site. A site generally includes a home page with most of the links to the site's web content, and supplementary about, contact and link pages.
Some websites require a subscription to access some or all of their content. Examples of subscription websites include many business sites, parts of news websites, academic journal websites, gaming websites, file-sharing websites, message boards, web-based email, social networking websites, websites providing real-time stock market data, and websites providing various other services (e.g., websites offering storing and/or sharing of images, files and so forth).