[6] Egyptian script is dissimilar from Mesopotamian cuneiform, but similarities in concepts and in earliest attestation suggest that the idea of writing may have come to Egypt from Mesopotamia.[7] In 1999, Archaeology magazine reported that the earliest Egyptian glyphs date back to 3400 BC, which "challenge the commonly held belief that early logographs, pictographic symbols representing a specific place, object, or quantity, first evolved into more complex phonetic symbols in Mesopotamia."[8] A similar debate surrounds the Indus script: it is still undeciphered, and it is disputed whether it is true writing at all or, instead, some kind of proto-writing or nonlinguistic sign system. An additional possibility is the undeciphered Rongorongo script of Easter Island. It is debated whether this is true writing and, if it is, whether it is another case of cultural diffusion of writing. The oldest example dates only from 1851, 139 years after the islanders' first contact with Europeans. One explanation is that the script was inspired by Spain's written annexation proclamation in 1770.[9] Various other known cases of cultural diffusion of writing exist, where the general concept of writing was transmitted from one culture to another, but the specifics of the system were independently developed.
(See History of writing ancient numbers for how the writing of numbers began.) It is generally agreed that true writing of language (not only numbers) was independently conceived and developed in at least two ancient civilizations, and possibly more. The two places where it is most certain that the concept of writing was both conceived and developed independently are ancient Sumer (in Mesopotamia), around 3100 BC, and Mesoamerica, by 300 BC,[3] because no precursors have been found to either of these in their respective regions. Several Mesoamerican scripts are known, the oldest being from the Olmec or Zapotec of Mexico. Independent writing systems also arose in Egypt around 3100 BC and in China around 1200 BC, in the Shang dynasty,[4] but historians debate whether these writing systems were developed completely independently of Sumerian writing or whether either or both were inspired by Sumerian writing via a process of cultural diffusion. That is, it is possible that the concept of representing language in writing, though not necessarily the specifics of how such a system worked, was passed on by traders or merchants traveling between the two regions. Ancient Chinese characters are considered by many to be an independent invention, because there is no evidence of contact between ancient China and the literate civilizations of the Near East,[5] and because of the distinct differences between the Mesopotamian and Chinese approaches to logography.
In the history of how writing systems have evolved in different human civilizations, more complete writing systems were preceded by proto-writing: systems of ideographic or early mnemonic symbols. True writing, in which the content of a linguistic utterance is encoded so that another reader can reconstruct, with a fair degree of accuracy, the exact utterance written down, is a later development. It is distinguished from proto-writing, which typically avoids encoding grammatical words and affixes, making it more difficult or impossible to reconstruct the exact meaning intended by the writer unless a great deal of context is already known in advance. One of the earliest forms of written expression is cuneiform.[2] (See also: List of languages by first written accounts.) Sumer, an ancient civilization of southern Mesopotamia, is believed to be the place where written language was first invented, around 3100 BC. Writing numbers for the purpose of record keeping began long before the writing of language.
Table Headings and Explanation. Supplier or System - the supplier's full name; earlier or later names may be shown, and hardware options may be included with the system name. CPU and Precision - this includes the CPU chip type and an indication of the precision as shown in the original CCTA results, given as Base:Precision. For example, 2:23 indicates 23 binary digits, and 16:6 six hexadecimal digits. These were important when considering accuracy, where hexadecimal single precision was not that good. Precision numbers are as follows (single and double). The history of writing traces the development of expressing language by letters or other marks[1] and also the studies and descriptions of these developments.
Although this produces code outside the definition of Whetstone instructions, which include a specific proportion of procedure calls, it is a valid technique to obtain the best performance out of modern systems and may well be the compiler's default optimisation level. As reflected in the PC results, a good compiler can halve the execution time by in-lining, careful choice of instructions and sequence, and omission of intermediate stores and loads. With in-lining and global optimisation, a small number of compilers identified that the dominant loop did not have to be executed, immediately leading to an apparent more-than-doubling of MWIPS speeds. This was identified by the 1980 enhancements and fixed in 1987, essentially by changing the name of one variable. Unlike some other standard benchmarks, Whetstone results were generally verified as part of the CCTA system appraisal, in project-related benchmarking sessions or during acceptance trials. It was also standard practice to run the tests with different levels of optimisation, and obviously over-optimised results were not published. Besides the global optimisation problem, two other areas of complication have been observed.
The first is where loop variables can be too large for index registers, yet the program still runs with a truncated count. This is catered for by having a double loop to control the running time. The second complication becomes apparent as systems become faster, when underflow can slow down the execution rate of the first two loops. This can be fixed by changing the values of variables t and t1 to be closer to 0.5. The latest Intel compiler appears to over-optimise the loop with integer arithmetic. Here, a series of variables is calculated which produce array indices of constant values and therefore only need to be calculated once. It would seem that the only way the problem would arise is if the compiler carried out the indexing calculations, maybe to determine that array accesses are not going to be out of bounds.
The benchmark was still being run by DEC in 1996, with results for Alpha-based systems available. The Intel microprocessors were designed at the height of popularity of the Whetstone benchmark. Examining the instruction set of the maths coprocessor, with instructions for sin, cos, atan, sqrt and log, possibly indicates a complete hardware implementation (the one and only?) to match the benchmark. The design also includes 80 bit registers, which ensure fast double precision operation. Although rightly not used as one of the main performance measurement tools, the Whetstone benchmark was still run by Intel in 1996, with results for 486 systems, DX4 and Pentium Overdrive processors being available. The benchmark also formed a small part (2%) of the Intel iCOMP benchmark.
As can be seen in the PC results, the Intel P4 processor obtains poor results relative to CPU MHz. This might be due to the length of the P4's execution pipelines and the relatively few instructions in the benchmark's timing loops. Compiler Optimisation. The benchmark is very simple, comprising some 150 statements with eight active loops, three of which execute via procedure calls. Three loops carry out floating point calculations, two execute functions, one assignments, one fixed point arithmetic and one branching statements. The dominant loop, usually accounting for 30% to 50% of the time, carries out floating point calculations via procedure calls. The tests only reference a small amount of data, which will fit in the L1 cache of any CPU. Hence, L2 cache and memory speed should have no influence on performance ratings. Speeds are invariably proportional to CPU MHz on a given type of processor. The code was designed to be non-optimisable, and optimising compilers did not have a significant impact until the introduction of in-lining of subroutine instructions.
Throwing The Stone. The benchmark results were published within CCTA as "Commercial in Confidence" and supplied to customers when required for a particular procurement. By 1979, results were available for about 200 systems from 30 suppliers. Although the main emphasis was on comparing speeds via Fortran, limited results were also available via Algol, PL/I, APL, Pascal, Basic, Simula and Coral, besides from varying optimising options. Along with results in single and double precision (and extended precision where appropriate), more than 500 measurements were available. By this time, the Whetstone benchmark speed rating had become the default definition of minicomputer MIPS (Millions of Instructions Per Second), its significance being exaggerated when a minicomputer supplier somehow acquired the table of Whetstone benchmark results and published some of them in the computer press. Whetstone performance ratings are known to have been a serious consideration in the design of the Digital VAX systems and other minicomputers of the same vintage, where some suppliers were reluctant to publish double precision results which did not match VAX speeds. DEC benchmarking publications show that Whetstone results were given serious consideration until 1986.
In 1980, I added facilities to time each of the eight loops, to produce speed ratings in Millions of Integer Instructions and Floating point Operations Per Second (MIPS and MFLOPS). MIPS represent a relative measurement, where the DEC VAX 11/780 rates at 1. This was to identify the tricks that some compilers were getting up to, and to provide more meaningful measures for supercomputers. The last alterations to the benchmark were in 1987, in conjunction with Bangor University, who made slight changes intended to avoid over-optimisation whilst still executing identical functions. The benchmarks were also converted to Fortran 77 standards. At a later stage, I produced compatible versions using the Fortran, Basic, C and Java programming languages for use on PCs (see PC results). These included further changes to repeat the tests via outer loops, to prevent speed calculation inaccuracy due to timer resolution. 2005 - the Whetstone benchmark has been compiled to run as a 64 bit program via Windows XP Pro x64 and modified to demonstrate performance of dual core CPUs. Also available are 32 bit versions that use SSE floating point instructions via the latest Microsoft compiler.
During 1978, I produced a fully vectorisable version, FOVP12 (using arrays instead of simple variables). This provides MWIPS ratings at different vector lengths (array dimensions). At the time, results of the Livermore Kernels benchmark were available for top of the range scientific systems, but it was considered useful to be able to make rough performance comparisons with less glamorous systems. Results are given later in the tables, at vector length 256. Also during 1978, the standard versions were modified to calculate MWIPS using CPU timers. Vector version reference - R. Longbottom, "Performance of Multi-user Supercomputing Facilities", 4th International Conference on Supercomputing, April 1989. It appears that my vectorisable version was used long after I departed from the supercomputer scene. Later Results From Here (mainly for workstations).
Harold produced the COPRxx suite for Cobol and a scientific program based on Brian Wichmann's work. The first Whetstone benchmark, known as HJC11 (later ALPR12), was written in Algol 60 and completed in November 1972. The Fortran codes (HJC12 and HJC12D) were published in April 1973 as FOPR12 and FOPR13. The first results published were for IBM and ICL mainframes in 1973. The speed rating was calculated in terms of Kilo Whetstone Instructions Per Second, or KWIPS. Later, millions, or MWIPS, was used. Rolling The Stone. During the 1970s, I was head of the CCTA Scientific Systems Branch, with responsibilities for evaluating new systems, advising on procurements and supervising acceptance trials at both government departments and universities.
In The Beginning. Before the introduction of high level languages, general computer performance comparisons were usually based on instruction execution times. These were combined to produce an overall rating using a mix of instructions, the most well known being the Gibson Mix for scientific applications, devised by J. Gibson of IBM. In 1957, the UK government formed the Technical Support Unit (TSU) to evaluate and advise on computers, employing engineers from the telecommunications service. This unit eventually became part of the central procurement body, later known as the Central Computer and Telecommunications Agency (CCTA). TSU engineers produced numerous calculations between 19, using an ADP Mix, the Gibson Mix and a Process Control Mix. Whetting The Stone. During the late 1960s, the UK National Physical Laboratory had an English Electric (ICL) KDF9 scientific computer with one of the first implementations of Algol 60, the Whetstone translator-interpreter. Brian Wichmann modified the interpreter to record statistics on the intermediate Whetstone instructions and produced a suite of simple statements which could be used to evaluate the efficiency of compilers and the overall performance of a processor (see ICL KDF9 benchmark results in the table). In 1971, Roy Wickens, one of the founding members of TSU, abandoned producing a portable benchmark using real programs, as it was becoming too expensive. He asked Harold Curnow to produce modular synthetic benchmark suites.