Monday, November 28, 2011

Cs14








Parallel processing: the ability to carry out multiple operations or tasks simultaneously. The term is used in the contexts of both human cognition, particularly the brain's ability to process incoming stimuli simultaneously, and parallel computing by machines.
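As a minimal computing-side illustration, here is a sketch in C using POSIX threads: two tasks run concurrently and, on a multi-core machine, possibly in true parallel. The task itself is just a stand-in.

```c
/* Minimal sketch of parallel processing with POSIX threads.
   Assumes a POSIX system; compile with: gcc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

static void *count_up(void *arg)
{
    const char *name = arg;
    for (int i = 0; i < 3; i++)
        printf("%s: step %d\n", name, i);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    /* Two tasks run concurrently; on a multi-core CPU they may
       execute simultaneously (true parallelism). */
    pthread_create(&a, NULL, count_up, (void *)"task A");
    pthread_create(&b, NULL, count_up, (void *)"task B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```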


UltraSPARC: a microprocessor developed by Sun Microsystems (now part of Oracle Corporation) and fabricated by Texas Instruments that implements the SPARC V9 instruction set architecture (ISA). It was introduced in mid-1995 and was the first microprocessor from Sun Microsystems to implement the SPARC V9 ISA. Marc Tremblay was a co-microarchitect.





Assembly language: a low-level programming language for computers, microprocessors, microcontrollers, and other programmable devices. It implements a symbolic representation of the machine codes and other constants needed to program a given CPU architecture. This representation is usually defined by the hardware manufacturer, and is based on mnemonics that symbolize processing steps (instructions), processor registers, memory locations, and other language features. An assembly language is thus specific to a certain physical (or virtual) computer architecture. This is in contrast to most high-level programming languages, which, ideally, are portable.

A utility program called an assembler is used to translate assembly language statements into the target computer's machine code. The assembler performs a more or less isomorphic translation (a one-to-one mapping) from mnemonic statements into machine instructions and data. This is in contrast with high-level languages, in which a single statement generally results in many machine instructions.
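As a toy illustration of that one-to-one mapping, the C sketch below looks up each mnemonic of an invented four-instruction machine and emits a corresponding made-up opcode byte. A real assembler additionally encodes operands, resolves symbols, and lays out data.

```c
/* Toy illustration of an assembler's one-to-one mapping from
   mnemonics to opcodes. The instruction set is invented. */
#include <stdio.h>
#include <string.h>

struct op { const char *mnemonic; unsigned char opcode; };

static const struct op table[] = {
    { "LOAD",  0x01 },
    { "STORE", 0x02 },
    { "ADD",   0x03 },
    { "HALT",  0xFF },
};

static int assemble(const char *mnemonic)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(mnemonic, table[i].mnemonic) == 0)
            return table[i].opcode;
    return -1;                          /* unknown mnemonic */
}

int main(void)
{
    const char *program[] = { "LOAD", "ADD", "STORE", "HALT" };
    for (size_t i = 0; i < 4; i++)
        printf("%-5s -> 0x%02X\n", program[i], assemble(program[i]));
    return 0;
}
```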

Many advanced assemblers offer additional mechanisms to facilitate program development, control the assembly process, and aid debugging. In particular, most modern assemblers include a macro facility (described below), and are called macro assemblers.



Low-level programming language:
In computer science, a low-level programming language is a programming language that provides little or no abstraction from a computer's instruction set architecture. Generally this refers to either machine code or assembly language. The word "low" refers to the small or nonexistent amount of abstraction between the language and machine language; because of this, low-level languages are sometimes described as being "close to the hardware."

Low-level languages can be converted to machine code without using a compiler or interpreter, and the resulting code runs directly on the processor. A program written in a low-level language can be made to run very fast and with a very small memory footprint; an equivalent program in a high-level language is typically heavier, running slower and using more memory. Low-level languages are simple, but are considered difficult to use because of the many technical details the programmer must remember.

By comparison, a high-level programming language isolates the execution semantics of a computer architecture from the specification of the program, which simplifies development.

Low-level programming languages are sometimes divided into two categories: first generation, and second generation.

http://en.wikipedia.org/wiki/Low-level_programming_language

High Level Language :
A high-level programming language is a programming language with strong abstraction from the details of the computer. In comparison to low-level programming languages, it may use natural language elements, be easier to use, or automate (or even hide) significant areas of computing systems, making the process of developing a program simpler and more understandable than with a low-level language. The amount of abstraction provided defines how "high-level" a programming language is.[1]

The first high-level programming language to be designed for a computer was Plankalkül, created by Konrad Zuse. However, it was not implemented in his time, and his original contributions were isolated from other developments.


Loader (equipment):
A loader is a heavy equipment machine often used in construction, primarily used to load material (such as asphalt, demolition debris, dirt, snow, feed, gravel, logs, raw minerals, recycled material, rock, sand, and wood chips) into or onto another type of machinery (such as a dump truck, conveyor belt, feed-hopper, or rail car).

Assembler (means one that assembles) may refer to:
Assembler (computer programming), for an assembly language: a computer program that translates between lower-level representations of computer programs. An assembler converts basic computer instructions into a pattern of bits that the computer can understand and the processor can use to perform its basic operations.
Assembler (bioinformatics), a program to perform genome assembly
Assembler (nanotechnology), a conjectured construction machine that would manipulate and build with individual atoms or molecules
A stage name of avant-garde electronic musician Nobukazu Takemura
The Assembler species, a fictional alien race in Star Wars



----------------------------------------------------------------------------------------------------------
OS LEVEL:
Operating system-level virtualization is a server virtualization method where the kernel of an operating system allows for multiple isolated user-space instances, instead of just one. Such instances (often called containers, VEs, VPSs or jails) may look and feel like a real server, from the point of view of its owner. On Unix systems, this technology can be thought of as an advanced implementation of the standard chroot mechanism. In addition to isolation mechanisms, the kernel often provides resource management features to limit the impact of one container's activities on the other containers.
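On a Unix system the underlying chroot mechanism itself is a single system call. The sketch below shows just that bare mechanism, which container systems build upon with namespaces and resource limits; it assumes root privileges and a prepared directory /srv/jail (a hypothetical path).

```c
/* Minimal sketch of the Unix chroot mechanism that container-style
   isolation builds upon. Assumes root privileges and that /srv/jail
   exists and contains whatever the child process needs. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (chroot("/srv/jail") != 0) {   /* make /srv/jail the new root */
        perror("chroot");
        return 1;
    }
    if (chdir("/") != 0) {            /* move inside the new root */
        perror("chdir");
        return 1;
    }
    /* From here on, the process (and its children) cannot reach
       files outside /srv/jail by ordinary path lookups. */
    execl("/bin/sh", "sh", (char *)NULL);
    perror("execl");
    return 1;
}
```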

-----------------------------------------------------------
Race:
May refer to:
~Classifications
~Surnames
~Other uses
-------------------------------------------------------------
Windows:
Microsoft Windows is a series of operating systems produced by Microsoft.

Microsoft introduced an operating environment named Windows on November 20, 1985 as an add-on to MS-DOS in response to the growing interest in graphical user interfaces (GUIs).[2] Microsoft Windows came to dominate the world's personal computer market, overtaking Mac OS, which had been introduced in 1984. As of August 2011, Windows held approximately 82.58% of the client operating system market share according to usage-share measurements.

The most recent client version of Windows is Windows 7; the most recent server version is Windows Server 2008 R2; the most recent mobile version is Windows Phone 7.

--------------------------------------
Operating System 

An operating system (OS) is a set of programs that manage computer hardware resources and provide common services for application software. The operating system is the most important type of system software in a computer system. A user cannot run an application program on the computer without an operating system, unless the application program is self-booting.

Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting for cost allocation of processor time, mass storage, printing, and other resources.

For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between application programs and the computer hardware,[1][2] although the application code is usually executed directly by the hardware and will frequently call the OS or be interrupted by it. Operating systems are found on almost any device that contains a computer—from cellular phones and video game consoles to supercomputers and web servers.
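As a small illustration of that boundary on a POSIX system, the program below does its work directly on the hardware but asks the kernel, through the write system call, to perform the actual output.

```c
/* The application runs directly on the hardware, but for I/O it
   calls into the operating system: write() is a thin wrapper
   around the kernel's write system call (POSIX). */
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello from user space\n";
    /* File descriptor 1 is standard output; the kernel does the
       device work and returns the number of bytes written. */
    write(1, msg, sizeof msg - 1);
    return 0;
}
```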






ISA LEVEL:

An instruction set architecture (ISA) is the part of the computer architecture related to programming, including the native data types, instructions, registers, addressing modes, memory architecture, interrupt and exception handling, and external I/O. An ISA includes a specification of the set of opcodes (machine language) and the native commands implemented by a particular processor.

Instruction set architecture is distinguished from the microarchitecture, which is the set of processor design techniques used to implement the instruction set. Computers with different microarchitectures can share a common instruction set. For example, the Intel Pentium and the AMD Athlon implement nearly identical versions of the x86 instruction set, but have radically different internal designs.

Some virtual machines that support bytecode for Smalltalk, the Java virtual machine, and Microsoft's Common Language Runtime virtual machine as their ISA implement it by translating the bytecode for commonly used code paths into native machine code, and executing less-frequently-used code paths by interpretation; Transmeta implemented the x86 instruction set atop VLIW processors in the same fashion.


Pentium 4:
Pentium 4 was a line of single-core desktop and laptop central processing units (CPUs), introduced by Intel on November 20, 2000[1] and shipped through August 8, 2008.[2] They had a 7th-generation x86 microarchitecture, called NetBurst, which was the company's first all-new design since the introduction of the P6 microarchitecture of the Pentium Pro CPUs in 1995. NetBurst differed from P6 (Pentium III, II, etc.) by featuring a very deep instruction pipeline to achieve very high clock speeds[3] (up to 3.8 GHz), limited only by TDPs reaching up to 115 W in 3.4 GHz to 3.8 GHz Prescott and Prescott 2M cores.[4] In 2004, the initial 32-bit x86 instruction set of the Pentium 4 microprocessors was extended by the 64-bit x86-64 set. The performance difference between a Pentium III at 1.13 GHz and a Pentium 4 at 1.3 GHz was hardly noticeable, so the Pentium 4 clock frequency needed to be approximately 1.15 times that of a Pentium III to achieve the same performance.[5]

The first Pentium 4 cores, codenamed Willamette, were clocked from 1.3 GHz to 2 GHz. They were released on November 20, 2000, using the Socket 423 system. Notable with the introduction of the Pentium 4 was the 400 MT/s FSB. It actually operated at 100 MHz but the FSB was quad-pumped, meaning that the maximum transfer rate was four times the base clock of the bus, so it was marketed to run at 400 MHz. The AMD Athlon's double-pumped FSB was running at 200 MT/s or 266 MT/s at that time.
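As a quick worked check of those numbers, the small calculation below multiplies the base clock by the four transfers per cycle, and then by the Pentium 4's standard 64-bit (8-byte) front-side bus width to get the peak bandwidth figure.

```c
/* Worked example of the quad-pumped front-side bus arithmetic.
   100 MHz base clock and 64-bit bus width are the nominal figures
   for the original Pentium 4 platform. */
#include <stdio.h>

int main(void)
{
    double base_clock_mhz   = 100.0;  /* physical bus clock  */
    double transfers_per_ck = 4.0;    /* "quad-pumped"       */
    double bus_width_bytes  = 8.0;    /* 64-bit data bus     */

    double mt_per_s = base_clock_mhz * transfers_per_ck;   /* 400 MT/s  */
    double mb_per_s = mt_per_s * bus_width_bytes;          /* 3200 MB/s */

    printf("%.0f MT/s, %.1f GB/s peak\n", mt_per_s, mb_per_s / 1000.0);
    return 0;
}
```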

Pentium 4 CPUs introduced the SSE2 and, in the Prescott-based Pentium 4s, SSE3 instruction sets to accelerate calculations, transactions, media processing, 3D graphics, and games. Later versions featured Hyper-Threading Technology (HTT), a feature to make one physical CPU work as two logical CPUs. Intel also marketed a version of their low-end Celeron processors based on the NetBurst microarchitecture (often referred to as Celeron 4), and a high-end derivative, Xeon, intended for multiprocessor servers and workstations. In 2005, the Pentium 4 was complemented by the Pentium D and Pentium Extreme Edition dual-core CPUs.








EMBEDDED COMPUTER:
Embedded computers can be compared to "computers on a chip": all in one, so to speak. You will find them in all kinds of appliances that surround us: washing machines, ticket machines in the subway, cameras, cars, motors, sewing machines, clocks. They appear everywhere something needs to be regulated, controlled, or checked.

Supercomputers
Mainframes
Mini computers
Microcomputers
Terminals
Embedded computers



Embedded computers rank at the bottom of the computing spectrum, but that does not mean they are less important; on the contrary. Historically, embedded computing is associated with self-contained, pre-programmed computing, meaning there are mostly no connections outside the environment in which that particular computing physically takes place and which influences the working of the embedded computing device, except of course for the part of that (self-contained) environment it is meant to monitor or control. It should be clear that these devices are mostly used for a single, dedicated task.




Trap:
Trapping is a term most commonly used in the prepress industry to describe the compensation for misregistration between printing units on a multicolor press. This misregistration causes unsightly gaps or white-space on the final printed work. Trapping involves creating overlaps (spreads) or underlaps (chokes) of objects during the print production process to eliminate misregistration on the press.[1]
Background

Misregistration in the graphical workflow may be caused by a number of reasons:
inaccuracies in the image setter
instability of the image carrier, e.g. stretch in film or plate
inaccuracy in the film to plate or film to film copying steps
instability of the press
instability of the final media
human error

These inaccuracies are inherent to the graphical production process and although they can be minimized they will never completely disappear - any mechanical process will always show some margin of error. The small gaps showing up as a result can however be hidden by creating overlaps between two adjacent colors.
Trapping methods

One approach to trapping is to change the submitted artwork. In general, all digital files produced using any current professional software have some level of trapping provided already, via application default values. Additional trapping may also be necessary, but all traps should be as unobtrusive as possible.

Traps can be applied at several stages in the digital workflow, using one of two trapping technologies: vector-based and raster-based. The right choice will depend on the type of products (packaging applications including flexo-printing have other requirements than commercial printing on offset systems) and the degree of interactivity or automation that is wanted.

In-RIP trapping moves the trapping to the RIP so that it is done at the last moment. The process is automatic, though it is possible to set up zones to allow different automatic rules for different areas, or to disable trapping for areas previously manually trapped.
Trapping decision making

Certain basic rules have to be observed.

First, the decision should be made whether a trap is needed between two specific inks; in other words, when these two abutting colors are printed, is there a risk of gaps showing up if misregistration happens?

In case the two colors in question are spot colors, trapping is always needed: from the moment the artwork is imaged on film or plate, they are handled separately and ultimately will be printed on two different printing units. The same applies if one of the colors is a spot, the other a process color.

The decision becomes a bit more tricky if the two colors are process colors and will each be printed as a combination of the basic printing colors Cyan, Magenta, Yellow and Black. In this case the decision whether to trap or not will be defined by the amount of ‘common’ color.

Another factor that will influence the visibility of the traps is the direction of the trap. The decision which color should be spread or choked is usually decided upon the relative luminance of the colors in question. The ‘lighter’ color should always be spread into the darker. Again this reflects the way the human eye perceives color: since the darker colors define the shapes we see, distortion of the lighter color will result in less visible distortion overall. The ‘lightness’ or ‘darkness’ of a color is usually defined as its ‘neutral density’.
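Expressed as code, the direction rule is a one-line comparison; the neutral-density inputs in the sketch below are illustrative placeholders (larger values meaning darker).

```c
/* Sketch of the basic trap-direction rule: the lighter color
   (lower neutral density) is spread into the darker one.
   Neutral-density values here are illustrative only. */
#include <stdio.h>

static const char *spread_direction(double nd_a, double nd_b)
{
    return (nd_a < nd_b) ? "spread color A into color B"
                         : "spread color B into color A";
}

int main(void)
{
    double nd_yellowish = 0.1, nd_cyanish = 0.6;   /* illustrative values */
    printf("%s\n", spread_direction(nd_yellowish, nd_cyanish));
    return 0;
}
```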

A major exception to this rule applies when opaque spot colors are used. Other colors, regardless of their relative luminance, should always be trapped to (spread under) these spot colors. If several of these spot colors are used (a common practice in the packaging market), it is not the luminance of the color but the order of printing that is the decisive element: the first color to be printed should always spread under the next color.

(Figure: example use of a trap.)

The thinner the traps created, the less visible they will be. Therefore the trap width should be set to the strict minimum dictated by the maximum amount of misregistration, or error margin, of the whole production workflow up to the printing press. Since the printing technology and the quality of the paper are the most important causes of misregistration, it is possible to come up with some rules of thumb. For quality offset printing, for example, it is generally accepted that the trapping width should be between 1/2 and 1 print dot. When printing at 150 lpi the traps should be between 1/300 and 1/150 inch (0.24 pt and 0.48 pt, 0.08 mm and 0.16 mm). These values are usually multiplied by a factor of 1.5 or 2 whenever one of the colors is black.

First of all, such a trap will not be visible, since the lighter color is spread underneath the (almost) opaque black. For the same reason, in many cases black ink will be set to "overprint" colors in the background, eliminating the more complex process of spreading or choking. Since black is a very dark color, white gaps caused by misregistration are all the more visible. On top of that, in wet-in-wet offset printing black is the first color to be laid down on paper, causing relatively more distortion of the paper and thus a higher risk of showing misregistration.
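The rule of thumb above translates directly into a small calculation; the sketch below assumes the 1/2-to-1 dot range and the 1.5x black factor described in the text, and reports the width in points.

```c
/* Rule-of-thumb trap width: between 1/2 and 1 printer dot at the
   given screen ruling (lpi), widened by a factor when black is
   involved. Values are approximate guidance only. */
#include <stdio.h>

static void trap_width(double lpi, double black_factor,
                       double *min_pt, double *max_pt)
{
    double dot_in = 1.0 / lpi;                    /* one dot, in inches */
    *min_pt = 0.5 * dot_in * 72.0 * black_factor; /* half a dot, in pt  */
    *max_pt = 1.0 * dot_in * 72.0 * black_factor; /* one dot, in pt     */
}

int main(void)
{
    double lo, hi;
    trap_width(150.0, 1.0, &lo, &hi);
    printf("150 lpi: %.2f pt to %.2f pt\n", lo, hi);        /* 0.24 to 0.48 */
    trap_width(150.0, 1.5, &lo, &hi);
    printf("150 lpi, black: %.2f pt to %.2f pt\n", lo, hi);
    return 0;
}
```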

Whenever a trap between two colors is created, this trap will contain the sum of the two colors in question whenever at least one of them is a spot color. In case the two colors are process colors, the trap will contain the highest value of each of the CMYK components. This trap color is always darker than the darker of the two abutting colors. In some cases, more specifically when the two colors are light pastel-like colors, this might result in a trap that is perceived as too visible. In this case it might be desirable to reduce the amount of color in the trap. This should however be limited: the trap should never be lighter than the darkest color, since this would have the same effect as misregistration: a light colored 'gap' between the two colors. Trap color reduction is also not recommended when solid spot colors are used, as reduction would cause the spot color in the trap to be printed not as a solid but as a screened tint.

Trapping towards a rich black (a black with a support screen of another color added to it to give it a ‘deeper’ look and making it more opaque - often called "undercolor" ), will follow the same rules as trapping to a ‘normal’ black. However, a stay-away should be created for the supporting color. This will prevent misregistration from revealing the undercolor at the edges of the rich black object. In short, a stay-away pushes the undercolor away from the edge of the rich black, and is usually created with a single color black stroke, set to "knock-out".

Blends or 'vignettes' often offer special challenges to trapping. The lighter part of a blend needs to spread into the background, the darker part needs to be choked. If a trap over the full length of the blend is needed, this would result in a very visible 'staircase'. The solution here is the creation of a sliding trap: a trap that should not only gradually change color but also position. The trap can be created so that it 'slides' all the way, but this is not often the desired effect either, since it might distort the original artwork too much. Often the 'sliding' factor is set to a point where the neutral densities of blend and background reach a certain difference.



Mouse:
In computing, a mouse is a pointing device that functions by detecting two-dimensional motion relative to its supporting surface. Physically, a mouse consists of an object held under one of the user's hands, with one or more buttons. It sometimes features other elements, such as "wheels", which allow the user to perform various system-dependent operations, or extra buttons or features that can add more control or dimensional input. The mouse's motion typically translates into the motion of a cursor on a display, which allows for fine control of a graphical user interface.

Regarding the first illustration, although [push]buttons have traditionally been relatively compact, usually round or square, the whole front half of this mouse consists principally of two spring-loaded regions with a narrow slit between. Pressing down on either of these regions operates its corresponding switch; they are very-wide buttons. Between them, the convex surface is the edge of a wheel on an axle that extends to the left and right. Rotating this wheel typically scrolls the image on the screen, but can do other tasks. This wheel's axis, spring-loaded, can move downward to operate a switch, thus functioning as a third button. The mouse is operated with the cord facing away from the operator. While this mouse has a cord, cordless mice have become popular.



Microarchitecture:





(Figure: Intel Core microarchitecture.)

In computer engineering, microarchitecture (sometimes abbreviated to µarch or uarch), also called computer organization, is the way a given instruction set architecture (ISA) is implemented on a processor. A given ISA may be implemented with different microarchitectures.[1] Implementations might vary due to different goals of a given design or due to shifts in technology.[2] Computer architecture is the combination of microarchitecture and instruction set design.




ADDER:
An adder, or summer, is a digital circuit that performs addition of numbers. In many computers and other kinds of processors, adders are used not only in the arithmetic logic unit(s) but also in other parts of the processor, where they are used to calculate addresses, table indices, and the like.

Although adders can be constructed for many numerical representations, such as binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases where two's complement or ones' complement is being used to represent negative numbers, it is trivial to modify an adder into an adder–subtractor. Other signed number representations require a more complex adder.
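Since the most common adders operate on binary numbers, a one-bit full adder and a ripple-carry chain are easy to sketch in C. This is a behavioural sketch of the textbook construction, not any particular hardware design.

```c
/* One-bit full adder built from bitwise logic, and a ripple-carry
   adder that chains it across the bits of an unsigned integer. */
#include <stdio.h>

/* sum = a XOR b XOR cin; carry-out = majority(a, b, cin) */
static void full_adder(unsigned a, unsigned b, unsigned cin,
                       unsigned *sum, unsigned *cout)
{
    *sum  = a ^ b ^ cin;
    *cout = (a & b) | (a & cin) | (b & cin);
}

static unsigned ripple_add(unsigned x, unsigned y)
{
    unsigned result = 0, carry = 0;
    for (unsigned i = 0; i < 8 * sizeof(unsigned); i++) {
        unsigned s, c;
        full_adder((x >> i) & 1u, (y >> i) & 1u, carry, &s, &c);
        result |= s << i;
        carry = c;
    }
    return result;              /* final carry out is discarded */
}

int main(void)
{
    printf("13 + 29 = %u\n", ripple_add(13, 29));   /* prints 42 */
    return 0;
}
```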





INTERPRETER:


In computer science, an interpreter normally means a computer program that executes, i.e. performs, instructions written in a programming language. An interpreter may be a program that either
executes the source code directly
translates source code into some efficient intermediate representation (code) and immediately executes this
explicitly executes stored precompiled code[1] made by a compiler which is part of the interpreter system

Early versions of the Lisp programming language and Dartmouth BASIC would be examples of type 1. Perl, Python, MATLAB, and Ruby are examples of type 2, while UCSD Pascal is type 3: Source programs are compiled ahead of time and stored as machine independent code, which is then linked at run-time and executed by an interpreter and/or compiler (for JIT systems). Some systems, such as Smalltalk, contemporary versions of BASIC, Java and others, may also combine 2 and 3.
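As a deliberately tiny illustration of the second style, here is a sketch in C of an interpreter for an invented intermediate code: a stack machine with push, add, multiply, and print instructions. The instruction set is made up for the example.

```c
/* Tiny stack-machine interpreter for an invented intermediate code:
   PUSH n, ADD, MUL, PRINT, HALT. Illustrates dispatch on opcodes. */
#include <stdio.h>

enum { PUSH, ADD, MUL, PRINT, HALT };

static void run(const int *code)
{
    int stack[64];
    int sp = 0;                       /* stack pointer */
    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case PUSH:  stack[sp++] = code[pc++];            break;
        case ADD:   sp--; stack[sp-1] += stack[sp];      break;
        case MUL:   sp--; stack[sp-1] *= stack[sp];      break;
        case PRINT: printf("%d\n", stack[sp-1]);         break;
        case HALT:  return;
        }
    }
}

int main(void)
{
    /* Computes (2 + 3) * 7 and prints 35. */
    int program[] = { PUSH, 2, PUSH, 3, ADD, PUSH, 7, MUL, PRINT, HALT };
    run(program);
    return 0;
}
```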

While interpreting and compiling are the two main means by which programming languages are implemented, these are not fully mutually exclusive categories, one of the reasons being that most interpreting systems also perform some translation work, just like compilers. The terms "interpreted language" or "compiled language" merely mean that the canonical implementation of that language is an interpreter or a compiler; a high level language is basically an abstraction which is (ideally) independent of particular implementations.





=============
Digital Logic system

Digital electronics represent signals by discrete bands of analog levels, rather than by a continuous range. All levels within a band represent the same signal state. Relatively small changes to the analog signal levels due to manufacturing tolerance, signal attenuation or parasitic noise do not leave the discrete envelope, and as a result are ignored by signal state sensing circuitry.

In most cases the number of these states is two, and they are represented by two voltage bands: one near a reference value (typically termed as "ground" or zero volts) and a value near the supply voltage, corresponding to the "false" ("0") and "true" ("1") values of the Boolean domain respectively.

Digital techniques are useful because it is easier to get an electronic device to switch into one of a number of known states than to accurately reproduce a continuous range of values.

Digital electronic circuits are usually made from large assemblies of logic gates, simple electronic representations of Boolean logic functions.
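As a software analogy of such assemblies, the sketch below models a few gates as Boolean functions in C, composing NOT, AND, OR, and XOR from NAND alone (NAND being a universal gate). Inputs and outputs are restricted to the values 0 and 1.

```c
/* Logic gates modelled as Boolean functions on 0/1 values.
   NOT, AND, OR and XOR are composed from NAND alone, mirroring how
   digital circuits can be built from a single universal gate. */
#include <stdio.h>

static int nand(int a, int b) { return !(a && b); }

static int not_(int a)        { return nand(a, a); }
static int and_(int a, int b) { return not_(nand(a, b)); }
static int or_(int a, int b)  { return nand(not_(a), not_(b)); }
static int xor_(int a, int b) { int n = nand(a, b);
                                return nand(nand(a, n), nand(b, n)); }

int main(void)
{
    printf("a b | AND OR XOR\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d %d |  %d   %d   %d\n",
                   a, b, and_(a, b), or_(a, b), xor_(a, b));
    return 0;
}
```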




1 Hz clock:
The hertz is equivalent to cycles per second.[2] In defining the second the CIPM declared that "the standard to be employed is the transition between the hyperfine levels F = 4, M = 0 and F = 3, M = 0 of the ground state 2S1/2 of the caesium 133 atom, unperturbed by external fields, and that the frequency of this transition is assigned the value 9 192 631 770 hertz"[3] thereby effectively defining the hertz and the second simultaneously.

In English, hertz is used as a plural.[4] As an SI unit, Hz can be prefixed; commonly used multiples are kHz (kilohertz, 10^3 Hz), MHz (megahertz, 10^6 Hz), GHz (gigahertz, 10^9 Hz) and THz (terahertz, 10^12 Hz). One hertz simply means "one cycle per second" (typically that which is being counted is a complete cycle); 100 Hz means "one hundred cycles per second", and so on. The unit may be applied to any periodic event; for example, a clock might be said to tick at 1 Hz, or a human heart might be said to beat at 1.2 Hz. The "frequency" (activity) of aperiodic or stochastic events, such as radioactive decay, is expressed in becquerels.

Hertz (unit summary):
Unit system: SI derived unit
Unit of: frequency
Symbol: Hz
Named after: Heinrich Hertz
In SI base units: 1 Hz = 1/s


Even though angular velocity, angular frequency and hertz all have the dimensions of 1/s, angular velocity and angular frequency are not expressed in hertz,[5] but rather in an appropriate angular unit such as radians per second. Thus a disc rotating at 60 revolutions per minute (rpm) is said to be rotating at either 2π rad/s or 1 Hz, where the former measures the angular velocity and the latter reflects the number of complete revolutions per second. The conversion between a frequency f measured in hertz and an angular velocity ω measured in radians per second is ω = 2πf and f = ω/(2π).

This SI unit is named after Heinrich Hertz. As with every SI unit whose name is derived from the proper name of a person, the first letter of its symbol is upper case (Hz). When an SI unit is spelled out in English, it should always begin with a lower case letter (hertz), except where any word would be capitalized, such as at the beginning of a sentence or in capitalized material such as a title. Note that "degree Celsius" conforms to this rule because the "d" is lowercase. (Based on The International System of Units.)


Flip Flop:

In electronics, a flip-flop or latch is a circuit that has two stable states and can be used to store state information. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. It is the basic storage element in sequential logic. Flip-flops and latches are a fundamental building block of digital electronics systems used in computers, communications, and many other types of systems.

Flip-flops and latches are used as data storage elements. Such data storage can be used for storage of state, and such a circuit is described as sequential logic. When used in a finite-state machine, the output and next state depend not only on its current input, but also on its current state (and hence, previous inputs). It can also be used for counting of pulses, and for synchronizing variably-timed input signals to some reference timing signal.

Flip-flops can be either simple (transparent or opaque) or clocked (synchronous or edge-triggered); the simple ones are commonly called latches.[1] The word latch is mainly used for storage elements, while clocked devices are described as flip-flops.[2]
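As a rough behavioural sketch (software simulation only, not a gate-level design), the difference between a transparent latch and an edge-triggered flip-flop can be modelled in a few lines of C.

```c
/* Behavioural model of a level-sensitive D latch and a
   positive-edge-triggered D flip-flop. */
#include <stdio.h>

struct d_latch    { int q; };
struct d_flipflop { int q; int prev_clk; };

/* Transparent latch: while the enable input is high, Q follows D. */
static void latch_update(struct d_latch *l, int enable, int d)
{
    if (enable)
        l->q = d;
}

/* Edge-triggered flip-flop: Q samples D only on a 0 -> 1 clock edge. */
static void ff_update(struct d_flipflop *f, int clk, int d)
{
    if (clk && !f->prev_clk)
        f->q = d;
    f->prev_clk = clk;
}

int main(void)
{
    struct d_latch l = { 0 };
    latch_update(&l, 1, 1);          /* enable high: latch is transparent */
    printf("latch: q=%d\n", l.q);    /* prints 1 */

    struct d_flipflop f = { 0, 0 };
    int clk[] = { 0, 1, 1, 0, 1 };
    int d[]   = { 1, 1, 0, 0, 0 };
    for (int i = 0; i < 5; i++) {
        ff_update(&f, clk[i], d[i]);
        printf("clk=%d d=%d -> q=%d\n", clk[i], d[i], f.q);
    }
    return 0;
}
```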





Dynamic Ram:

Dynamic random-access memory (DRAM) is a type of random-access memory that stores each bit of data in a separate capacitor within an integrated circuit. The capacitor can be either charged or discharged; these two states are taken to represent the two values of a bit, conventionally called 0 and 1. Since capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory as opposed to SRAM and other static memory.

The main memory (the "RAM") in personal computers is dynamic RAM (DRAM). It is the RAM in laptop, notebook and workstation computers, as well as some of the RAM of home game consoles (PlayStation 3, Xbox 360 and Wii).

The advantage of DRAM is its structural simplicity: only one transistor and a capacitor are required per bit, compared to six transistors in SRAM. This allows DRAM to reach very high densities. Unlike flash memory, DRAM is volatile memory (cf. non-volatile memory), since it loses its data quickly when power is removed. The transistors and capacitors used are extremely small; billions can fit on a single memory chip.


STATIC RAM

Static random-access memory (SRAM) is a type of semiconductor memory where the word static indicates that, unlike dynamic RAM (DRAM), it does not need to be periodically refreshed, as SRAM uses bistable latching circuitry to store each bit. SRAM exhibits data remanence,[1] but is still volatile in the conventional sense that data is eventually lost when the memory is not powered.

BLUEGENE:

Blue Gene is a computer architecture project to produce several supercomputers, designed to reach operating speeds in the PFLOPS (petaFLOPS) range, and currently reaching sustained speeds of nearly 500 TFLOPS (teraFLOPS). It is a cooperative project among IBM (particularly IBM Rochester and the Thomas J. Watson Research Center), the Lawrence Livermore National Laboratory, the United States Department of Energy (which is partially funding the project), and academia. There are four Blue Gene projects in development: Blue Gene/L, Blue Gene/C, Blue Gene/P, and Blue Gene/Q.

The project was awarded the National Medal of Technology and Innovation.


PIPELINE:

Pipeline transport is the transportation of goods through a pipe. Most commonly, liquids and gases are sent, but pneumatic tubes that transport solid capsules using compressed air are also used.

As for gases and liquids, any chemically stable substance can be sent through a pipeline. Therefore sewage, slurry, water, or even beer pipelines exist; but arguably the most valuable are those transporting fuels: oil (oleoduct), natural gas (gas grid), and biofuels.




MEDIA PROCESSOR:

A media processor is a microprocessor-based system-on-a-chip which is designed to deal with digital streaming data in real-time (e.g. display refresh) rates. These devices can also be considered a class of digital signal processors (DSPs).

Unlike similar graphics processing units (GPUs), which are used for computer displays, media processors are targeted at digital televisions and set-top boxes.

The streaming digital media classes handled include compressed and uncompressed digital video and audio.


CACHE

In computer engineering, a cache ( /ˈkæʃ/ kash[1]) is a component that transparently stores data so that future requests for that data can be served faster. The data that is stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere. If requested data is contained in the cache (cache hit), this request can be served by simply reading the cache, which is comparatively faster. Otherwise (cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slower. Hence, the greater the number of requests that can be served from the cache, the faster the overall system performance becomes.
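A toy sketch in C of the hit/miss bookkeeping for a direct-mapped cache follows; the parameters (16 lines of 64-byte blocks) are hypothetical, and a real cache would also store the data itself and handle associativity and write policies.

```c
/* Toy direct-mapped cache: 16 lines of 64-byte blocks, tracking only
   hits and misses. Parameters are hypothetical. */
#include <stdint.h>
#include <stdio.h>

#define NUM_LINES   16
#define BLOCK_BYTES 64

struct line { int valid; uint32_t tag; };
static struct line cache[NUM_LINES];

/* Returns 1 on a hit; on a miss, fills the line and returns 0. */
static int cache_access(uint32_t addr)
{
    uint32_t block = addr / BLOCK_BYTES;
    uint32_t index = block % NUM_LINES;
    uint32_t tag   = block / NUM_LINES;

    if (cache[index].valid && cache[index].tag == tag)
        return 1;                       /* cache hit */
    cache[index].valid = 1;             /* cache miss: fetch and fill */
    cache[index].tag   = tag;
    return 0;
}

int main(void)
{
    uint32_t addrs[] = { 0x1000, 0x1004, 0x1040, 0x1000, 0x5000, 0x1000 };
    for (int i = 0; i < 6; i++)
        printf("0x%04X -> %s\n", (unsigned)addrs[i],
               cache_access(addrs[i]) ? "hit" : "miss");
    return 0;
}
```

The last access misses even though the address was seen before: the access to 0x5000 maps to the same line and evicts it, a conflict miss.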

To be cost efficient and to enable an efficient use of data, caches are relatively small. Nevertheless, caches have proven themselves in many areas of computing because access patterns in typical computer applications have locality of reference. References exhibit temporal locality if data is requested again that has been recently requested already. References exhibit spatial locality if data is requested that is physically stored close to data that has been requested already.